Any reason to use an index for very small tables (up to 1000 rows)?

Tags: index, index-tuning

During application development I have many tables that store a "small" amount of data (usually 10-40 rows: id + value, and sometimes a type) holding attributes for "objects", such as fresh/rotten or red/green/blue for products.

I don't put these attributes in the product table itself because not every attribute applies to every product (electronic components can't be fresh, and oxygen gas can't be red), and a table can't have an unlimited number of columns…

For storing attributes I use small custom tables with 2-3 fields: an id for linking, a name to show in the application, and sometimes a type when attributes are grouped in the same category.

Primary "objects" is linked to attributes through intermediate many-to-many tables.

Is there any reason to create and maintain indexes on those small "dictionary" tables with fewer than 1000 rows (usually 10-40)?

My target database is Oracle, but I hope the answer is vendor-independent…

I feel the answer is "no", but I don't have the technical knowledge to justify that feeling…

Best Answer

Generally, yes. In the absence of an index, the access pattern has to inspect every row just to see whether it is the one you need. The issue is not table size but concurrency: depending on your isolation level, a scan may block behind an uncommitted transaction, waiting for rows that are ultimately 'uninteresting' to be unlocked. Because the scan is guaranteed to 'look' at every row, every scan will block behind any write (insert, delete, or update). Oracle defaults to snapshot isolation, which is fine in this situation (no blocking), but other vendors default to something else; SQL Server, for example, defaults to read committed, which does block.
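
A sketch of that blocking scenario, using SQL Server syntax and the hypothetical `attribute_value` table from the question (this assumes the default locking read committed, i.e. READ_COMMITTED_SNAPSHOT is off):

```sql
-- Session 1: update a row but leave the transaction open.
BEGIN TRANSACTION;
UPDATE attribute_value SET name = 'stale' WHERE id = 7;
-- (not yet committed; row 7 stays exclusively locked)

-- Session 2: with no index, this query must scan every row,
-- including the one locked by session 1, so it blocks even
-- though id = 42 is a completely different row.
SELECT name FROM attribute_value WHERE id = 42;

-- Session 1: commit releases the lock; only now does
-- session 2's scan get past row 7 and return.
COMMIT;
```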

With an index present, the access pattern looks only at the relevant rows (those within the desired key range), so the statistical chance of hitting a lock conflict is greatly reduced.
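
So even for a tiny dictionary table, an index on the lookup key is cheap insurance. A sketch, again using the hypothetical names from the question (in Oracle, declaring the primary key creates a unique index automatically):

```sql
-- The primary key gives id a unique index, so point lookups
-- touch only the requested row instead of scanning the table.
ALTER TABLE attribute_value
    ADD CONSTRAINT attribute_value_pk PRIMARY KEY (id);

-- Now an index (unique) lookup rather than a full table scan:
SELECT name FROM attribute_value WHERE id = 42;
```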