I have stumbled upon several cases of a combined index covering all columns of a table. My intuition says this would be no more efficient than the full table scan that would occur without the index. Is this correct, or is there some benefit to such an index? I have seen this in Active Record on top of PostgreSQL, so that setup is my primary concern, but I also wonder if and how this differs between databases.
PostgreSQL – Any point with a combined index on all columns in a table?
Tags: index, postgresql
Best Answer
An index is a sorted structure: if you need only a sufficiently small portion of the data in the table, it can be fetched from the index more efficiently than by scanning the whole table.
This needs a few prerequisites, though:

- the query must filter on a leading column (or columns) of the index, so that the sorted structure can actually narrow the search instead of being read end to end;
- for an index-only scan, every column the query references must be present in the index;
- the planner must estimate that few enough rows match, otherwise a sequential scan of the table remains cheaper.
Also worth noting that even index-only scans must visit the underlying table (to be precise, its visibility map, and in some cases the heap itself) to figure out whether the rows to be returned are still visible to the current transaction. This also has to be considered when determining what 'sufficiently small' means above.
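The leading-column point can be demonstrated concretely. The sketch below uses SQLite (via Python's `sqlite3`) purely as a convenient, runnable stand-in — the table `t`, the index `t_all`, and the queries are all made up for illustration — but the same covering-index principle applies to PostgreSQL's index-only scans: an all-column index helps a query on the leading column, while a query on a trailing column degenerates into a scan of the whole index.

```python
import sqlite3

# Hypothetical example table and index, SQLite standing in for PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (a INTEGER, b INTEGER, c INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?, ?)",
                 [(i, i * 2, i * 3) for i in range(1000)])
# One combined index over every column of the table.
conn.execute("CREATE INDEX t_all ON t (a, b, c)")

# Filtering on the leading column: the sorted index can be searched directly,
# and since it contains all referenced columns, the table itself is not read.
plan_search = conn.execute(
    "EXPLAIN QUERY PLAN SELECT b FROM t WHERE a = 42").fetchall()
print(plan_search[0][3])   # e.g. SEARCH t USING COVERING INDEX t_all (a=?)

# Filtering on a non-leading column: the index still "covers" the query,
# but it has to be read end to end -- a scan, not a search.
plan_scan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT b FROM t WHERE c = 42").fetchall()
print(plan_scan[0][3])     # e.g. SCAN t USING COVERING INDEX t_all
```

In PostgreSQL you would check the same thing with `EXPLAIN`, looking for `Index Only Scan` versus `Seq Scan` in the plan.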