I'm looking to create an index on a large table (~50 million rows) on a field with lots of non-unique values.
Table schema looks like:
Column | Type | Modifiers | Storage | Stats target | Description
--------+-----------------------+-----------+----------+--------------+-------------
gid | character varying(20) | | extended | |
word | character varying(30) | | extended | |
stat | double precision | | plain | |
Has OIDs: no
I want to create an index on the 'word' column. There is a fairly regular pattern where each word appears about 1000 times. I need to do fast SELECT * FROM mytable WHERE word='something';
queries. Creating a regular B-Tree index on these tables takes a ton of time but does substantially improve performance.
I'm uncomfortable with my solution right now for several reasons:
(1) The selection of a B-Tree index isn't particularly motivated. Are there alternative indexing schemes that perform better on fields with highly duplicated values?
(2) I'm in a production environment where these tables pop into and out of existence fairly regularly. Because not all tables will always be heavily queried, I've opted to only build indexes on a table when certain (outside the DB) applications are triggered, such that I know 10k+ queries will be performed on the table+field. Waiting 20 minutes while an index is being created isn't ideal, however. The situation is delicate; the optimization gained by creating the index competes with the initial time-sink required to create it. Are there 'cheaper' indexes to create, perhaps ones that perform slightly worse overall than a B-Tree but have a lower initial creation cost?
Best Answer
For starters, gid should probably be a numeric type: integer should be good enough, or bigint if the key space of integer isn't big enough. Numeric types have a much smaller footprint and faster processing than character data, and make for faster, smaller indexes.
More importantly, to improve performance I suggest database normalization.
Create a separate table for unique words and fill it with the unique instances of word in your big_tbl. The ORDER BY is optional and not needed for the query at hand, but it speeds up index creation and might be cheaper overall. The new table should be small in comparison: only ~50k rows of words for 50M rows in your big table.
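The answer's original SQL was not preserved here; a minimal sketch of this step, assuming the big table is named big_tbl and the new table word (all names are assumptions):

```sql
-- Sketch only: table and column names (big_tbl, word, word_id) are assumptions.
CREATE TABLE word (
  word_id serial NOT NULL
, word    varchar(30) NOT NULL
);

INSERT INTO word (word)
SELECT DISTINCT word
FROM   big_tbl
ORDER  BY word;   -- optional, but can speed up the later index creation
```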
Add indexes after filling the table. If those are read-only tables, you can do without the pk, as it's not relevant to the operations at hand.
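A sketch of the indexes, continuing with the assumed names above:

```sql
ALTER TABLE word ADD PRIMARY KEY (word_id);        -- optional for read-only tables
CREATE UNIQUE INDEX word_word_idx ON word (word);  -- enforces uniqueness, speeds lookups
```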
Replace your big table with a much smaller new table that references word by word_id. You may have to lock the big table to avoid concurrent writes; concurrent reads are not a problem. An ORDER BY in this step clusters the data (once), making the query at hand faster because far fewer blocks have to be read (unless your data is mostly clustered already). The sort carries a cost, so weigh cost against benefit once more. Then recreate the indexes:
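A sketch of the rewrite step (the answer's original SQL was not preserved; names are assumptions carried over from the text):

```sql
-- Sketch only: big_tbl_new and the index name are assumptions.
CREATE TABLE big_tbl_new AS
SELECT b.gid, w.word_id, b.stat
FROM   big_tbl b
JOIN   word  w USING (word)
ORDER  BY w.word_id;          -- clusters rows by word (optional; weigh the sort cost)

DROP TABLE big_tbl;
ALTER TABLE big_tbl_new RENAME TO big_tbl;

CREATE INDEX big_tbl_word_id_idx ON big_tbl (word_id);
```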
Your query looks like this now and should be faster:
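One plausible form of the rewritten query, joining back through the assumed word table:

```sql
SELECT b.*                       -- gid, word_id, stat
FROM   big_tbl b
JOIN   word  w USING (word_id)
WHERE  w.word = 'something';
```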
This reorganization is meant as a one-time operation. Keep the new form, and also consider keeping the indexes permanently.
All of this together (including the new indexes) should occupy about half the disk space you had before, also cutting creation time at least in half. Index creation should be considerably faster, and the query as well. If RAM is a limiting factor, these modifications pay double.
If you have to write to the table as well, it becomes more expensive (but you did not mention anything about that). You'd need to adjust your logic for DELETE / UPDATE / INSERT. Example for INSERT: fetch the word_id for existing words, or insert a new row into word, returning the new word_id. Details for this: How do I insert a row which contains a foreign key?
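A sketch of that INSERT logic, assuming PostgreSQL 9.5+ for ON CONFLICT and the unique index on word (word) created above (names and sample values are assumptions):

```sql
-- 1) Make sure the word exists (no-op if it is already there):
INSERT INTO word (word)
VALUES ('something')
ON CONFLICT (word) DO NOTHING;

-- 2) Insert the row into the big table, looking up its word_id:
INSERT INTO big_tbl (gid, word_id, stat)
SELECT 'g123', word_id, 0.5
FROM   word
WHERE  word = 'something';
```

On older PostgreSQL versions, the same effect needs a retry loop or a data-modifying CTE instead of ON CONFLICT.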