Here is your original query:
SELECT t.price
FROM commoddb.tbcommodprices t
WHERE t.ticker = 'LAV05 Comdty'
  AND t.whichprice = 'last'
  AND t.date = '2005-09-01';
Since there is but one table in this query, all you need is a good index.
ALTER TABLE commoddb.tbcommodprices ADD INDEX (ticker,whichprice,date);
This index should now be a permanent part of the table until you decide to remove it, which I don't think you want to do. To see the table and the indexes, just do this:
SHOW CREATE TABLE commoddb.tbcommodprices\G
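To confirm the optimizer actually picks up the new index, you can run an EXPLAIN on the query (this simply repeats your original query, with the missing AND restored):
EXPLAIN SELECT t.price
FROM commoddb.tbcommodprices t
WHERE t.ticker = 'LAV05 Comdty'
  AND t.whichprice = 'last'
  AND t.date = '2005-09-01';
The new index should show up in the key column of the output; if it doesn't, double-check that the column order of the index matches the WHERE clause.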
Query
Your query is forced to scan the whole table (or the whole index). Every row could be another distinct unit. The only way to substantially shorten the process would be a separate table with all available units - which would help as long as there are substantially fewer units than entries in all_units.
Since you have ~ 11k units (added in comment) for 25M entries, this should definitely help.
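If you don't have that lookup table yet, here is a minimal sketch to create it (the table name unit is an assumption, chosen to match the queries below):
CREATE TABLE unit (unit_id int PRIMARY KEY);

INSERT INTO unit (unit_id)
SELECT DISTINCT unit_id
FROM all_units;
You'd have to keep it in sync with newly appearing units, e.g. by inserting into unit first and adding a foreign key from all_units.unit_id.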
Depending on the frequencies of values, there are a couple of query techniques to get your result considerably faster:
- recursive CTE (a sketch follows further down)
- JOIN LATERAL
- correlated subquery
Details in this related answer on SO:
Only needing the implicit index of the primary key on (unit_id, unit_timestamp), this query should do the trick, using an implicit CROSS JOIN LATERAL:
SELECT u.unit_id, a.max_ts
FROM   unit u
     , LATERAL (
       SELECT unit_timestamp AS max_ts
       FROM   all_units
       WHERE  unit_id = u.unit_id
       ORDER  BY unit_timestamp DESC
       LIMIT  1
       ) a;
Excludes units without an entry in all_units, like your original query.
Or a lowly correlated subquery (probably even faster):
SELECT u.unit_id
     , (SELECT unit_timestamp
        FROM   all_units
        WHERE  unit_id = u.unit_id
        ORDER  BY unit_timestamp DESC
        LIMIT  1) AS max_ts
FROM   unit u;
Includes units without an entry in all_units (max_ts is NULL for those).
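The recursive CTE mentioned above doesn't even need the separate unit table; a minimal sketch (my illustration, not from the original answer), emulating a loose index scan over the PK:
WITH RECURSIVE cte AS (
   (SELECT unit_id, unit_timestamp AS max_ts
    FROM   all_units
    ORDER  BY unit_id, unit_timestamp DESC
    LIMIT  1)
   UNION ALL
   SELECT a.unit_id, a.max_ts
   FROM   cte c
   CROSS  JOIN LATERAL (
      SELECT unit_id, unit_timestamp AS max_ts
      FROM   all_units
      WHERE  unit_id > c.unit_id
      ORDER  BY unit_id, unit_timestamp DESC
      LIMIT  1
      ) a
   )
SELECT * FROM cte;
Like the LATERAL variant, this only returns units that actually have entries in all_units.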
Efficiency depends on the number of entries per unit. The more entries per unit, the bigger the potential gain from one of these queries.
In a quick local test with similar tables (500 "units", 1M rows in big table), the query with correlated subqueries was ~ 500x faster than your original. Index-only scans on the PK index of the big table vs. sequential scan in your original query.
Since your table keeps growing rapidly, a materialized view is probably not an option.
There is also DISTINCT ON as another possible query technique, but it's hardly going to be faster than your original query, so not the answer you are looking for. Details here:
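For reference, it would look something like this (a sketch over the same assumed columns, not from the linked answer):
SELECT DISTINCT ON (unit_id)
       unit_id, unit_timestamp AS max_ts
FROM   all_units
ORDER  BY unit_id, unit_timestamp DESC;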
Index
Your partial_idx:
CREATE INDEX partial_idx ON all_units (unit_id, unit_timestamp DESC);
is not in fact a partial index, and it is also redundant: Postgres can scan indexes backwards at practically the same speed, so the PK index serves just as well. Drop this additional index.
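In Postgres that's simply:
DROP INDEX partial_idx;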
Table layout
A couple of points for your table definition.
CREATE TABLE all_units (
   unit_timestamp timestamp,
   unit_id        int4,
   lon            float4,
   lat            float4,
   speed          float4,
   status         varchar(255),  -- might be improved
   PRIMARY KEY (unit_id, unit_timestamp)
);
timestamp(6) doesn't make much sense; it's effectively the same as plain timestamp, which already stores a maximum of 6 fractional digits.
I switched the positions of the first two columns to save 4 bytes of alignment padding per row, which amounts to ~ 100 MB for 25M rows (the exact result depends on status). Smaller tables are typically faster for everything.
If status isn't free text, but some kind of standardized note, you could replace it with something a lot cheaper. More about varchar(255) in Postgres.
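For example, if the set of possible values is small and fixed, an enum type is much cheaper than varchar(255). The value list here is purely illustrative, not taken from your data:
CREATE TYPE unit_status AS ENUM ('moving', 'stopped', 'offline');  -- example values only

ALTER TABLE all_units
   ALTER COLUMN status TYPE unit_status
   USING status::unit_status;  -- fails if existing values don't match the enum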
Server configuration
You need to configure your server. Most of your settings seem to be conservative defaults. 1 MB for shared_buffers or work_mem is way too low for an installation with millions of rows. And random_page_cost = 4 is too high for any modern system with plenty of RAM. Start with the manual and the Postgres Wiki:
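As a very rough starting point only (actual values depend entirely on your hardware and workload; these numbers are my assumptions, not from the original answer), something along these lines in postgresql.conf:
shared_buffers = 2GB        -- often around 25% of RAM for a dedicated server
work_mem = 64MB             -- per sort/hash node, so mind concurrent queries
random_page_cost = 1.1     -- close to seq_page_cost on SSDs or well-cached data
Restart the server afterwards (shared_buffers requires a restart) and re-test with EXPLAIN (ANALYZE, BUFFERS).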
Best Answer
Since you're reading every record in the table, the database will simply pull everything you want, wrap it [all] up and send it back across the network to your client machine. The more data you pull, the larger that payload is going to be and the slower it's going to travel.
To me, these two statements make no sense together.
If the data does not change, then you should read it once, perhaps at application startup, and hold it in the client application. Don't keep re-reading the same thing over and over again. That's pointless.
Then there's the question of what you're doing with all this data after you've retrieved it from the database. I sincerely hope you're not putting it all on a screen in front of a User - a million rows to read through? No thanks!
I think you need to explain your situation a little more.