Using PG 9.1 on Ubuntu 12.04.
It currently takes up to 24h for us to run a large set of UPDATE
statements on a database, which are of the form:
UPDATE table
SET field1 = constant1, field2 = constant2, ...
WHERE id = constid
(We're just overwriting fields of objects identified by ID.) The values come from an external data source (not already in the DB in a table).
The tables each have a handful of indices and no foreign key constraints.
No COMMIT is made till the end.
It takes 2h to import a pg_dump
of the entire DB. This seems like a
baseline we should reasonably target.
Short of producing a custom program that somehow reconstructs a data set
for PostgreSQL to re-import, is there anything we can do to bring the
bulk UPDATE performance closer to that of the import? (This is an area
that we believe log-structured merge trees handle well, but we're
wondering if there's anything we can do within PostgreSQL.)
Some ideas:
- dropping all non-ID indices and rebuilding afterward?
- increasing checkpoint_segments, but does this actually help sustained long-term throughput?
- using the techniques mentioned here? (Load new data as a table, then "merge in" old data where the ID is not found in the new data - see the sketch below)
Basically there's a bunch of things to try and we're not sure what the
most effective are or if we're overlooking other things. We'll be
spending the next few days experimenting, but we thought we'd ask here
as well.
I do have concurrent load on the table but it's read-only.
Best Answer
Assumptions
Since information is missing in the Q, I'll assume:
- Your data is formatted like COPY output, with a unique id per row to match the target table. If not, format it properly first or use COPY options to deal with the format.
- You can afford to replace the target table outright. That means no concurrent access. Else consider this related answer:
Solution
I suggest you go with an approach similar to the one outlined at the link in your third bullet, with major optimizations.
To create the temporary table, there is a simpler and faster way:
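For example (tbl, tmp_tab and the column names are placeholders; adjust to your schema):

CREATE TEMP TABLE tmp_tab AS
SELECT id, field1, field2
FROM   tbl
LIMIT  0;            -- copies the column names and types, but no rows, indexes or constraints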
A single big UPDATE from a temporary table inside the database will be faster than individual updates from outside the database by several orders of magnitude.

In PostgreSQL's MVCC model, an UPDATE means to create a new row version and mark the old one as deleted. That's about as expensive as an INSERT and a DELETE combined. Plus, it leaves you with a lot of dead tuples. Since you are updating the whole table anyway, it would be faster overall to just create a new table and drop the old one.

If you have enough RAM available, set temp_buffers (only for this session!) high enough to hold the temp table in RAM - before you do anything else.

To get an estimate of how much RAM is needed, run a test with a small sample and use db object size functions:
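For example, after loading a small sample into the temp table (pg_size_pretty() and pg_total_relation_size() are standard Postgres size functions; tmp_tab is the placeholder name from above):

SELECT pg_size_pretty(pg_total_relation_size('tmp_tab'));

Extrapolate from the sample to the full row count to pick a temp_buffers setting.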
Complete script
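The script itself is not reproduced in this excerpt; a sketch of the sequence it describes, with placeholder names and file path, might look like this:

BEGIN;

SET temp_buffers = '1000MB';              -- session setting; size it to hold the temp table

CREATE TEMP TABLE tmp_tab AS
SELECT id, field1, field2
FROM   tbl
LIMIT  0;                                 -- structure only, no rows

COPY tmp_tab FROM '/absolute/path/to/file';   -- bulk-load the new values

CREATE TABLE tbl_new AS
SELECT t.id, u.field1, u.field2           -- updated values from the temp table,
     , t.other_col                        -- untouched columns from the old table
FROM   tbl t
JOIN   tmp_tab u USING (id);

ALTER TABLE tbl_new ADD CONSTRAINT tbl_new_pkey PRIMARY KEY (id);
-- recreate the other indexes here

DROP TABLE tbl;
ALTER TABLE tbl_new RENAME TO tbl;

COMMIT;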
Concurrent load
Concurrent operations on the table (which I ruled out in the assumptions at the start) will wait once the table is locked near the end, and will fail as soon as the transaction is committed, because the table name is resolved to its OID immediately, but the new table has a different OID. The table stays consistent, but concurrent operations may get an exception and have to be repeated. Details in this related answer:
UPDATE route
If you (have to) go the UPDATE route, drop any index that is not needed during the update and recreate it afterwards. It is much cheaper to create an index in one piece than to update it for every individual row. This may also allow for HOT updates.

I outlined a similar procedure using UPDATE in this closely related answer on SO.
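A rough sketch of that route, reusing the placeholder names from above (tmp_tab holds the new values; the index name is hypothetical):

DROP INDEX tbl_field1_idx;                -- every index not needed during the UPDATE

UPDATE tbl t
SET    field1 = u.field1
     , field2 = u.field2
FROM   tmp_tab u
WHERE  t.id = u.id;

CREATE INDEX tbl_field1_idx ON tbl (field1);   -- recreate afterwards in one pass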