First action
When facing corruption, before you do ANYTHING ELSE, take a complete file-system-level copy of the damaged database. See: http://wiki.postgresql.org/wiki/Corruption . Failing to do so destroys evidence about what caused the corruption, and means that if your repair efforts go badly and make things worse, you cannot undo them. Do not attempt any repair before taking that copy.
You appear to have made the right choice and done that, assuming you really copied the whole datadir. However, you appear to have then messed with the copy. Before you do anything else, make a copy of that damaged datadir somewhere safe and do not touch it again. It is your best hope of recovery. Never work on that copy - duplicate it, and test recovery attempts on the duplicate.
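For example, a minimal sketch with hypothetical paths (stop the server first so the copy is consistent):

# stop the cluster so the files aren't changing mid-copy
pg_ctl -D /var/lib/postgresql/9.3/main stop -m fast
# -a preserves permissions, ownership and symlinks
cp -a /var/lib/postgresql/9.3/main /safe-media/pg-datadir-original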
Don't trust that server!
BTW, I strongly recommend that you stop using this server:
We had some disk corruption on our server
is not ok. Until you know why that happened you should not be using that server. Retire it or put it aside and get some trustworthy hardware.
If you cannot do that, make absolutely certain you're doing at least daily logical backups and streaming replication with WAL archiving to a secondary server. Treat the faulty server as if it might vanish or eat your data again at any time.
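As a hedged sketch of the WAL-archiving side, assuming a superuser session and a writable /wal_archive directory (both hypothetical; on versions before 9.4 set the same parameters in postgresql.conf, since ALTER SYSTEM doesn't exist there):

ALTER SYSTEM SET wal_level = 'replica';   -- use 'archive' on pre-9.6 releases
ALTER SYSTEM SET archive_mode = 'on';     -- both of these need a server restart
ALTER SYSTEM SET archive_command = 'test ! -f /wal_archive/%f && cp %p /wal_archive/%f';

That archive_command is the example from the manual; in production you'd normally use a proper archiving tool rather than cp.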
If the disk corruption corresponded with a power failure, it's probably due to unsafe write-back caching or a system that's ignoring disk flush requests. This is why I do plug-pull testing on server deployments, and don't buy cheap SSDs.
Backups?
This is the point where I tell you that you need to restore from the backups you've been making and testing regularly, preferably using your point-in-time recovery or streaming replication setup.
If that was an option you wouldn't be posting here, though.
Ask for help in the right place
Once you have a safe copy of the datadir set aside, post for help on the pgsql-general mailing list.
If the data is important and hard to recover, be prepared to pay for data recovery / repair expertise. See http://www.postgresql.org/support/professional_support/ . (I work for one of the listed companies, just by way of fair disclosure).
Corruption cases tend to be somewhat unique and require lots of back-and-forth, so they're not usually a good fit for Stack Overflow.
Internal structure
As for the internals of the files in base/ ... you really need the system catalogs to interpret them usefully. The on-disk format is documented under PostgreSQL internals in the manual (see the "Database Page Layout" section).
The structure of an individual relation extent is basically a sequence of fixed-size pages, each with a header followed by tuples whose columns are interpreted based on the system catalogs. If you've lost the system catalogs, you've got no reliable way to tell what each column's type and name is, etc.
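To get a feel for that layout, the pageinspect extension will show it on a healthy server (illustration only - it can't help once the catalogs are gone, and mytable is a hypothetical name):

CREATE EXTENSION pageinspect;
-- line pointers and tuple headers for block 0 of the relation
SELECT lp, lp_off, lp_len, t_xmin, t_xmax, t_ctid
FROM heap_page_items(get_raw_page('mytable', 0));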
The other problem you have is that you've lost the transaction commit logs (pg_clog) that keep a record of open, committed, and rolled-back transactions. With that data lost, you will need to do a dirty read of the tables in order to recover any data, because you no longer know which tuples were added by transactions that later rolled back, which are deleted, which are old versions of updated tuples, etc.
Recovery?
... will be very hard.
In theory you might be able to read the tuples out of the heap table extents. I am not aware of any tools to do this. You would need to be able to construct new system catalogs that matched the on-disk structure of the tables, probably with a stand-alone PostgreSQL backend (postgres --single).
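By way of a sketch, a stand-alone backend is started like this, where the datadir path and database name are hypothetical (only ever point it at a disposable duplicate):

postgres --single -D /recovery-work/pg-datadir-copy mydb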
I'd like it if PostgreSQL had better recovery options, but frankly, we prefer to have good backups and use streaming replication, etc., to avoid the need in the first place. Repairing DB corruption is always iffy and results in untrustworthy, mangled data. So in general - don't do that.
It might help if you still have a copy of your data directory from before you ran pg_resetxlog.
Prevention
I wrote a bit on corruption prevention a while ago. See this post on my old blog.
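Whatever else you do, never disable the settings that exist to protect your data. A quick sanity check from any session:

SHOW fsync;               -- must be on; off can corrupt data on a crash
SHOW full_page_writes;    -- must be on; off can corrupt data on a crash
SHOW synchronous_commit;  -- off only risks losing recent commits, never corruption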
If you don't want to interfere with other activity, running one UPDATE at a time in autocommit mode is very likely the best option. You should probably set synchronous_commit=off in that session (and only that session).
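A minimal sketch of that session, with a hypothetical table and values (in autocommit mode every statement below is its own transaction):

SET synchronous_commit = off;  -- session-local: a crash may lose the last few
                               -- commits, but can never corrupt the database
UPDATE mytable SET fid = 45677 WHERE id = 345;  -- one row per statement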
The indexes are going to slow you down, perhaps by a lot depending on your RAM and your IO system. But if the index is necessary for the other actions you don't want to interfere with, then there isn't anything you can do about it.
But since the fid is not yet correctly populated, the index on it is probably not actually useful to the concurrent processes you want to avoid interfering with, as they haven't yet been changed to rely on that column being accurate. If that is the case, you can drop that index to gain speed and rebuild it in bulk later, as sketched below. The same probably applies to the foreign key constraint.
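A hedged sketch of that, with hypothetical index, constraint, and table names:

DROP INDEX mytable_fid_idx;
ALTER TABLE mytable DROP CONSTRAINT mytable_fid_fkey;
-- ... run all of the single-row UPDATEs ...
CREATE INDEX CONCURRENTLY mytable_fid_idx ON mytable (fid);  -- doesn't block writers
ALTER TABLE mytable ADD CONSTRAINT mytable_fid_fkey
    FOREIGN KEY (fid) REFERENCES parent_table (id);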
Once that index is gone, your updates can proceed via HOT (heap-only tuple) updates, provided each block has enough free space. In that case the updates will not have to maintain the primary key index either, saving that much more IO. To make this work as well as possible, it is important that each UPDATE be its own transaction; that way one UPDATE can reuse space freed up by an earlier one.
Also, your WHERE clause should probably be something like:
WHERE id = 345 AND fid IS DISTINCT FROM <the new fid value>;
That way, if the script gets interrupted, you can re-run it with minimal damage: rows that already hold the new value are skipped instead of being rewritten.
Since you seem to be running this on a test system already, an EXPLAIN (ANALYZE, BUFFERS) of some of the updates would be helpful, especially with track_io_timing turned on.
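For example (a sketch reusing the hypothetical names from above; note that EXPLAIN ANALYZE really executes the UPDATE, so roll it back):

BEGIN;
SET LOCAL track_io_timing = on;  -- superuser-only on older releases
EXPLAIN (ANALYZE, BUFFERS)
UPDATE mytable SET fid = 45677 WHERE id = 345 AND fid IS DISTINCT FROM 45677;
ROLLBACK;  -- discard the change so the test row is left untouched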
Best Answer
Use the concatenation operator (||).
Details in the manual: https://www.postgresql.org/docs/current/functions-array.html
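For example, || concatenates two arrays, or an array and a single element (the values are arbitrary):

SELECT ARRAY[1,2,3] || ARRAY[4,5];  -- {1,2,3,4,5}
SELECT ARRAY[1,2,3] || 4;           -- {1,2,3,4}
SELECT 0 || ARRAY[1,2,3];           -- {0,1,2,3}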