Before doing anything else, read and act on: http://wiki.postgresql.org/wiki/Corruption .
Most likely you have disk or file system problems.
If you suspect any kind of DB corruption for whatever reason you should stop the DB and copy the entire database at the file system level before attempting any recovery.
Once you've done that, you can look into possible repairs. You'll probably have some significant data loss, so your goal should be to get it working to the point where you can pg_dump the damaged databases, re-initdb, and reload.
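That dump, re-initdb, and reload cycle might look like the sketch below. This is an illustration only: the paths and port are placeholders, it needs a live (if limping) damaged cluster to dump from, and pg_dumpall may still fail on badly corrupted tables.

```shell
# Dump everything the damaged cluster will still give you
pg_dumpall --host=localhost --port=5432 > salvage.sql

# Initialise a brand-new cluster in a fresh data directory
initdb -D /var/lib/postgresql/fresh-data

# Start the new cluster, then reload the dump into it
pg_ctl -D /var/lib/postgresql/fresh-data start
psql --port=5432 --dbname=postgres -f salvage.sql
```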
If you have a recent backup, now would be a good time to think about using it.
First action
When facing corruption, before you do ANYTHING ELSE, take a complete file-system-level copy of the damaged database. See: http://wiki.postgresql.org/wiki/Corruption . Failure to do so destroys evidence about what caused the corruption, and means that if your repair efforts go badly and make things worse you can't undo them. Do not attempt any repair first.
You appear to have made the right choice and done that, assuming you really copied the whole datadir. However, you appear to have then messed with the copy. Before you do anything else, make a copy of that damaged datadir somewhere safe and do not touch it again. It is your only hope of recovery. Never work on that copy - duplicate it, and test recovery attempts on the duplicate.
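As a minimal sketch of taking that safe copy (the paths here are placeholders for a typical install): stop the cluster first so the files aren't changing underneath you, set one pristine copy aside, and experiment only on a second duplicate.

```shell
# Stop the cluster so the copy is consistent
pg_ctl -D /var/lib/postgresql/data stop -m fast

# Pristine copy: set aside and never touch again
cp -a /var/lib/postgresql/data /backup/data-damaged-pristine

# Working duplicate: run all repair experiments against this one
cp -a /backup/data-damaged-pristine /scratch/data-work
```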
Don't trust that server!
BTW, I strongly recommend that you stop using this server:
We had some disk corruption on our server
is not ok. Until you know why that happened you should not be using that server. Retire it or put it aside and get some trustworthy hardware.
If you cannot do that, make absolutely certain you're doing at least daily logical backups and streaming replication with WAL archiving to a secondary server. Treat the faulty server as if it might vanish or eat your data again at any time.
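On recent PostgreSQL versions, the WAL-archiving half of that setup boils down to a few postgresql.conf settings plus a standby pulling from the archive; the archive path below is a placeholder.

```
wal_level = replica
archive_mode = on
# Refuse to overwrite an already-archived segment
archive_command = 'test ! -f /archive/%f && cp %p /archive/%f'
```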
If the disk corruption corresponded with a power failure, it's probably due to unsafe write-back caching or a system that's ignoring disk flush requests. This is why I do plug-pull testing on server deployments, and don't buy cheap SSDs.
Backups?
This is the point where I tell you that you need to restore from those backups you've been making and testing regularly, preferably the point-in-time recovery or streaming replication setup.
If that was an option you wouldn't be posting here, though.
Ask for help in the right place
Once you have a safe copy of the datadir set aside, post for help on the pgsql-general mailing list.
If the data is important and hard to recover, be prepared to pay for data recovery / repair expertise. See http://www.postgresql.org/support/professional_support/ . (I work for one of the listed companies, just by way of fair disclosure).
Corruption cases tend to be somewhat unique and require lots of back-and-forth, so they're not usually a good fit for Stack Overflow.
Internal structure
As for the internals of the files in base/ ... you really need the system catalogs to interpret them usefully. The table structure is documented in PostgreSQL internals.
The structure of an individual relation extent is basically a page header, followed by tuples whose columns are interpreted based on the system catalogs. If you've lost the system catalogs, you have no reliable way to tell what each column's type and name is, etc.
The other problem you have is that you've lost the transaction commit logs (pg_clog) that keep a record of open, committed, and rolled-back transactions. With that data lost, you will need to do a dirty read of the tables in order to recover any data, because you no longer know which tuples were added by transactions that later rolled back, which are deleted, which are old versions of updated tuples, etc.
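To illustrate what a dirty read is up against: if a cluster still starts, the contrib pageinspect extension can expose the per-tuple xmin/xmax headers that pg_clog would normally resolve into committed/aborted. The table name here is hypothetical, and on a cluster with damaged catalogs the extension may not install at all.

```sql
CREATE EXTENSION pageinspect;

-- Raw tuple headers for block 0 of a hypothetical table "bad":
-- without pg_clog you cannot tell whether t_xmin committed or aborted.
SELECT lp, t_xmin, t_xmax, t_ctid
FROM heap_page_items(get_raw_page('bad', 0));
```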
Recovery?
... will be very hard.
In theory you might be able to read the tuples out of the heap table extents. I am not aware of any tools to do this. You would need to be able to construct new system catalogs that matched the on-disk structure of the tables, probably with a stand-alone PostgreSQL backend (postgres --single).
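For reference, invoking the stand-alone backend against a working duplicate of the datadir looks like this (the paths and database name are placeholders). It runs a single session reading SQL from stdin, with no network access and no background processes.

```shell
# Single-user mode: one session, no client/server protocol in between
postgres --single -D /scratch/data-work postgres
```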
I'd like it if PostgreSQL had better recovery options, but frankly, we prefer to have good backups and use streaming replication etc to avoid the need in the first place. Repairing DB corruption is always iffy and results in untrustworthy, mangled data. So in general - don't do that.
It might help if you still have a copy of your data directory from before you ran pg_resetxlog.
Prevention
I wrote a bit on corruption prevention a while ago. See this post on my old blog.
Best Answer
About this error:
You could issue
SET zero_damaged_pages TO ON
and retry. It will not rewrite any page to disk, just zero damaged pages in memory. Assuming it works, the idea is to run COPY bad TO 'file' to dump as much valuable data as possible. (That's also assuming that 11975439 is the OID of a table, not an index. If it's an index, you may just drop it.)
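Putting that together as a sketch - the number and the table name "bad" come from the question above, so treat them as placeholders:

```sql
-- Is 11975439 a table or an index? relkind 'r' = table, 'i' = index.
SELECT relname, relkind
FROM pg_class
WHERE oid = 11975439 OR relfilenode = 11975439;

-- Zero unreadable pages in memory only, for this session
SET zero_damaged_pages = on;

-- Salvage whatever rows remain readable
COPY bad TO '/tmp/bad_salvage.copy';
```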