The very first thing to do is to make a copy of whatever data files you still have, and to keep it and any backups safe until long after your recovery effort is complete. Please read this (short) Wiki page:
http://wiki.postgresql.org/wiki/Corruption
Once you have done that, you can attempt various recovery strategies without fear that you will be worse off for the attempt, beyond the time required to try it. In general I recommend carefully following one of the techniques described in the documentation -- attempts to cut corners or to be creative often lead to corruption. Only a seasoned expert with a good understanding of PostgreSQL internals should attempt to deviate from the documented steps.
You didn't describe your backup strategy; details of what is available there may suggest alternatives you would not otherwise have.
Ultimately, if you have data of value which is not backed up, you may need to hand-edit the system tables to eliminate references to the lost tablespace. This is not for the faint of heart. There are a number of companies with which you can contract for such services, many of whom have experience with recovery from catastrophic hardware failure like this.
http://www.postgresql.org/support/professional_support/
I am not affiliated with any of these companies.
When you issue an ALTER TABLE in PostgreSQL it will take an ACCESS EXCLUSIVE lock that blocks everything, including SELECT. However, this lock can be quite brief if the table doesn't require rewriting, no new UNIQUE, CHECK or FOREIGN KEY constraints need expensive full-table scans to verify, and so on.

If in doubt, you can generally just try it! All DDL in PostgreSQL is transactional, so it's quite fine to cancel an ALTER TABLE if it takes too long and starts holding up other queries. The lock levels required by various commands are documented in the locking page.
Some normally-slow operations can be sped up to be safe to perform without downtime. For example, say you have table t and you want to change column customercode integer NOT NULL to text because the customer has decided all customer codes must now begin with an X. You could write:

ALTER TABLE t ALTER COLUMN customercode TYPE text USING ( 'X'||customercode::text );

... but that would lock the whole table for the rewrite. So does adding a column with a DEFAULT. It can instead be done in a couple of steps to avoid the long lock, but applications must be able to cope with the temporary duplication:
ALTER TABLE t ADD COLUMN customercode_new text;
BEGIN;
LOCK TABLE t IN EXCLUSIVE MODE;
UPDATE t SET customercode_new = 'X'||customercode::text;
ALTER TABLE t DROP COLUMN customercode;
ALTER TABLE t RENAME COLUMN customercode_new TO customercode;
COMMIT;
This will only prevent writes to t during the process; the lock name EXCLUSIVE is somewhat deceptive in that it excludes everything except SELECT; the ACCESS EXCLUSIVE mode is the only one that excludes absolutely everything. See lock modes. There's a risk that this operation could deadlock-rollback due to the lock upgrade required by the ALTER TABLE, but at worst you'll just have to do it again.
You can even avoid that lock and do the whole thing live by creating a trigger function on t that, whenever an INSERT or UPDATE comes in, automatically populates customercode_new from customercode.
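A sketch of that trigger approach; the function and trigger names are illustrative, and you'd still backfill the existing rows separately (ideally in batches) before dropping the old column:

```sql
-- Keep customercode_new in sync on every write to t.
CREATE OR REPLACE FUNCTION sync_customercode() RETURNS trigger AS $$
BEGIN
    NEW.customercode_new := 'X' || NEW.customercode::text;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER t_sync_customercode
    BEFORE INSERT OR UPDATE ON t
    FOR EACH ROW EXECUTE PROCEDURE sync_customercode();

-- Backfill rows the trigger hasn't touched yet.
UPDATE t SET customercode_new = 'X' || customercode::text
WHERE customercode_new IS NULL;
```

Once the backfill is complete, the DROP COLUMN and RENAME COLUMN steps need only a very brief lock.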
There are also built-in tools like CREATE INDEX CONCURRENTLY and ALTER TABLE ... ADD table_constraint_using_index that are designed to let DBAs reduce exclusive locking durations by doing the work more slowly in a concurrency-friendly way.
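For instance, a unique constraint can be added with only a brief lock by building its index concurrently first; the index and constraint names here are illustrative:

```sql
-- Build the index without blocking writes (slower: two table scans).
CREATE UNIQUE INDEX CONCURRENTLY t_customercode_idx ON t (customercode);

-- Promote the pre-built index to a constraint; only a brief lock is needed.
ALTER TABLE t
    ADD CONSTRAINT t_customercode_key UNIQUE USING INDEX t_customercode_idx;
```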
The pg_reorg tool or its successor pg_repack can be used for some table restructuring operations as well.
If you want to create text files containing the data, use the COPY command (or the psql \copy command, which creates the output files on the client rather than on the server as the COPY command does). If you want to create SQL statements, use pg_dump.