I don't think you can do what you want here - certainly not without changes to PostgreSQL.
Tablespaces are not an independent datastore, and cannot be exchanged between different PostgreSQL instances. Making them so would be a huge change to PostgreSQL's core, with significant impacts on transactional DDL support, etc.
See this related answer I wrote on the topic, explaining some of the issues with attaching a tablespace to a different PostgreSQL instance: How do I access a old saved tablespace after reinstalling postgres?
If you were to create the entire database in a different tablespace then you'd be able to avoid the issues with `pg_class` and TOAST tables ... but you'd still have to deal with the problems with `pg_clog`, `pg_xlog`, transaction visibility, etc.
`pg_xlog` issues could potentially be solved if you were able to copy the tablespace at a point after a checkpoint, before any further writes were performed, so the tablespace was "clean". PostgreSQL doesn't currently offer any assistance to do this, but I think it'd be possible to add it with moderate code changes (at the cost of a stall in the production DB while you flushed it for the checkpoint and snapshotted it) by taking an exclusive lock on the internal xlog insert lock. Possibly do-able even as an extension function.
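In stock PostgreSQL the closest approximation is to force a checkpoint yourself and take the filesystem snapshot immediately afterwards, with nothing guaranteeing that a write doesn't sneak in between:

```sql
-- Flush all dirty shared buffers to disk. After this completes, the
-- on-disk tablespace is "clean" only until the next write arrives;
-- there is no built-in way to hold off writers while you snapshot.
CHECKPOINT;
```

That race against concurrent writers is exactly why an exclusive lock on xlog insertion would be needed to do this safely.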
The `pg_clog` and `pg_control` problem is a lot harder. Transaction IDs are global across the entire PostgreSQL instance. So your tablespace (or whatever other collection of tables less than an entire instance) will have transaction IDs in the future, and/or with different committed/rolled-back statuses in the local `pg_clog` to what they had in the original `pg_clog`. Badness ensues. The only way I can think of to deal with this is to `VACUUM FREEZE` all tables in the database, which will set their `xmin` and `xmax` to `FrozenTransactionId`, an xid defined in the source code that's always visible to all transactions. However, doing this without having unfrozen xids added in parallel requires that you take an `EXCLUSIVE` lock (blocking `INSERT`, `UPDATE` and `DELETE`) on all tables in the database for as long as it takes to `VACUUM FREEZE` and then snapshot the tablespace.
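As a sketch of the freeze step itself, assuming the server is running and `mydb` is a placeholder database name, a whole-database freeze can be driven with `vacuumdb`:

```shell
# Rewrite every tuple's xmin/xmax to the frozen state in all tables of mydb.
# Note: this does NOT take the blocking locks discussed above. VACUUM cannot
# run inside a transaction block, so shutting out concurrent writers would
# have to be arranged separately (e.g. by locking out applications first).
vacuumdb --freeze --dbname=mydb
```

The awkward part is precisely that gap: there's no supported way to hold the database write-frozen between the `VACUUM FREEZE` and the snapshot.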
So. Maybe it's possible to do what you want, but there's no way it's possible without production impact. You'd have to basically rewrite PostgreSQL to store a per-database `pg_clog` and `pg_xlog`, keep a per-database `pg_control`, and use per-database transaction IDs. Then add support for keeping these in a non-default tablespace. Once you'd done that, you'd have the exciting job of making this work with streaming replication and WAL-shipping based PITR, which currently rely on a single WAL stream as an ordered time-line of events. And you'd have to deal with the extra `fsync()` costs, etc, of per-DB WAL. Finally, you'd need to figure out how you were going to make this work with PostgreSQL's shared relations, like `pg_database`, `pg_authid`, etc, that live in the `pg_global` tablespace - they are transactional tables and participate in transactions along with tables in the current database, so you'd have to find a way to keep that working when you now had two separate transaction IDs (global and per-database) that had to commit or roll back atomically.
This is quite an interesting problem, and I suggest raising it on the pgsql-general mailing list. Link back here for context (and so people can explain all the things I got wrong in the above explanation).
Personally I'd really like to see PostgreSQL able to handle individual databases or tablespaces more independently, because I'd really like to be able to use replication to stream only a particular DB, or a subset of a DB. Right now that's just not possible, meaning you must do frustrating things like log-ship change data for high churn transaction tables and low-churn important archive tables together, or separate them into completely separate Pg instances. It doesn't look easy to do; the new logical replication and BDR feature should help, though.
Best Answer
Looks like you've figured out question 1 for yourself already (short answer: yes, use the latest 9.1.x release, make sure the compile-time options are the same between the builds on the old and new machine so the data directory layout is compatible, and the machines should ideally be as similar as possible in order to be binary-compatible, e.g. both x86-64, similar glibc versions, etc.).
But about question 2:
You can look under the "base" subdirectory of the data directory, where you'll find one subdirectory per database.
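For example (a hypothetical listing; `$PGDATA` stands for your data directory, and the exact numbers will differ per cluster):

```shell
# Each entry under base/ is a directory named after a database OID.
ls $PGDATA/base
# typically shows directories such as 1, 12292, and one per user database
```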
Each of those directories with an integer as the directory name represents a database inside the PostgreSQL cluster. The integers (OIDs) in the directory names match the `oid` you would see from a query against `pg_database`, if you had the server running. I don't know of a trivial way to determine the database name from those OIDs without having the server running, but at least you know how many databases there are and how big they should be. And you should be able to figure out creation time too by checking `stat`. In my case, "1" was for "template1", "12292" was "template0", and the rest were various other databases.