I strongly recommend doing a dump and reload.
Don't put tablespaces on removable media: it doubles your chances of data loss. If either the main database on the main disk or the removable media fails or is lost, your data is gone. A tablespace cannot be read without the database it belongs to.
Looks like you've figured out question 1 for yourself already (short answer: yes, use the latest 9.1.x release, and make sure the compile-time options are the same on the old and new machines so the data directory stays compatible; the machines should also be as similar as possible in order to be binary-compatible, e.g. both x86-64, similar glibc versions, etc.).
But about question 2:
Is there a way to extract some basic information (such as database names, maybe even sizes or timestamps)
You can look under the "base" subdirectory of the data directory, and you should see something like this:
/datadir/base $ du -sh *
6.5M 1
6.1M 12292
6.5M 12297
39M 3237152
6.3M 371336
10M 4049006
41M 4481732
6.7M 4691247
10M 4927721
7.2M 4927722
7.4M 58068
0B pgsql_tmp
Each of those directories with an integer as the directory name represents a database inside my PostgreSQL cluster. The integers (OIDs) in the directory name match the oid
you would see from a query like:
SELECT oid, datname FROM pg_database;
if you had the server running. I don't know of a trivial way to map those OIDs back to database names without a running server, but at least you know how many databases there are and roughly how big each should be. You should also be able to get an idea of the timestamps by running stat on the directories. In my case, "1" was "template1", "12292" was "template0", and the rest were various other databases.
Best Answer
I have completed the migration with no problems.
Creating the dump is easy:
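The exact command was not preserved in this answer; a typical custom-format dump, with a placeholder database name and output path, would look something like:

```shell
# "mydb" and the output path are placeholders, not from the original answer.
# -Fc produces a custom-format archive that pg_restore can restore in parallel.
pg_dump -Fc -f /backup/mydb.dump mydb
```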
Restoring on a new instance: first, create a new tablespace, and a target database in that tablespace. Then import your dump like this:
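Again, the original commands were not preserved; a sketch of those steps, with placeholder names for the tablespace, database, and paths, might be:

```shell
# All names and paths here are placeholders.
# 1. Create the tablespace on the new disk (directory must exist and be
#    owned by the postgres OS user).
psql -c "CREATE TABLESPACE newts LOCATION '/mnt/newdisk/pg_ts'"

# 2. Create the target database in that tablespace.
psql -c "CREATE DATABASE newdb TABLESPACE newts"

# 3. Restore the dump with several parallel jobs.
pg_restore -j 4 -d newdb /backup/mydb.dump
```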
You can adjust the number of concurrent jobs depending on your hardware: processor cores, I/O subsystem performance, etc.
Now we have all the data in another database, with a different name, in a dedicated tablespace, on a new server running a newer PostgreSQL.