The two constraints that you have specified:
- Very low downtime
- Limited disk space use
conflict rather seriously.
You have (wisely, IMO) excluded the option of using pg_upgrade with a custom-rebuilt Pg, which is the only option I see that'd satisfy both those constraints.
I suspect you'll have to drop the disk space constraint. The only way out of this that I see is to configure Slony-I to replicate from your 8.3 database to a new 9.2 instance deployed either on new storage on the same machine or on a separate machine. Allow Slony-I to bring the replica up to date, then shut the old server down, disable replication, and cut over to the new server. "New server" here can be a new 9.2 cluster running on a different port on the existing server hardware; it doesn't have to be a new physical or virtual machine.
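To make that concrete, here is a rough slonik sketch of the kind of setup I mean, assuming a single database called appdb, the 8.3 cluster on port 5432, the new 9.2 cluster on port 5433 on the same host, and a slony superuser - all placeholder names - with one "set add table" entry per table (plus "set add sequence" for each sequence):

    # Hypothetical sketch only - the database name, ports and "slony" user are
    # placeholders. The 9.2 cluster must already have the schema (DDL) loaded,
    # since Slony-I replicates only the data of the tables it is told about.
    slonik <<'EOF'
    cluster name = upgrade;

    node 1 admin conninfo = 'dbname=appdb host=localhost port=5432 user=slony';
    node 2 admin conninfo = 'dbname=appdb host=localhost port=5433 user=slony';

    init cluster (id = 1, comment = 'old 8.3 origin');
    store node (id = 2, comment = 'new 9.2 target', event node = 1);
    store path (server = 1, client = 2, conninfo = 'dbname=appdb host=localhost port=5432 user=slony');
    store path (server = 2, client = 1, conninfo = 'dbname=appdb host=localhost port=5433 user=slony');

    create set (id = 1, origin = 1, comment = 'application tables');
    set add table (set id = 1, origin = 1, id = 1, fully qualified name = 'public.accounts');

    subscribe set (id = 1, provider = 1, receiver = 2, forward = no);
    EOF

You also need a slon daemon running against each node for replication to actually happen; once the subscription has caught up, you can plan the cut-over.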
At this point you can use streaming replication - or Slony-I again - to replicate the data back to a 9.2 instance on the original server and fail back to it. The temporary server used for migration can be retired, or (preferably, if it's real hardware) configured to operate as a streaming replication slave and WAL archiving repository for PITR and failover purposes.
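Purely as a sketch of that fail-back step - the host name, port, data directory and replicator role below are all placeholders - the streaming replication side on 9.2 looks something like this:

    # Hypothetical fail-back sketch. "tmp-pg", port 5433 and the "replicator"
    # role are placeholders; the temporary master also needs wal_level set,
    # max_wal_senders > 0 and a replication entry in pg_hba.conf first.
    pg_basebackup -x -h tmp-pg -p 5433 -U replicator -D /var/lib/postgresql/9.2/main

    cat > /var/lib/postgresql/9.2/main/recovery.conf <<'EOF'
    standby_mode = 'on'
    primary_conninfo = 'host=tmp-pg port=5433 user=replicator'
    EOF

    # Start the standby, let it catch up, then promote it at the moment you
    # fail back to the original server.
    pg_ctl -D /var/lib/postgresql/9.2/main start
    pg_ctl -D /var/lib/postgresql/9.2/main promote

If you keep the temporary box around as a standby afterwards, the same recovery.conf approach (pointing the other way) plus a WAL archive gives you the PITR and failover setup mentioned above.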
Slony-I is a bit of an experience to work with and places some constraints on how you can use the DB (mainly: all DDL must go through Slony rather than being run directly), so I recommend taking a dump of the 8.3 server, restoring it to a test workstation, and testing the proposed replication setup there before attempting it in production. You will need to test your application on 9.2 anyway, so this is a good chance to do both: test it on 8.3-with-Slony, and on 9.2.
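The test restore itself is only a couple of commands; a minimal sketch, assuming the old server is reachable as old-db and the workstation as test-box (both placeholders), and using the 9.2 pg_dump against the 8.3 server, which is the recommended direction:

    # Hypothetical names. Run the *9.2* pg_dump against the 8.3 server; the
    # newer pg_dump knows how to dump older servers correctly.
    pg_dump -Fc -h old-db -p 5432 -f appdb.dump appdb
    createdb -h test-box appdb
    pg_restore -h test-box -d appdb appdb.dump

Then set up Slony-I between that copy and a scratch 9.2 instance exactly the way you plan to do it in production.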
This might be a good opportunity to seek professional support from people who have experience dealing with these issues.
Straight backup and restore is obviously out. I also wouldn't consider replication of any kind.
Database mirroring is relatively simple to set up, but it requires real-time connectivity between the two servers, setting up partners and endpoints, and so on. Availability Groups could be an option, but on top of the networking complications you also have to have both servers as members of the same WSFC - which means they must both be in the same domain. That is not a typical setup for a data center move, nor one that could easily be made to work temporarily.
My vote would be for log shipping. The nice thing about this is that you can use the full and log backups you're already taking (right?) and you don't necessarily need real-time connectivity between the two databases - they don't need to know about each other, and you don't need to set up mirroring endpoints, partners, security, etc. You just need a way to get files from the old server to a place where they can be restored on the new server. You can take a full backup well in advance, get it over to the new server, restore it, then apply (possibly differential and) transaction log backups from that point right up to the moment of cut-over. The process is actually quite simple, and there are many tutorials on log shipping available online if you run into any difficulties.
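If it helps, the manual version of that process is only a handful of commands; a rough sketch using sqlcmd, where the database name, server names and paths are all placeholders and the file copy between servers is left up to you:

    # Hypothetical sketch - MyDb, OLDSRV/NEWSRV and the paths are placeholders.
    # On the old server: full backup up front, then periodic log backups.
    sqlcmd -S OLDSRV -E -Q "BACKUP DATABASE MyDb TO DISK = N'D:\backups\MyDb_full.bak' WITH INIT"
    sqlcmd -S OLDSRV -E -Q "BACKUP LOG MyDb TO DISK = N'D:\backups\MyDb_001.trn'"

    # Copy the files across, then restore WITH NORECOVERY on the new server so
    # later log backups can still be applied.
    sqlcmd -S NEWSRV -E -Q "RESTORE DATABASE MyDb FROM DISK = N'E:\staging\MyDb_full.bak' WITH NORECOVERY"
    sqlcmd -S NEWSRV -E -Q "RESTORE LOG MyDb FROM DISK = N'E:\staging\MyDb_001.trn' WITH NORECOVERY"

    # At cut-over: take a final tail-of-the-log backup on the old server, copy
    # and restore it the same way, then bring the new copy online.
    sqlcmd -S NEWSRV -E -Q "RESTORE DATABASE MyDb WITH RECOVERY"

Once that last RESTORE ... WITH RECOVERY completes, the database on the new server is writable and you can repoint the application.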
If the web app is moving with the database: since DNS changes can take a while to propagate, you might want to switch the connection strings on the old web servers to point at the new database server's IP once it is writable, because - even after the cut-over, and even if your TTL settings are tight - clients may continue to hit the old web servers. It all depends on how much respect their providers give your TTLs.
This is by no means the only way, nor necessarily the best way. You should consider Streams and GoldenGate as well, but if at least one application is designed to use packages and to access the data only through those packages, consider the following:
Yes, this is complex, and yes, it would take a lot of time and testing, but given a requirement of zero downtime, that should be expected.