Trace flag 834 causes SQL Server to use large-page memory allocations for the buffer pool on x64 machines, allocated up front at startup. The following blog has more detail, but here are the basics:
Enterprise Edition needs to be installed
8GB of RAM or more needs to be present
Lock Pages in Memory needs to be on.
http://blogs.msdn.com/b/psssql/archive/2009/06/05/sql-server-and-large-pages-explained.aspx
Read the full article, as there are caveats and warnings spread throughout: this can lead to a much longer startup time (or failed starts). A big one is that it needs to allocate contiguous chunks of memory, and I'm not sure how that will work on VMware. Also, if the physical memory in the host is 128GB, I would reduce max server memory some more to make sure there's enough space left for VMware to do its thing.
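Since the large-page allocation happens at startup, TF834 can't be enabled with `DBCC TRACEON`; it has to go into the startup parameters. A rough sketch of checking it afterwards (`xp_readerrorlog` is undocumented but widely used; the search string is just illustrative):

```sql
-- Add -T834 to the SQL Server startup parameters (e.g. via SQL Server
-- Configuration Manager) and restart the service. Then confirm the
-- trace flag is active globally:
DBCC TRACESTATUS (834, -1);

-- Check the error log for large-page messages logged at startup:
EXEC sys.xp_readerrorlog 0, 1, N'large page';
```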
I don't think you can do what you want here - certainly not without changes to PostgreSQL.
Tablespaces are not an independent datastore, and cannot be exchanged between different PostgreSQL instances. Making them so would be a huge change to PostgreSQL's core, with significant impacts on transactional DDL support, etc.
See this related answer I wrote on the topic, explaining some of the issues with attaching a tablespace to a different PostgreSQL instance: How do I access a old saved tablespace after reinstalling postgres?
If you were to create the entire database in a different tablespace then you'd be able to avoid the issues with `pg_class` and TOAST tables ... but you'd still have to deal with the problems with `pg_clog`, `pg_xlog`, transaction visibility, etc.

The `pg_xlog` issues could potentially be solved if you were able to copy the tablespace at a point after a checkpoint, before any further writes were performed, so the tablespace was "clean". PostgreSQL doesn't currently offer any assistance to do this, but I think it'd be possible to add with moderate code changes (at the cost of a stall in the production DB while you flushed it for the checkpoint and snapshotted it) by taking an exclusive lock on the internal xlog insert lock. Possibly do-able even as an extension function.
The `pg_clog` and `pg_control` problem is a lot harder. Transaction IDs are global across the entire PostgreSQL instance. So your tablespace (or whatever other collection of tables less than an entire instance) will have transaction IDs in the future, and/or with different committed/rolled-back statuses in the local `pg_clog` to what they had in the original `pg_clog`. Badness ensues. The only way I can think of to deal with this is to `VACUUM FREEZE` all tables in the database, which will set their `xmin` and `xmax` to `FrozenTransactionId`, an xid defined in the source code that's always visible for all transactions. However, doing this without having unfrozen xids added in parallel requires that you take an `EXCLUSIVE` lock (blocking `INSERT`, `UPDATE` and `DELETE`) on all tables in the database for as long as it takes to `VACUUM FREEZE` and then snapshot the tablespace.
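The freezing step itself looks roughly like this (the table name is hypothetical, and note that `VACUUM` cannot run inside a transaction block, so the blanket `EXCLUSIVE` lock described above would need coordination outside a single session):

```sql
-- Freeze all tuples so their visibility no longer depends on the
-- instance-local pg_clog (table name is illustrative):
VACUUM (FREEZE, VERBOSE) my_table;

-- relfrozenxid in pg_class records how far freezing has advanced:
SELECT relname, relfrozenxid
FROM pg_class
WHERE relname = 'my_table';
```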
So. Maybe it's possible to do what you want, but there's no way it's possible without production impact. You'd have to basically rewrite PostgreSQL to store per-database `pg_clog` and `pg_xlog`, keep a per-database `pg_control`, and use per-database transaction IDs. Then add support for keeping these in a non-default tablespace. Once you'd done that, you'd have the exciting job of making this work with streaming replication and WAL-shipping based PITR, which currently rely on a single WAL stream as an ordered time-line of events. And you'd have to deal with the extra `fsync()` costs, etc, of per-DB WAL. Finally, you'd need to figure out how you were going to make this work with PostgreSQL's shared relations, like `pg_database`, `pg_authid`, etc, that live in the `pg_global` tablespace - they are transactional tables and participate in transactions along with tables in the current database, and you'd have to find a way to keep that working when you now had two separate transaction IDs (global and per-database) that had to commit or roll back atomically.
This is quite an interesting problem, and I suggest raising it on the pgsql-general mailing list. Link back here for context (and so people can explain all the things I got wrong in the above explanation).
Personally I'd really like to see PostgreSQL able to handle individual databases or tablespaces more independently, because I'd really like to be able to use replication to stream only a particular DB, or a subset of a DB. Right now that's just not possible, meaning you must do frustrating things like log-ship change data for high churn transaction tables and low-churn important archive tables together, or separate them into completely separate Pg instances. It doesn't look easy to do; the new logical replication and BDR feature should help, though.
Best Answer
In general, a single instance is more efficient, but has inferior performance isolation between databases.
If you want maximum throughput and best resource utilisation, put them all in a single instance. One busy DB will be able to affect other DBs more easily this way, especially when it comes to things like high rates of small write transactions. Features like async commit, commit delay, etc, help. If you group them all on one instance you'll also have to do any physical backups and streaming replication / PITR for the whole set of DBs; you can't isolate just one DB.
If you want to reduce the impact one busy/overloaded DB has on another DB, isolate them into their own instances and put each on a separate file system, preferably on separate underlying storage. You'll pay a price in wasted `shared_buffers` space, extra `fsync()`s, etc, and you'll find things like user management a major pain, but you'll be able to do streaming replication / physical backups of just one DB and you'll be more able to manage their relative resource allocations. If they're separate instances you can use things like `nice`, `ionice`, etc more effectively and can manage storage quotas somewhat better.
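As a sketch of the separate-instance layout (paths and ports are made up; each `-D` data directory would ideally sit on its own file system):

```shell
# Two fully isolated clusters, one per database (hypothetical paths/ports)
initdb -D /srv/pg/app_a
initdb -D /srv/pg/app_b
pg_ctl -D /srv/pg/app_a -o "-p 5432" -l /srv/pg/app_a.log start
pg_ctl -D /srv/pg/app_b -o "-p 5433" -l /srv/pg/app_b.log start
# Each instance can now be backed up, replicated, nice'd/ionice'd,
# and resource-managed independently of the other.
```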