You should be just fine extending wait_timeout.
Note the maximum values for wait_timeout in MySQL 5.0, 5.1, 5.5, and 5.6:
- Linux : 31536000 seconds (365 days, one year)
- Windows : 2147483 seconds (2^31 milliseconds, or 24 days 20 hours 31 min 23 seconds)
These maximums would not exist if mysqld could not handle them.
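A quick arithmetic check of those maximums (the Windows figure is 2^31 milliseconds, so divide by 1000 to get seconds):

```python
# Verify the platform maximums for wait_timeout quoted above.

linux_max = 365 * 24 * 60 * 60        # one year in seconds
assert linux_max == 31536000

windows_max = (2 ** 31) // 1000       # 2^31 milliseconds, truncated to seconds
assert windows_max == 2147483

# Break the Windows maximum down into days/hours/minutes/seconds.
days, rem = divmod(windows_max, 86400)
hours, rem = divmod(rem, 3600)
minutes, seconds = divmod(rem, 60)
print(days, hours, minutes, seconds)  # 24 20 31 23
```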
Connection pooling only saves overhead relative to the alternative: destroying and reallocating memory for new DB connections on every request. That churn could cause the OS on the DB server to engage in a lot more swapping to juggle memory.
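To illustrate what pooling avoids, here's a minimal pool sketch (the `connect` callable is a stand-in for whatever your driver's connect function is): the expensive setup runs once per pooled slot instead of once per request.

```python
import queue

class ConnectionPool:
    """Minimal fixed-size pool: pay the connection cost `size` times up
    front, then hand the same objects out repeatedly."""

    def __init__(self, connect, size):
        self._idle = queue.Queue()
        for _ in range(size):
            self._idle.put(connect())  # expensive step happens only here

    def acquire(self):
        return self._idle.get()        # blocks if every connection is in use

    def release(self, conn):
        self._idle.put(conn)
```

Every acquire/release cycle after construction reuses an existing object, so no new allocation (or TCP/auth handshake) happens on the hot path.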
I wrote about this over a year ago : How costly is opening and closing of a DB connection?
Lowering the wait_timeout could definitely throw Apache into confusion, because PHP and Apache objects for DB connections remain instantiated while the DB connection terminates early and without warning. In that event, you may have to scan your netstat output for any TIME_WAIT entries on connections over port 3306, or whatever port MySQL is using.
If such entries exist and impede Apache from opening new connections, you would have to clear the TIME_WAITs yourself. See my Feb 01, 2012
ServerFault post MySQL lowering wait_timeout value to lower number of open connections on how to do so.
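As a sketch of that scan (port 3306 and the `netstat -ant` column layout are assumptions; column positions vary by platform, so adjust the field indexes to match yours):

```python
def count_time_wait(netstat_output, port=3306):
    """Count TIME_WAIT sockets whose local address ends in the given port,
    given `netstat -ant`-style lines:
    Proto Recv-Q Send-Q Local-Address Foreign-Address State"""
    suffix = ":%d" % port
    count = 0
    for line in netstat_output.splitlines():
        fields = line.split()
        if len(fields) >= 6 and fields[5] == "TIME_WAIT" \
                and fields[3].endswith(suffix):
            count += 1
    return count
```

Feed it the captured output of `netstat -ant`; a nonzero count on the MySQL port is the symptom described above.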
I don't think you can do what you want here - certainly not without changes to PostgreSQL.
Tablespaces are not an independent datastore, and cannot be exchanged between different PostgreSQL instances. Making them so would be a huge change to PostgreSQL's core, with significant impacts on transactional DDL support, etc.
See this related answer I wrote on the topic, explaining some of the issues with attaching a tablespace to a different PostgreSQL instance: How do I access a old saved tablespace after reinstalling postgres?
If you were to create the entire database in a different tablespace then you'd be able to avoid the issues with pg_class and TOAST tables ... but you'd still have to deal with the problems with pg_clog, pg_xlog, transaction visibility, etc.
pg_xlog issues could potentially be solved if you were able to copy the tablespace at a point after a checkpoint, before any further writes were performed, so the tablespace was "clean". PostgreSQL doesn't currently offer any assistance to do this, but I think it'd be possible to add it with moderate code changes (at the cost of a stall in the production DB when you flushed it for the checkpoint and snapshotted it) by taking an exclusive lock on the internal xlog insert log. Possibly do-able even as an extension function.
The pg_clog and pg_control problem is a lot harder. Transaction IDs are global across the entire PostgreSQL instance. So your tablespace (or whatever other collection of tables less than an entire instance) will have transaction IDs in the future, and/or with different committed/rolled-back statuses in the local pg_clog to what they had in the original pg_clog. Badness ensues. The only way I can think of to deal with this is to VACUUM FREEZE all tables in the database, which will set their xmin and xmax to FrozenTransactionId, an xid defined in the source code that's always visible for all transactions. However, doing this without having unfrozen xids added in parallel requires that you take an EXCLUSIVE lock (blocking INSERT, UPDATE and DELETE) on all tables in the database for as long as it takes to VACUUM FREEZE and then snapshot the tablespace.
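The freeze-then-snapshot sequence above could be outlined like this. This is a hypothetical sketch, not a working recipe: `run` stands in for a driver call that issues one statement in autocommit mode (VACUUM cannot run inside a transaction block, so the blanket EXCLUSIVE locking described above isn't directly expressible here).

```python
def freeze_for_snapshot(run):
    """Hypothetical sequence from the text: freeze every row so xmin/xmax
    become FrozenTransactionId, then checkpoint so the on-disk state is
    clean before the tablespace files are copied."""
    statements = [
        "VACUUM (FREEZE)",  # rewrite xmin/xmax of all rows to the frozen xid
        "CHECKPOINT",       # flush dirty buffers before snapshotting files
    ]
    for sql in statements:
        run(sql)            # each statement issued in autocommit mode
    return statements
```

With a real driver you'd pass something like a cursor's execute method as `run`; here the point is only the ordering of the two steps.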
So. Maybe it's possible to do what you want, but there's no way it's possible without production impact. You'd have to basically rewrite PostgreSQL to store per-database pg_clog and pg_xlog, keep a per-database pg_control, and use per-database transaction IDs. Then add support for keeping these in a non-default tablespace. Once you'd done that, you'd have the exciting job of making this work with streaming replication and WAL-shipping based PITR, which currently relies on a single WAL stream as an ordered time-line of events. And you'd have to deal with the extra fsync() costs, etc, of per-DB WAL. Finally, you'd need to figure out how you were going to make this work with PostgreSQL's shared relations, like pg_database, pg_authid, etc, that live in the pg_global tablespace - they are transactional tables and participate in transactions along with tables in the current database, and you'd have to find a way to keep that working when you now had two separate transaction IDs (global and per-database) that had to commit or roll back atomically.
This is quite an interesting problem, and I suggest raising it on the pgsql-general mailing list. Link back here for context (and so people can explain all the things I got wrong in the above explanation).
Personally I'd really like to see PostgreSQL able to handle individual databases or tablespaces more independently, because I'd really like to be able to use replication to stream only a particular DB, or a subset of a DB. Right now that's just not possible, meaning you must do frustrating things like log-ship change data for high churn transaction tables and low-churn important archive tables together, or separate them into completely separate Pg instances. It doesn't look easy to do; the new logical replication and BDR feature should help, though.
No, it's still only 1GB of total memory.
The buffer is allocated as shared memory; it is allocated only once, but shared between all server processes.
It's basically an "error" in the memory display of the top command, which simply reports the shared memory (which exists only once) against each process.
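A quick arithmetic illustration of that double counting (all numbers are invented for the example):

```python
shared_buffers_mb = 1024                # one shared segment, e.g. 1GB
backends_private_mb = [10, 12, 8, 15]   # per-process private memory (made up)

# What naively summing each process's RES in top suggests: the shared
# segment is counted once per process.
naive_sum = sum(p + shared_buffers_mb for p in backends_private_mb)

# What is actually resident: the segment once, plus private memory.
actual = shared_buffers_mb + sum(backends_private_mb)

print(naive_sum, actual)  # 4141 1069
```

The gap between the two numbers grows with every additional backend, even though total memory use barely changes.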