The short answer here is "trial and error guided by monitoring and performance metrics".
There are some general rules of thumb that should help you find the vague area to start in, but they're very general. The oft-cited guideline "number of CPUs plus number of independent disks" is only an incredibly coarse starting point.
What you really need to do is get robust performance metrics in place for your application. Start recording stats.
There isn't much in the way of integrated tooling for this. There are things like the Nagios `check_postgres` script, Cacti system performance counter logging, the PostgreSQL statistics collector, etc., but not much that puts it all together. Sadly, you'll have to do that bit yourself. For the PostgreSQL side, see the chapter on monitoring in the PostgreSQL manual. Some third-party options exist, like EnterpriseDB's Postgres Enterprise Monitor.
For the application-level metrics mentioned here you will want to record them in shared data structures or in an external non-durable DB like Redis and aggregate them either as you record them or before you write them to your PostgreSQL DB. Trying to log directly to Pg will distort your measurements with the overhead created by recording the measurements and make the problem worse.
The simplest option is probably a singleton in each app server that you use to record application stats. You probably want to keep a constantly updating min, max, n, total and mean; that way you don't have to store each stat point, just the aggregates. This singleton can write its aggregate stats to Pg every x minutes, a low enough rate that the performance impact will be minimal.
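A minimal sketch of such an aggregating singleton in Python (the class and metric names are illustrative, not from any particular framework; a real deployment would add a timer thread that calls `flush()` and writes the result to Pg):

```python
import threading

class StatsAggregator:
    """Process-wide singleton keeping running aggregates per metric,
    so only (min, max, n, total, mean) ever get written to Postgres."""
    _instance = None
    _lock = threading.Lock()

    def __new__(cls):
        with cls._lock:
            if cls._instance is None:
                cls._instance = super().__new__(cls)
                cls._instance._stats = {}
            return cls._instance

    def record(self, metric, value):
        # Cheap in-memory update; no DB round trip per measurement.
        with self._lock:
            s = self._stats.get(metric)
            if s is None:
                self._stats[metric] = {"min": value, "max": value,
                                       "n": 1, "total": value}
            else:
                s["min"] = min(s["min"], value)
                s["max"] = max(s["max"], value)
                s["n"] += 1
                s["total"] += value

    def flush(self):
        """Return and reset aggregates; call this every x minutes
        and INSERT the result into your stats table."""
        with self._lock:
            out = {m: dict(s, mean=s["total"] / s["n"])
                   for m, s in self._stats.items()}
            self._stats = {}
            return out
```

Because only the aggregates cross the wire, the DB sees one small write per flush interval rather than one per request.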
Start with:
- What's the request latency? In other words, how long does the app take from receiving a client request until it responds? Record this in aggregate over a time period, rather than as individual records, and group it by request type; say, by page.
- What's the database access delay for each query or query type the app executes? How long does it take from asking the DB for information / storing information until it's done and can move on to the next task? Again, aggregate these stats in the application and only write the aggregate info to the DB.
- What's your throughput like? In any given x-minute window, how many queries of each major class does the DB service for your app?
- For that same x-minute window, how many client requests were there?
- Sampling every few seconds and aggregating over the same x-minute windows in the DB, how many DB connections were there? How many were idle? How many were active? In INSERTs? UPDATEs? SELECTs? DELETEs? How many transactions were there over that period? See the statistics collector documentation.
- Again sampling and aggregating over the same interval, what were the host system's performance metrics like? How many read and write disk IOs per second? Megabytes per second of disk reads and writes? CPU utilisation? Load average? RAM use?
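On 9.2 and newer, a sampling pass over the statistics collector views might look like this (a sketch; run it every few seconds from your sampler and aggregate the results in the application, per the advice above):

```sql
-- Connection counts by state (idle, active, idle in transaction, ...).
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;

-- Cumulative commit/rollback and tuple-change counters for this DB;
-- diff successive samples to get per-interval rates.
SELECT xact_commit, xact_rollback,
       tup_inserted, tup_updated, tup_deleted
FROM pg_stat_database
WHERE datname = current_database();
```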
You can now start learning about your app's performance by correlating the data, graphing it, etc. You'll start to see patterns, start to find bottlenecks.
You might learn that your system is bottlenecked on `INSERT`s and `UPDATE`s at high transaction rates, despite quite low disk I/O in megabytes per second. This would be a hint that you need to improve your disk flush performance with a battery-backed write-back caching RAID controller or some high-quality power-protected SSDs. You could also use `synchronous_commit = off` if it's OK to lose a few transactions on a server crash, and/or a `commit_delay`, to take some of the syncing load off.
When you graph your transactions per second against the number of concurrent connections and correct for the varying request rate the application is seeing, you'll be able to get a better idea of where your throughput sweet spot is.
If you don't have fast flushing storage (BBU RAID or fast durable SSDs) you won't want more than a fairly small number of actively writing connections, maybe at most 2x the number of disks you have, probably fewer depending on RAID arrangement, disk performance, etc. In this case it isn't even worth trial and error; just upgrade your storage subsystem to one with fast disk flushes.
See `pg_test_fsync` for a tool that'll help you determine whether this might be a problem for you. Most PostgreSQL packages install it as part of contrib, so you shouldn't need to compile it. If you get less than a couple of thousand ops/second in `pg_test_fsync`, you urgently need to upgrade your storage system. My SSD-equipped laptop gets 5000-7000. My workstation at work, with a 4-disk RAID 10 array of 7200rpm SATA disks in write-through (non-write-caching) mode, gets about 80 ops/second in `fdatasync`, down to 20 ops/second for `fsync()`; it's hundreds of times slower. Compare: laptop with SSD vs workstation with write-through (non-write-caching) RAID 10. This laptop's SSD is cheap, and I don't necessarily trust it to flush its write cache on power loss; I keep good backups and wouldn't use it for data I care about. Good-quality SSDs perform just as well, if not better, and are write-durable.
In the case of your application, I strongly advise you to look into:
- A good storage subsystem with fast flushes. I cannot stress this enough. Good quality power-fail-safe SSDs and/or a RAID controller with power-protected write-back cache.
- Using `UNLOGGED` tables for data you can afford to lose. Periodically aggregate it into logged tables. For example, keep games-in-progress in unlogged tables, and write the scores to ordinary durable tables.
- Using a `commit_delay` (less useful with fast-flushing storage - hint)
- Turning off `synchronous_commit` for transactions you can afford to lose (less useful with fast-flushing storage - hint hint)
- Partitioning tables, especially tables where data "ages out" and is cleaned up. Instead of deleting from a partitioned table, drop a partition.
- Partial indexes
- Reducing the number of indexes you create. Every index has a write cost.
- Batching work into bigger transactions
- Using read-only hot standby replicas to take the read load off the main DB
- Using a caching layer like memcached or Redis for data that changes less often or can afford to be stale. You can use `LISTEN` and `NOTIFY` to perform cache invalidation using triggers on PostgreSQL tables.
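As a sketch of that last point, a trigger can `NOTIFY` a channel your cache-invalidation listener subscribes to (the `scores` table, its `id` column, and the channel name are all hypothetical):

```sql
-- Fire a notification whenever a cached row changes.
CREATE OR REPLACE FUNCTION notify_cache_invalidation() RETURNS trigger AS $$
BEGIN
    -- Payload identifies which table/row the listener should evict.
    PERFORM pg_notify('cache_invalidation',
                      TG_TABLE_NAME || ':' || NEW.id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER scores_cache_invalidate
    AFTER INSERT OR UPDATE ON scores
    FOR EACH ROW EXECUTE PROCEDURE notify_cache_invalidation();
```

A worker holding a long-lived connection runs `LISTEN cache_invalidation;` and evicts the named keys from memcached/Redis as notifications arrive.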
If in doubt: http://www.postgresql.org/support/professional_support/
No, I don't think it's safe to assume locks from dead/vanished clients are released in a bounded and deterministic amount of time with all DBMSes and drivers. You'll need to investigate each configuration separately.
In the case of PostgreSQL you're generally but not always OK if you have TCP keepalives set quite aggressively, because:
- If the whole client application process dies but the client host stays up, the host's kernel will `RST` the TCP connection as part of process cleanup;
- If the client host dies entirely, it'll stop responding to TCP keepalives; and
- If the client host remains alive but the network fails in one or both directions between client and server, it'll stop responding to TCP keepalives.
However, there are a few cases that will not be handled:
- Connection pool bugs that result in a connection being returned to the pool with a transaction still open and holding locks;
- Connection pools that don't `DISCARD ALL` and thus fail to release and reset session-level resources like advisory locks (if you use them);
- App server based applications that 'leak' connections with open transactions so the connection pool can never reclaim them;
- Badly written programs that intentionally hold a transaction open during user "think time" like a dialog box or data entry window, where the user might go away and make a coffee ... or go on holiday for a month;
- Cases where the application process remains in existence but is totally non-responsive because it has been `SIGSTOP`ped, paused by a debugger, hit an internal threading deadlock, etc. The OS keeps responding to TCP keepalives, but the app won't respond to Pg protocol messages or advance its work.
In the case of PostgreSQL you can use active lock monitoring to scan for and terminate long-running transactions that haven't done anything in a while. In particular, you can deal with `<IDLE> in transaction` sessions by scanning `pg_stat_activity` (though it's only possible to do this reliably and easily in 9.2 and newer). With a bit more effort you can use `pg_locks` to watch for queries blocked on a lock for more than x seconds and kill the session holding the lock, though this can make it hard to run some DDL, like index creation.
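For example, on 9.2+ a watchdog job might terminate sessions stuck idle in a transaction (the five-minute threshold here is an arbitrary example; pick one that suits your workload):

```sql
-- Kill backends that have sat idle inside an open transaction too long,
-- releasing whatever locks they were holding.
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND state_change < now() - interval '5 minutes';
```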
What you really need is application level keepalives, where the app says "Yup, I'm alive and responsive". These are rather harder to implement, though.
One thing that will help is that both PgBouncer and PgPool-II (external connection pools for PostgreSQL) support controls for session and transaction timeouts. We've wanted to implement similar options in core PostgreSQL for some time, but nobody has come up with a design robust enough to handle all the corner cases, so for now your best bet is an external pooler. You can do this even if you're also using an application-level connection pool.
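For instance, a `pgbouncer.ini` fragment using those timeout controls might look like this (the values are examples, not recommendations):

```ini
[pgbouncer]
; Close server connections left idle inside a transaction (seconds).
idle_transaction_timeout = 300
; Drop client connections that have been idle for an hour.
client_idle_timeout = 3600
```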
On the good news front, PostgreSQL automatically detects and breaks deadlocks between transactions, so one thing you don't have to worry about much is deadlocks at the SQL level when using PostgreSQL.
Best Answer
I don't think these are issued by PgPool. I think they are instigated by your application (which I am guessing is written in PHP and running over some sort of ORM).

The first entry is `DEALLOCATE`, which effectively tells the server to free the memory used by a prepared statement. These look like they are issued through some PDO module. The second and third queries look like ORM-related mapping queries. Your application is probably unaware that these queries are being issued by the framework it runs on; such queries only really make sense in an ORM or similar environment.