The short answer here is "trial and error guided by monitoring and performance metrics".
There are some general rules of thumb that should help you find the rough area to start in, but they're very general. The broad guideline "number of CPUs plus number of independent disks" is often cited, but it's only an incredibly coarse starting point.
What you really need to do is get robust performance metrics in place for your application. Start recording stats.
There isn't much in the way of integrated tooling for this. There are things like the Nagios check_postgres script, Cacti system performance counter logging, the PostgreSQL statistics collector, etc., but there isn't much that puts it all together. Sadly, you'll have to do that bit yourself. For the PostgreSQL side, see monitoring in the PostgreSQL manual. Some third-party options exist, like EnterpriseDB's Postgres Enterprise Monitor.
For the application-level metrics discussed below, you will want to record them in shared data structures or in an external non-durable DB like Redis, and aggregate them either as you record them or before you write them to your PostgreSQL DB. Trying to log directly to Pg will distort your measurements with the overhead created by recording the measurements, and make the problem worse.
The simplest option is probably a singleton in each app server that you use to record application stats. You probably want to keep a constantly updating min, max, n, total and mean; that way you don't have to store each stat point, just the aggregates. This singleton can write its aggregate stats to Pg every x minutes, a low enough rate that the performance impact will be minimal.
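To make the idea concrete, here's a minimal sketch of such a singleton in Python. The class, method, and metric names are mine, invented for illustration, not taken from any framework:

```python
import threading

class StatsAggregator:
    """One of these per app server, holding running aggregates per metric.

    Hypothetical sketch: only the aggregates (n, total, min, max) are
    kept, never the individual stat points.
    """

    def __init__(self):
        self._lock = threading.Lock()
        self._stats = {}  # name -> [n, total, min, max]

    def record(self, name, value):
        """Fold one observation into the running aggregate for `name`."""
        with self._lock:
            s = self._stats.setdefault(name, [0, 0.0, value, value])
            s[0] += 1          # n
            s[1] += value      # total
            s[2] = min(s[2], value)
            s[3] = max(s[3], value)

    def snapshot_and_reset(self):
        """Return {name: (n, total, min, max, mean)} and clear the stats.

        A background thread can call this every x minutes and write the
        result to PostgreSQL, so only aggregates ever hit the DB.
        """
        with self._lock:
            out = {name: (n, total, lo, hi, total / n)
                   for name, (n, total, lo, hi) in self._stats.items()}
            self._stats.clear()
            return out
```

Your request handlers would then just call something like stats.record("page:/home", request_seconds), which is cheap enough not to skew the numbers you're measuring.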
Start with:
- What's the request latency? In other words, how long does the app take from getting a request from the client until it responds to the client? Record this in aggregate over a time period, rather than as individual records, and group it by request type; say, by page.
- What's the database access delay for each query or query type the app executes? How long does it take from asking the DB for information / storing information until it's done and the app can move on to the next task? Again, aggregate these stats in the application and only write the aggregate info to the DB.
- What's your throughput like? In any given x minutes, how many queries of each major class your app executes get serviced by the DB?
- For that same time range of x minutes, how many client requests were there?
- Sampling every few seconds and aggregating over the same x-minute windows in the DB, how many DB connections were there? How many of them were idle? How many were active? In INSERTs? UPDATEs? SELECTs? DELETEs? How many transactions were there over that period? See the statistics collector documentation.
- Again sampling and aggregating over the same time interval, what were the host system's performance metrics like? How many read and how many write disk IOs/second? Megabytes per second of disk reads and writes? CPU utilisation? Load average? RAM use?
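The connection-state sampling above can be sketched like this. pg_stat_activity and its state column are real PostgreSQL, but the function name and the window shape are my own illustration; the polling and DB wiring (psycopg2 or similar) are left out:

```python
from collections import defaultdict

# Run this every few seconds against the DB you're measuring; each poll
# yields one {state: count} sample (states like 'active', 'idle', ...).
SAMPLE_SQL = """
    SELECT state, count(*)
    FROM pg_stat_activity
    GROUP BY state
"""

def summarize_window(samples):
    """Average per-state connection counts over one aggregation window.

    `samples` is a list of {state: count} dicts, one per poll. Returns
    {state: mean count}, which is what you'd write to your stats table
    every x minutes instead of storing every raw sample.
    """
    totals = defaultdict(float)
    for sample in samples:
        for state, count in sample.items():
            totals[state] += count
    n = len(samples) or 1
    return {state: total / n for state, total in totals.items()}
```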
You can now start learning about your app's performance by correlating the data, graphing it, etc. You'll start to see patterns, start to find bottlenecks.
You might learn that your system is bottlenecked on INSERTs and UPDATEs at high transaction rates, despite quite low disk I/O in megabytes per second. This would be a hint that you need to improve your disk flush performance with a battery-backed write-back caching RAID controller or some high-quality power-protected SSDs. You could also use synchronous_commit = off if it's OK to lose a few transactions on a server crash, and/or a commit_delay, to take some of the syncing load off.
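In postgresql.conf those settings look something like the following. The values here are illustrative only, not recommendations; check the manual for your version before copying any of them:

```
# postgresql.conf -- illustrative values, tune against your own metrics
synchronous_commit = off   # commits may be lost on crash, but never corrupt data
commit_delay = 5000        # microseconds to wait before flush, to group commits
commit_siblings = 5        # only delay if at least this many txns are active
```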
When you graph your transactions per second against the number of concurrent connections and correct for the varying request rate the application is seeing, you'll be able to get a better idea of where your throughput sweet spot is.
If you don't have fast flushing storage (BBU RAID or fast durable SSDs) you won't want more than a fairly small number of actively writing connections, maybe at most 2x the number of disks you have, probably fewer depending on RAID arrangement, disk performance, etc. In this case it isn't even worth trial and error; just upgrade your storage subsystem to one with fast disk flushes.
See pg_test_fsync for a tool that'll help you determine whether this might be a problem for you. Most PostgreSQL packages install this tool as part of contrib, so you shouldn't need to compile it. If you get less than a couple of thousand ops/second in pg_test_fsync, you urgently need to upgrade your storage system. My SSD-equipped laptop gets 5000-7000. My workstation at work, with a 4-disk RAID 10 array of 7200rpm SATA disks and a write-through (non-write-caching) controller, gets about 80 ops/second for fdatasync, down to 20 ops/second for fsync(); it's hundreds of times slower. This laptop's SSD is cheap, and I don't necessarily trust it to flush its write cache on power loss; I keep good backups and wouldn't use it for data I care about. Good-quality SSDs perform just as well, if not better, and are write-durable.
In the case of your application, I strongly advise you to look into:
- A good storage subsystem with fast flushes. I cannot stress this enough. Good quality power-fail-safe SSDs and/or a RAID controller with power-protected write-back cache.
- Using UNLOGGED tables for data you can afford to lose. Periodically aggregate it into logged tables; for example, keep games-in-progress in unlogged tables, and write the scores to ordinary durable tables.
- Using a commit_delay (less useful with fast-flushing storage - hint)
- Turning off synchronous_commit for transactions you can afford to lose (less useful with fast-flushing storage - hint hint)
- Partitioning tables, especially tables where data "ages out" and is cleaned up. Instead of deleting from a partitioned table, drop a partition.
- Partial indexes
- Reducing the number of indexes you create. Every index has a write cost.
- Batching work into bigger transactions
- Using read-only hot standby replicas to take the read load off the main DB
- Using a caching layer like memcached or Redis for data that changes less often or can afford to be stale. You can use LISTEN and NOTIFY to perform cache invalidation using triggers on PostgreSQL tables.
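The LISTEN/NOTIFY invalidation idea in the last point can be sketched as follows. The trigger is real PostgreSQL syntax, but the table, channel, and function names are made up for the example; the client side shows only the pure invalidation logic, with the psycopg2 LISTEN wiring described in comments:

```python
# Server side: a trigger that broadcasts the changed key on a channel.
# Table/channel/function names here are illustrative, not from the answer.
TRIGGER_SQL = """
CREATE OR REPLACE FUNCTION notify_score_change() RETURNS trigger AS $$
BEGIN
    PERFORM pg_notify('cache_invalidation', NEW.player_id::text);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER scores_cache_invalidation
    AFTER INSERT OR UPDATE ON scores
    FOR EACH ROW EXECUTE PROCEDURE notify_score_change();
"""

def handle_notification(cache, payload):
    """Client side: drop the stale cache entry named by a NOTIFY payload.

    In a real app you'd issue LISTEN cache_invalidation on a psycopg2
    connection and call this for each notification the driver delivers.
    """
    cache.pop(payload, None)
```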
If in doubt: http://www.postgresql.org/support/professional_support/
Best Answer
There really isn't enough to go on here. 10 concurrent connections isn't a lot for most setups.
The best advice I can give is to start simple, and only look at more complex solutions when performance starts to become an issue, unless you already know it will be one (say, 1000 concurrent users and millions of queries from the web every day). Otherwise you build in complexity you may never need. However, by all means monitor, and keep an eye out for scale-out solutions that work for you (I recommend watching Slony and Postgres-XC in this regard, in addition to other solutions). They are at the upper end of complexity, but if you need something that can do anything you ask of it, they are the solutions.
As it is though, generally for the number of concurrent connections you are talking about, there is no need to consider something complicated until you start to need it. Then you will have a better idea of what you need in a solution.
Premature optimization is the root of all evil.