The short answer here is "trial and error guided by monitoring and performance metrics".
There are some general rules of thumb that should help you find the rough area to start in, but they're very general. The broad guideline "number of CPUs plus number of independent disks" is often cited, but it's only an incredibly coarse starting point.
What you really need to do is get robust performance metrics in place for your application. Start recording stats.
There isn't much in the way of integrated tooling for this. There are things like the nagios `check_postgres` script, Cacti system performance counter logging, the PostgreSQL statistics collector, etc., but there isn't much that puts it all together. Sadly, you'll have to do that bit yourself. For the PostgreSQL side, see monitoring in the PostgreSQL manual. Some third-party options exist, like EnterpriseDB's Postgres Enterprise Monitor.
For the application-level metrics below, you will want to record them in shared data structures or in an external non-durable DB like Redis, and aggregate them either as you record them or before you write them to your PostgreSQL DB. Trying to log directly to Pg will distort your measurements with the overhead of recording them, and make the problem worse.
The simplest option is probably a singleton in each app server that you use to record application stats. You probably want to keep a constantly updating min, max, n, total and mean; that way you don't have to store each stat point, just the aggregates. This singleton can write its aggregate stats to Pg every x minutes, a low enough rate that the performance impact will be minimal.
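A minimal sketch of such a running aggregator (class and method names are my own; a real one would need locking if shared across threads, and a timer to flush to Pg every few minutes):

```python
class RunningStats:
    """Constant-memory aggregate of one metric: min, max, n, total, mean.

    Records only the aggregates, never the individual stat points.
    """

    def __init__(self):
        self.n = 0
        self.total = 0.0
        self.min = None
        self.max = None

    def record(self, value):
        """Fold one measurement into the running aggregates."""
        self.n += 1
        self.total += value
        self.min = value if self.min is None else min(self.min, value)
        self.max = value if self.max is None else max(self.max, value)

    @property
    def mean(self):
        return self.total / self.n if self.n else None

    def flush(self):
        """Return the current aggregates (e.g. to write to Pg) and reset."""
        snapshot = {"n": self.n, "total": self.total,
                    "min": self.min, "max": self.max, "mean": self.mean}
        self.__init__()
        return snapshot
```

Each app server would keep one `RunningStats` per metric (request latency per page, DB access delay per query type, and so on) and write only the flushed snapshots to PostgreSQL.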
Start with:
- What's the request latency? In other words, how long does the app take from receiving a request from the client until it responds to the client? Record this in aggregate over a time period, rather than as individual records. Group it by request type; say, by page.
- What's the database access delay for each query or query type the app executes? How long does it take from asking the DB for information / storing information until it's done and can move on to the next task? Again, aggregate these stats in the application and only write the aggregate info to the DB.
- What's your throughput like? In any given x minutes, how many queries of each major class your app executes get serviced by the DB?
- For that same time range of x minutes, how many client requests were there?
- Sampling every few seconds and aggregating over the same x-minute windows in the DB, how many DB connections were there? How many of them were idle? How many were active? In inserts? Updates? Selects? Deletes? How many transactions were there over that period? See the statistics collector documentation.
- Again sampling and aggregating over the same time interval, what were the host system's performance metrics like? How many read and how many write disk IOs per second? Megabytes per second of disk reads and writes? CPU utilisation? Load average? RAM use?
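One wrinkle when sampling: the PostgreSQL statistics views expose cumulative counters (e.g. `xact_commit` in `pg_stat_database`), so turning two samples into a per-second rate means taking a delta. A small sketch (function name is my own):

```python
def rate_per_second(prev_count, curr_count, interval_seconds):
    """Convert two samples of a cumulative counter into a rate.

    Stats-view counters only grow until they are reset, so a negative
    delta means the counters were reset between samples; report 0.0
    rather than a negative rate.
    """
    delta = curr_count - prev_count
    if delta < 0 or interval_seconds <= 0:
        return 0.0
    return delta / interval_seconds
```

The same delta logic applies to OS-level counters like disk IOs and bytes read/written.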
You can now start learning about your app's performance by correlating the data, graphing it, etc. You'll start to see patterns, start to find bottlenecks.
You might learn that your system is bottlenecked on `INSERT`s and `UPDATE`s at high transaction rates, despite quite low disk I/O in megabytes per second. This would be a hint that you need to improve your disk flush performance with a battery-backed write-back caching RAID controller or some high-quality power-protected SSDs. You could also use `synchronous_commit = off` if it's OK to lose a few transactions on a server crash, and/or a `commit_delay`, to take some of the syncing load off.
When you graph your transactions per second against the number of concurrent connections and correct for the varying request rate the application is seeing, you'll be able to get a better idea of where your throughput sweet spot is.
If you don't have fast flushing storage (BBU RAID or fast durable SSDs) you won't want more than a fairly small number of actively writing connections, maybe at most 2x the number of disks you have, probably fewer depending on RAID arrangement, disk performance, etc. In this case it isn't even worth trial and error; just upgrade your storage subsystem to one with fast disk flushes.
See `pg_test_fsync` for a tool that'll help you determine whether this might be a problem for you. Most PostgreSQL packages install this tool as part of contrib, so you shouldn't need to compile it. If you get less than a couple of thousand ops/second in `pg_test_fsync` you urgently need to upgrade your storage system. My SSD-equipped laptop gets 5000-7000. My workstation at work, with a 4-disk RAID 10 array of 7200rpm SATA disks and a write-through (non-write-caching) controller, gets about 80 ops/second for `fdatasync`, down to 20 ops/second for `fsync()`; it's hundreds of times slower. This laptop's SSD is cheap, and I don't necessarily trust it to flush its write cache on power loss; I keep good backups and wouldn't use it for data I care about. Good-quality SSDs perform just as well, if not better, and are write-durable.
In the case of your application, I strongly advise you to look into:
- A good storage subsystem with fast flushes. I cannot stress this enough. Good quality power-fail-safe SSDs and/or a RAID controller with power-protected write-back cache.
- Using `UNLOGGED` tables for data you can afford to lose. Periodically aggregate it into logged tables. For example, keep games-in-progress in unlogged tables, and write the scores to ordinary durable tables.
- Using a `commit_delay` (less useful with fast-flushing storage - hint)
- Turning off `synchronous_commit` for transactions you can afford to lose (less useful with fast-flushing storage - hint hint)
- Partitioning tables, especially tables where data "ages out" and is cleaned up. Instead of deleting from a partitioned table, drop a partition.
- Partial indexes
- Reducing the number of indexes you create. Every index has a write cost.
- Batching work into bigger transactions
- Using read-only hot standby replicas to take the read load off the main DB
- Using a caching layer like memcached or redis for data that changes less often or can afford to be stale. You can use `LISTEN` and `NOTIFY` to perform cache invalidation using triggers on PostgreSQL tables.
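To illustrate the batching point above: committing once per chunk instead of once per row removes most of the flush overhead, since each `COMMIT` forces a WAL flush on the server. A sketch against a hypothetical DB-API-style connection (the table, columns, and `batch_size` here are my own illustrative choices, not from the question):

```python
def insert_in_batches(conn, rows, batch_size=500):
    """Insert rows committing once per batch instead of once per row.

    With one commit per 500 rows, the per-row flush cost is roughly
    1/500th of what row-at-a-time autocommit would pay.
    """
    cur = conn.cursor()
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        # Hypothetical target table; substitute your own schema.
        cur.executemany(
            "INSERT INTO events (ts, payload) VALUES (%s, %s)", batch)
        conn.commit()
```

The trade-off: if the server crashes mid-batch, the whole uncommitted batch is rolled back, so size batches according to how much rework you can tolerate.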
If in doubt: http://www.postgresql.org/support/professional_support/
You are not doing anything wrong per se, and the documentation is not wrong. But the CentOS initscript doesn't support doing things this way, so you have to change the initscript to not check for these files; PostgreSQL will check for them itself and simply try to start the database even if they are not there.
So to test it out, comment out the `if`, `then` and the whole `else` block, and just let it run the ... part of the script. PostgreSQL should read the settings correctly and start without a problem. And you might, of course, inform the people responsible for the CentOS package that they should do this differently. If it doesn't run, you should get a reasonable error message from PostgreSQL itself.
There is no priority to choose because they have different purposes.
`password_encryption` in postgresql.conf determines how to hash a new password when a password is changed or a new user is created with a password.

The METHOD field in `pg_hba.conf` determines what kind of authentication scheme is used when a client matching the rules attempts to connect.

When connecting with a client that does not support SCRAM authentication:
If the first line that matches in `pg_hba.conf` for this connection attempt has, in the METHOD field:

- `scram-sha-256`: the connection will be rejected.
- `md5`, and the password of this account is stored with an md5 hash (independently of `password_encryption`): the connection will succeed.
- `md5`, and the password of this account is stored with a scram-sha-256 hash (independently of `password_encryption`): the connection will be rejected.

A superuser can check what kind of password is assigned to existing accounts by looking at the hashed passwords in the system table `pg_catalog.pg_authid`.
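To interpret what you see in `pg_catalog.pg_authid`: md5 hashes are stored as the literal prefix `md5` followed by 32 hex digits, while SCRAM verifiers start with `SCRAM-SHA-256$`. A small helper sketch (the function name is my own):

```python
def password_hash_type(rolpassword):
    """Classify a pg_authid.rolpassword value by its stored format.

    'md5' + 32 hex digits -> md5 hash; 'SCRAM-SHA-256$...' -> SCRAM
    verifier; NULL means no password is set for the role.
    """
    if rolpassword is None:
        return "none"
    if rolpassword.startswith("SCRAM-SHA-256$"):
        return "scram-sha-256"
    if rolpassword.startswith("md5") and len(rolpassword) == 35:
        return "md5"
    return "unknown"
```

An account classified as `md5` will keep authenticating with an md5 hash until its password is re-set with `password_encryption = 'scram-sha-256'` in effect.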