My PostgreSQL server is using 100% CPU. I am trying to figure out whether individual queries are slow, or whether I am simply hammering the server with too many requests in the first place.
Is there some sort of profiler I could attach that would let me know:
- Which queries account for the most CPU time, across multiple connections?
- For the above queries, what is the average runtime per query?
Best Answer
If you are on Linux or a relative, you can use `top` to see what the top processes are. If you hit 'c' within `top`, it will show the process titles, which will tell you whether it is selects, updates, inserts, or other maintenance tasks taking up the CPU (or whether you have been hacked and are now mining cryptocurrency for someone, which seems to be common lately). You can also see whether a small number of processes are each using about one full CPU, or a large number are all fighting over their slice of the CPU.

You can use the `pg_stat_statements` extension to keep track of how many times each query was run and how long it took. Note that it reports wall-clock time, not CPU time. If you turn on `track_io_timing`, you also get the time spent reading from disk, which is good to know but still doesn't give you CPU time, as some time could go to waiting on locks or waiting for a slice of CPU.

If the problem is more intermittent and hard to catch in the act, then
`auto_explain` and `log_min_duration_statement` could help.

For more advanced profiling, you can use system tools like
`perf top`, `strace -y -p <pid>`, and maybe even `gdb`. These usually require quite a bit of knowledge of the PostgreSQL internals to interpret. They are mostly used to investigate ways to improve PostgreSQL itself (i.e. for future versions), but I often find they can help diagnose tuning problems as well. You will have to install the debug symbols to get the most out of `perf` or `gdb`.
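Once `pg_stat_statements` is loaded (it must be listed in `shared_preload_libraries` and created with `CREATE EXTENSION pg_stat_statements`), a query like the following lists the statements with the highest cumulative runtime. This is a sketch against the PostgreSQL 13+ column names; older releases call the columns `total_time` and `mean_time` instead:

```sql
-- Top 10 statements by cumulative wall-clock time
SELECT query,
       calls,                                        -- number of executions
       round(total_exec_time::numeric)    AS total_ms, -- total wall-clock time, ms
       round(mean_exec_time::numeric, 2)  AS mean_ms,  -- average per call, ms
       rows
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```

`mean_ms` answers the "average runtime per query" part of the question directly; a very high `calls` count with a low mean points at request volume rather than individually slow queries.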
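For the intermittent case, a minimal `postgresql.conf` sketch combining the settings mentioned above (the 500 ms thresholds are purely illustrative, and changing `shared_preload_libraries` requires a server restart):

```
# Log any statement that runs longer than 500 ms
log_min_duration_statement = 500ms

# Load auto_explain (and pg_stat_statements) at server start
shared_preload_libraries = 'auto_explain, pg_stat_statements'

# Log the execution plan of any statement slower than 500 ms
auto_explain.log_min_duration = 500ms
auto_explain.log_analyze = on   # include actual timings/row counts in the plan
```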
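To see at a glance whether the load is many concurrent requests or a few long runners, the built-in `pg_stat_activity` view (not covered above, but closely related) shows every backend's current state from inside the database, much like `top` does from outside:

```sql
-- How many backends are actively executing vs. idle?
SELECT state, count(*)
FROM pg_stat_activity
GROUP BY state;

-- Longest-running active statements right now
SELECT pid,
       now() - query_start AS runtime,
       left(query, 60)     AS query_head
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY runtime DESC;
```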