PostgreSQL – Handling Memory Allocation Issues When Forking New Processes

postgresql, postgresql-performance

I've seen many articles and videos that say things like the following when comparing Postgres to MySQL.

Postgres allocates a significant amount of memory (about 10MB) when it
forks a new process for each connection. This causes bloated memory
usage and effectively eats away at speed. Thus, it sacrifices speed
for data integrity and standards compliance. For a simple
implementation, then, Postgres would be a poor choice! – Sumo Logic

Every time I read or hear that somewhere, there's no context about what it really means or whether there is a way to handle it. What are specific ways to deal with this type of problem in PostgreSQL? Is it overcome by using connection pools?

Best Answer

Interesting to hear that 10MB is "a significant amount of memory".

A database is not a web server, which is optimized for serving lots of short-lived connections. Each PostgreSQL backend builds up a cache of catalog data over its lifetime, so a long-lived connection gets more efficient as that cache warms up; the memory is doing useful work, not just sitting there.
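If you want to verify the numbers yourself, PostgreSQL 14 and later expose a session's memory contexts through the pg_backend_memory_contexts view. Below is a minimal sketch, assuming psycopg2 and a placeholder DSN, that reports how much memory the current connection's catalog cache (CacheMemoryContext) is using:

```python
# Minimal sketch: report the current backend's catalog cache size.
# Assumes PostgreSQL 14+ (pg_backend_memory_contexts view) and a
# placeholder DSN; adjust for your environment.
import psycopg2

conn = psycopg2.connect("dbname=app user=app host=localhost")
try:
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT name, used_bytes
            FROM pg_backend_memory_contexts
            WHERE name = 'CacheMemoryContext'
            """
        )
        print(cur.fetchone())  # e.g. ('CacheMemoryContext', 524288)
finally:
    conn.close()
```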

That is why you use a connection pool, so that all your short database requests are handled by a small number of persistent database connections.
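As an illustration, here is a minimal sketch of application-side pooling using psycopg2's built-in ThreadedConnectionPool; the pool limits and DSN are placeholder values, and a server-side pooler such as PgBouncer is a common alternative:

```python
# Minimal sketch: reuse a small set of persistent connections instead
# of opening a new backend process per request. Pool sizes and DSN
# are placeholders; adjust for your workload.
from psycopg2.pool import ThreadedConnectionPool

pool = ThreadedConnectionPool(
    minconn=2,    # keep a couple of warm backends ready
    maxconn=10,   # cap the number of server processes
    dsn="dbname=app user=app host=localhost",
)

def fetch_one(query, params=None):
    conn = pool.getconn()   # borrow a persistent connection
    try:
        with conn.cursor() as cur:
            cur.execute(query, params)
            return cur.fetchone()
    finally:
        pool.putconn(conn)  # return it to the pool, don't close it
```

Because each pooled connection is reused across many requests, the roughly 10MB per-backend cost is paid once per pool slot rather than once per request.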

I doubt that this is specific to PostgreSQL: other databases benefit from connection pools as well, and some even have one built into the server. So I would treat the quoted statement as FUD from a competitor who cannot come up with anything better than repeating the old myth that PostgreSQL is slow and complicated.