PostgreSQL / Linux: a way to have DNS forward to a port/cluster

Tags: dns, linux, postgresql

AWS EC2 instance, Linux (CentOS)
Host: mydb, 10.10.10.10
3 clusters: ports 5432, 5433, 5434

What I'd like to see is:
db2.mydb or mydb/db2 connects/forwards to cluster 2 (5433).

Does anything in Linux or PostgreSQL come to mind? In MSSQL I'd have named instances as an option.

To clarify the needs a little better:
These new instances (clusters) will be used by various people. QA as a team or individually, Dev as a team or individually. There could be a need for 3, 10 or 30 clusters at any given time. Some clusters will remain active for weeks and others for hours.

A friendly name is purely for the human element.
I (as the DBA) would rather see a list of 20 names than 20 port numbers. I could build a data-driven page to translate for me; I'd just need to track the mapping in a table.
The clusters will be used by people and in connection strings for applications.
The friendly name is self-documenting, so there's no confusion about what port 6221 means in a file. Last week port 6221 was a dev sandbox; today it's a special hotfix branch.
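A minimal sketch of such a tracking table, assuming the mapping lives in PostgreSQL itself (the table and column names are hypothetical):

```sql
-- Hypothetical registry: friendly name -> port, plus lifecycle notes
CREATE TABLE cluster_registry (
    friendly_name text PRIMARY KEY,   -- e.g. 'db2', 'hotfix-1234'
    port          integer NOT NULL UNIQUE,
    purpose       text,               -- 'QA team', 'dev sandbox', ...
    created_at    timestamptz NOT NULL DEFAULT now(),
    retired_at    timestamptz         -- NULL while the cluster is active
);

-- Look up the port behind a friendly name
SELECT port
FROM cluster_registry
WHERE friendly_name = 'db2' AND retired_at IS NULL;
```

Keeping `retired_at` instead of deleting rows preserves the history of what a reused port meant last week.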

In AWS the option exists to just have one Linux server instance per cluster. That's likely even easier to maintain in the long term, but AWS charges by the hour for each server, so it seems like a nightmare to keep turning these server instances off and on… or you accept a much higher cost running 30 servers instead of 1.

On a local virtual environment I would just have 1 Linux host, 1 cluster, 1 DB each, all on the default port, and I can map DNS to each server's IP no problem.

Did any of this information help clarify? Anything else I can add to help?

Best Answer

The PostgreSQL protocol cannot support hostname-based virtual servers, because the client doesn't send the server's hostname in the initial connection packet the way HTTP does in the Host header.

Your only option is to add additional Elastic IP addresses. You will need to be able to tell which traffic came from which IP. Because EC2 instances don't have their public IPs directly associated with their network interfaces you can't do this the normal Linux way of just having multiple IP addresses for an interface. Instead you must create multiple interfaces and bind one IP to each interface.

You can then have PostgreSQL instances listen on port 5432 on each interface by setting listen_addresses to the interface's private internal IP address, not the public address.

You must also change the unix_socket_directories setting of each server so they put their listening sockets in a different directory, or set it to empty to disable unix socket connections. Otherwise they'll all try to grab the same socket filename in /tmp or /var/run/pgsql (depending on how Pg was compiled) and all but one will fail to start with an error.
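Put together, each cluster's postgresql.conf would carry something like this (the private IP and socket directory shown are placeholders for whatever you actually attach and create):

```ini
# postgresql.conf for cluster 2 -- sketch, addresses are hypothetical
listen_addresses = '10.0.0.12'        # this cluster's private interface IP
port = 5432                           # same port on every cluster

# Keep each cluster's unix socket out of the shared default directory...
unix_socket_directories = '/var/run/postgresql/cluster2'
# ...or disable unix sockets entirely:
# unix_socket_directories = ''
```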

Personally I would be using Amazon VPC for this task. It gives you much finer control over interface and address management than EC2 Classic does. It's also quite easy to script the addition/removal of interfaces via the EC2 APIs.
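As a sketch of that scripting (the subnet, security group, and instance IDs below are placeholders; check the flags against the current AWS CLI docs):

```shell
# Create a second ENI in the VPC subnet and attach it to the instance
ENI_ID=$(aws ec2 create-network-interface \
    --subnet-id subnet-0123456789abcdef0 \
    --groups sg-0123456789abcdef0 \
    --query 'NetworkInterface.NetworkInterfaceId' --output text)
aws ec2 attach-network-interface \
    --network-interface-id "$ENI_ID" \
    --instance-id i-0123456789abcdef0 \
    --device-index 1

# Allocate an Elastic IP and bind it to the new interface
ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
    --query 'AllocationId' --output text)
aws ec2 associate-address \
    --allocation-id "$ALLOC_ID" \
    --network-interface-id "$ENI_ID"
```

Tearing a cluster down is the reverse: disassociate and release the address, then detach and delete the interface.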

A handy advantage of using IPs for this is that you can discard the IP when you discard the cluster, and add a new different IP when you bring up a new cluster. So you shouldn't have to deal with the port-reuse confusion problem you outline in your question.

It's also worth noting that you could use a proxy like PgBouncer to listen on the different IPs and forward connections to Pg running on a single node on different ports, if you needed to do that for some reason.
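For example, one PgBouncer instance per friendly name could listen on that name's private IP and forward everything to the matching cluster port (the IP and file paths here are hypothetical):

```ini
; pgbouncer.ini for "db2" -- forwards this IP's traffic to the cluster on 5433
[databases]
* = host=127.0.0.1 port=5433

[pgbouncer]
listen_addr = 10.0.0.12               ; private IP mapped to db2.mydb
listen_port = 5432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
```

Since PgBouncer routes by database name rather than by the address a connection arrived on, you need a separate PgBouncer process per IP for this layout.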