To be truly highly available, you will need some logic built into the application to handle this. You can do this sort of thing with pg_pool, but what if pg_pool breaks?
I think there are many ways you could approach this, but the way I would do it is:
Master DB (let's call it A) and two slaves (let's call them B and C). B pulls changes from A, and C pulls changes from B, in a cascading replication setup. This means that if A fails and is a goner, you promote B; there is no need to change C, as it is already a slave of B. When A comes back, you make it a slave of C, and so on.
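As a rough illustration, assuming streaming replication on PostgreSQL 9.2 or later (the version where cascading standbys arrived), the chain could be wired up like this; the hostnames and the replicator role are placeholders:

    # recovery.conf on B (standby of A); hostnames are placeholders
    standby_mode = 'on'
    primary_conninfo = 'host=host-a port=5432 user=replicator'

    # recovery.conf on C (standby of B), completing the cascade
    standby_mode = 'on'
    primary_conninfo = 'host=host-b port=5432 user=replicator'

Each upstream server would also need wal_level = hot_standby, a nonzero max_wal_senders, and a replication entry in pg_hba.conf, all elided here.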
You will need to make changes to your application so that it has some intelligence about the state of the database. This is typically done with a layer that manages the connections and runs a test query (like "SELECT current_timestamp"); if the query passes, all is fine, and the layer hands the connection to your application to use. Each application will need to know a list of servers and the order in which to use them, so the one that writes will try A first, then B. In the event the primary fails, it tries the secondary (and potentially the third, and so on) until it gets one where the test query works.
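A minimal sketch of such a connection layer in Python with psycopg2; the hostnames and credentials are placeholders:

    import psycopg2

    # Candidate servers in priority order; a writer lists the master first.
    SERVERS = ["host-a", "host-b", "host-c"]

    def get_connection(servers=SERVERS):
        """Return a connection to the first server that passes the test query."""
        for host in servers:
            try:
                conn = psycopg2.connect(host=host, dbname="app", user="app",
                                        connect_timeout=3)
                cur = conn.cursor()
                cur.execute("SELECT current_timestamp")  # the health check
                cur.fetchone()
                cur.close()
                return conn
            except psycopg2.Error:
                continue  # unreachable or unhealthy; try the next server
        raise RuntimeError("no database server passed the test query")

A writer could additionally run SELECT pg_is_in_recovery() and skip any server that returns true, so the layer never hands out a standby for writes.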
With this in place, if C fails, it's only a read-only node: the applications will simply fail over to B or A, and will start using C again if you get it back. If B fails, it's the same deal, though you might want to do something with C, repointing it at A. If A fails, the system will go read-only (but will still work for all read queries). There will then need to be some human intervention to say "yep, A is gone" and promote B (which will only take seconds). When B is promoted, the writers will go through the connection layer: it will test A and fail, then test B and pass, so the application will get a connection to B, which is now a master, and writes will be allowed.
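The promotion itself is a single command on B; the data directory path below is a placeholder:

    # Run on B once a human has confirmed A really is gone
    pg_ctl promote -D /var/lib/postgresql/data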
I appreciate that this is still not a full HA solution (there is still a manual step, but it's a very simple and quick one), but I think we are going to have to wait for PostgreSQL 9.5 for that.
As I say, I don't think there is one dead-cert right answer to this, but this is how I would have done it.
Hope it helps
You need to work your way up the networking stack to determine where the issue is.
Can you ping the destination from the source?
Can you telnet to the Postgres port (5432 by default) from the source?
Can you connect to the Postgres service with a management tool (pgAdmin or psql) from the source?
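For example, working up the stack from the source host (the hostname, user, and database names are placeholders):

    ping db.example.com
    telnet db.example.com 5432
    psql -h db.example.com -p 5432 -U appuser -d appdb -c "SELECT 1"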
This is basic TCP/IP, rather than PostgreSQL-specific.
If the only things that will talk directly to the database server are on the same host as the database server, you only need

    listen_addresses = '127.0.0.1'

i.e. listen only for local connections. For example, if you have pgbouncer proxying connections and accepting connections on a different port, or if all user interaction with the DB happens via an application server and webapp.

The address 127.0.0.1 is by convention reserved for the local host; it's for talking to yourself. There is no practical reason to allow (or block) connections to 127.0.0.1 from outside hosts, because no router will ever send packets to your host with that destination address, and your machine would never respond to ARP requests for that address. Most operating systems would ignore packets destined for 127.0.0.1 coming from external network interfaces as obviously bogus.
So you want to open connections from other hosts. 0.0.0.0/0 on port 5432 is "let anything connect to port 5432". That's an option if you don't want to be more restrictive about who can talk to the server.
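In postgresql.conf, the wide-open variant is a one-line change; this is a sketch, and you could list specific interface addresses instead of the wildcard:

    # postgresql.conf - listen on all network interfaces
    listen_addresses = '*'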
Note that you may have to adjust both firewall rules and PostgreSQL's listen_addresses directive. Also possibly pg_hba.conf.

listen_addresses controls which local network interfaces PostgreSQL will accept connection attempts on. pg_hba.conf controls which remote hosts PostgreSQL will accept connections from, assuming it's even listening. Firewall rules will stop those connection attempts from ever reaching PostgreSQL by filtering based on source address, destination port, and other criteria.
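To make that concrete, here is a sketch of a pg_hba.conf rule; the subnet is a placeholder, and your databases, users, and auth method will differ:

    # TYPE  DATABASE  USER  ADDRESS          METHOD
    host    all       all   192.168.1.0/24   md5

And a matching firewall rule, assuming iptables:

    # Accept TCP connections to port 5432 from the same subnet
    iptables -A INPUT -p tcp -s 192.168.1.0/24 --dport 5432 -j ACCEPT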