As of version 1.3.1, Barman supports backups from a standby replica (concurrent_backup). The Barman configuration, e.g. /etc/barman.d/standby.conf, looks like this:
[standby]
description = "Replica of main PostgreSQL DB"
ssh_command = ssh postgres@db02
conninfo = host=db02 user=postgres
backup_options = concurrent_backup
streaming_conninfo = host=db02 user=postgres
streaming_archiver = on
If your master is running PostgreSQL <= 9.5, you have to install the pgespresso extension (binary packages are available, e.g. for Debian from the PGDG APT repositories). PostgreSQL 9.6 introduced native support for concurrent backups over the streaming API, so no special extension is needed there.
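For the pre-9.6 case, the setup is roughly as follows (a sketch; the exact PGDG package name depends on your PostgreSQL version, 9.4 is used here as an example):

```shell
# On the PostgreSQL server (Debian/Ubuntu with the PGDG APT repository enabled).
# Adjust the version in the package name to match your cluster.
apt-get install postgresql-9.4-pgespresso

# Then enable the extension in the database Barman connects to:
psql -U postgres -c "CREATE EXTENSION pgespresso;"
```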
On the standby server, make sure to configure archive_command:
wal_level = hot_standby
archive_mode = on
archive_command = 'rsync -a %p barman@backup:/var/lib/barman/standby/incoming/%f'
The incoming directory should match the one reported by:
barman:~$ barman diagnose | grep incoming_wals_directory
Also on the standby server, update pg_hba.conf (where 10.0.0.3 is the IP address of the Barman server):
host all postgres 10.0.0.3/32 trust
And enable WAL files streaming:
barman:~$ barman receive-wal standby
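Note that receive-wal runs in the foreground. In practice you keep it alive with barman cron, which (in recent Barman versions) starts the receive-wal process automatically for servers with streaming_archiver = on. A typical entry in the barman user's crontab:

```shell
# Run Barman's maintenance tasks every minute; with streaming_archiver = on
# this (re)starts the receive-wal process for the standby server as needed.
* * * * * /usr/bin/barman cron
```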
You can check your configuration using:
barman:~$ barman check standby
Server standby:
PostgreSQL: OK
wal_level: OK
directories: OK
retention policy settings: OK
backup maximum age: OK (no last_backup_maximum_age provided)
compression settings: OK
failed backups: OK (there are 0 failed backups)
minimum redundancy requirements: OK (have 1 backups, expected at least 0)
ssh: OK (PostgreSQL server)
pgespresso extension: OK
archive_mode: OK
archive_command: OK
continuous archiving: OK
archiver errors: OK
Then you should be ready to run a full backup:
barman:~$ barman backup standby
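To take backups regularly, you can schedule the same command from the barman user's crontab (the schedule below is just an example):

```shell
# Weekly full backup of the "standby" server, Sunday at 03:00.
0 3 * * 0 /usr/bin/barman backup standby
```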
Hire a professional.
If this is important production data that needs to be recovered quickly, your best strategy is to get an expert in as soon as possible to get the work done.
The work you will be focusing on is creating processes and procedures to make sure this doesn't happen in the future, and convincing management that, with those processes and procedures in place, the likelihood of it happening again is as low as possible.
Best Answer
No, you can only run pg_dump against a running PostgreSQL server. You can avoid the need to recover the database whenever you want a pg_dump by setting up a streaming-replication hot standby server with max_standby_streaming_delay = -1 and running your pg_dump against that.
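A minimal sketch of that setup (host name, database name, and paths are examples, not from the original answer): on the standby, let queries run indefinitely instead of being cancelled by replication conflicts, then point pg_dump at the standby rather than the master:

```shell
# postgresql.conf on the hot standby (example host: db02):
#   hot_standby = on
#   max_standby_streaming_delay = -1   # never cancel standby queries;
#                                      # WAL replay may pause while the dump runs

# Run the dump against the standby instead of the master:
pg_dump -h db02 -U postgres -Fc -f /backup/mydb.dump mydb
```

The trade-off is that with -1, a long-running dump can hold up WAL replay on the standby for its entire duration.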