PostgreSQL: size of slave is bigger than size of master

database-size, postgresql, replication

I have PostgreSQL running in master-slave streaming replication. The master's data is spread across four partitions (tablespaces).
df -h on the master shows:

Filesystem      Size  Used Avail Use% Mounted on
***
/dev/md121      880G  490G  346G  59% /mydb/1
/dev/md122      880G  613G  223G  74% /mydb/2
/dev/md123      880G  322G  514G  39% /mydb/3
/dev/md124      880G  506G  330G  61% /mydb/4

but on the slave, the /mydb/4 partition uses more disk space:

Filesystem      Size  Used Avail Use% Mounted on
***
/dev/sdb        880G  613G  223G  74% /mydb/2
/dev/sda        880G  448G  388G  54% /mydb/1
/dev/sdc        880G  322G  513G  39% /mydb/3
/dev/sdd        880G  773G   63G  93% /mydb/4

And it keeps growing. WAL files are located in /mydb/1. Where did I go wrong?
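To rule out the database itself, it helps to compare what df reports against what actually sits in each tablespace directory. A minimal check, assuming the tablespaces live directly under the mount points shown above:

# On both master and slave: on-disk size of each tablespace directory
sudo du -sh /mydb/1 /mydb/2 /mydb/3 /mydb/4

# What PostgreSQL itself thinks each tablespace occupies
psql -c "SELECT spcname, pg_size_pretty(pg_tablespace_size(oid)) FROM pg_tablespace;"

If du and pg_tablespace_size() roughly agree with each other but df reports much more, the extra space is being held outside the database, for example by deleted-but-still-open files.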

Config of the slave

wal_compression = on
autovacuum_naptime = 2s
autovacuum_analyze_scale_factor = 0
autovacuum_vacuum_scale_factor = 0
max_wal_senders = 5
autovacuum_analyze_threshold = 1000
checkpoint_timeout = 40min
temp_buffers = 3000MB
autovacuum_vacuum_threshold = 1000
autovacuum_vacuum_cost_delay = 100ms
wal_keep_segments = 1000
wal_level = hot_standby
autovacuum_vacuum_cost_limit = 5000
autovacuum_max_workers = 6
listen_addresses = '192.168.1.4'
max_wal_size = 100GB
hot_standby = on

recovery.conf on the slave

standby_mode = 'on'
primary_conninfo = 'user=replication password=mysecretpassword host=master.mydomain.local port=5432 sslmode=prefer sslcompression=1 krbsrvname=postgres'
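For completeness, streaming status can be verified from the master side. A sketch using the pg_stat_replication view (the pre-10 column names, matching the recovery.conf-era setup shown here):

psql -c "SELECT client_addr, state, sent_location, replay_location FROM pg_stat_replication;"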

Config of the master

wal_compression = on
autovacuum_naptime = 2s
autovacuum_analyze_scale_factor = 0
autovacuum_vacuum_scale_factor = 0
max_wal_senders = 5
autovacuum_analyze_threshold = 1000
checkpoint_timeout = 40min
temp_buffers = 1GB
autovacuum_vacuum_threshold = 1000
autovacuum_vacuum_cost_delay = 100ms
wal_keep_segments = 6000
wal_level = hot_standby
autovacuum_vacuum_cost_limit = 5000
autovacuum_max_workers = 6
listen_addresses = '192.168.1.5'
cpu_index_tuple_cost = '0.0005'
wal_buffers = 16MB
checkpoint_completion_target = '0.9'
random_page_cost = 2
maintenance_work_mem = 32GB
max_wal_size = 60GB
synchronous_commit = false
work_mem = 2GB
cpu_tuple_cost = '0.001'
default_statistics_target = 500
effective_cache_size = 96GB
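As a sanity check on the WAL settings above: with the default 16 MB segment size, wal_keep_segments pins roughly segments × 16 MB of WAL on disk, all of it under /mydb/1 here. A back-of-the-envelope calculation:

# Approximate WAL retained by wal_keep_segments (default 16 MB segments assumed)
echo "slave:  $(( 1000 * 16 )) MB"   # ~16 GB
echo "master: $(( 6000 * 16 )) MB"   # ~94 GB

So even at its ceiling, WAL retention cannot explain growth on /mydb/4.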

Best Answer

SOLVED. It was not a DB problem: du showed the correct size of the database. The problem was in my backup system, which held open descriptors of deleted files. Restarting the backup service solved the problem.

lsof | grep '(deleted)' shows the deleted files; the owning process ID is in the second column.
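A fuller check along the same lines (the mount point is taken from the df output above):

# Files that are deleted but still held open on the affected filesystem;
# +L1 lists open files whose link count is below one, i.e. unlinked
sudo lsof +L1 /mydb/4

# df still counts the space those files occupy, du does not, so a large
# gap between the two on one mount is the telltale sign
df -h /mydb/4
sudo du -sh /mydb/4

Restarting the process that holds the descriptors (here, the backup service) releases the space.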