When you update a row in PostgreSQL, it generally makes a copy of the entire row (not just the columns that changed) and marks the old row as deleted. The new copy must be WAL-logged in its entirety. The page holding the old row will, on average, also be WAL-logged in its entirety if you have full_page_writes turned on and your checkpoints are close together, because each page is logged in full the first time it is dirtied after a checkpoint.
Almost all of the updated rows will probably need new entries in every index on the table as well. That is because the new version of a row usually won't fit on the same page as the old version, so the indexes have to be updated to point at the new location. (A HOT update can avoid this, but only when the new version fits on the same page and no indexed column changed.)
So you are logging the entire table twice (once for the old rows, once for the new ones) and all of its indexes as well, and WAL records carry quite a bit of overhead on top of that. If you have full_page_writes turned on and checkpoint frequently, that makes it even worse.
So what are your options to reduce the volume?
1) If many of your updates are degenerate (they set a column to the value it already has), you can suppress those updates with an additional WHERE clause:
WITH table2_only_names AS (
    SELECT id, name FROM table2
)
UPDATE table1
SET table2_name = table2_only_names.name
FROM table2_only_names
WHERE table1.table2_id = table2_only_names.id
  AND table2_name IS DISTINCT FROM table2_only_names.name;
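IS DISTINCT FROM is used here rather than <> because of how NULLs compare: a change from NULL to a real name (or back) should still count as a real update, while NULL-to-NULL should not. For example:

SELECT NULL <> 'x';                 -- NULL, so the row would be filtered out
SELECT NULL IS DISTINCT FROM 'x';   -- true, so the row is still updated
SELECT NULL IS DISTINCT FROM NULL;  -- false, so the no-op update is suppressed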
2) Most WAL files are extremely compressible. You can include a compression command in your archive_command, something like:
archive_command = 'set -C -o pipefail; xz -2 -c %p > /backup/wal/%f.xz'
Of course you will have to make your restore_command do the reverse.
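A matching restore_command, assuming the archive layout used above, might look something like this (xz -dc decompresses to stdout):

restore_command = 'xz -dc /backup/wal/%f.xz > %p'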
3) Since you are using 9.5, you can try turning wal_compression on.
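wal_compression compresses the full-page images written to WAL, so it mostly helps in combination with full_page_writes. In postgresql.conf:

wal_compression = on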
4) You could try turning off full_page_writes, although on most storage hardware this does put your data at risk of corruption in the case of a crash. Alternatively, if checkpoints occur frequently during this operation, you could make them occur much less often, which lessens the impact of having full_page_writes turned on.
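Spacing checkpoints out means raising the time and size limits that trigger them; the values here are illustrative, not recommendations:

checkpoint_timeout = 30min
max_wal_size = 16GB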
You can safely replicate the whole instance using DMS, then remove the unnecessary databases on both resulting clusters. After this you will most likely have excess storage on the new instance; I think (though I have never tried it) you can just do another such migration to a fitting (smaller) instance type.
Best Answer
This is an AWS DMS issue.
DMS has recently added a WAL heartbeat feature [1] (it runs dummy queries) for replication from a PostgreSQL source, so that idle logical replication slots do not hold on to old WAL segments, which can otherwise fill up storage on the source. The heartbeat keeps restart_lsn moving forward and prevents storage-full scenarios.
Under extra connection attribute, please add this:
heartbeatenable=Y;heartbeatFrequency=1
HeartbeatEnable – set to true (default is false)
HeartbeatSchema – schema for heartbeat artifacts (default is public)
HeartbeatFrequency – heartbeat frequency in minutes (default is 5; minimum is 1)
Stopping the task alone will not clear the replication slot, so storage usage will keep increasing while the task is in the stopped state; you need to delete the task to clear the slot.
To clean up a leftover replication slot manually, connect to the source database and use the following commands:
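A sketch using the standard PostgreSQL statements ('slot_name' is a placeholder; list the slots first to find the one DMS created, and note that an active slot cannot be dropped):

SELECT slot_name, active FROM pg_replication_slots;
SELECT pg_drop_replication_slot('slot_name');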
References: [1] PostgreSQL Source WAL Heartbeat https://docs.aws.amazon.com/dms/latest/userguide/CHAP_ReleaseNotes.html#CHAP_ReleaseNotes.DMS230