When you update a row in PostgreSQL, it generally makes a copy of the entire row (not just the columns that changed) and marks the old version as deleted. The new copy needs to be WAL-logged in its entirety. The old row's page is probably also going to be WAL-logged in its entirety, on average, if you have full_page_writes turned on and your checkpoints are close together (the first change to a page after a checkpoint logs the whole page).
Almost every updated row will probably also need new entries in all of the table's indexes. That is because the new version of the row usually won't fit on the same page as the old version, so the indexes have to know where to find the new version.
So you are logging the entire table twice (once for the old rows, once for the new ones) and all of its indexes as well, and WAL records carry quite a bit of per-record overhead. If you have full_page_writes turned on and checkpoint frequently, that makes it even worse.
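You can see this effect yourself by comparing WAL positions before and after a statement (a rough sketch using the 9.5 function names, which were renamed to pg_current_wal_lsn / pg_wal_lsn_diff in PostgreSQL 10; the table t is hypothetical):

SELECT pg_current_xlog_location();   -- note the position, e.g. 0/16B3748
UPDATE t SET val = val;              -- rewrites every row version
SELECT pg_current_xlog_location();   -- note the new position
-- bytes of WAL generated in between (substitute the two positions you saw):
SELECT pg_xlog_location_diff('new_position', 'old_position');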
So what are your options to reduce the volume?
1) If many of your updates are degenerate (updated to the value they already have) you can suppress those updates with an additional WHERE clause:
WITH table2_only_names AS (
    SELECT id, name FROM table2
)
UPDATE table1
SET table2_name = table2_only_names.name
FROM table2_only_names
WHERE table1.table2_id = table2_only_names.id
  AND table2_name IS DISTINCT FROM table2_only_names.name;
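IS DISTINCT FROM (rather than <>) is what makes this safe when either side can be NULL: it treats NULL as an ordinary comparable value. With <>, a comparison involving NULL yields NULL, so the WHERE clause would wrongly skip rows whose old value is NULL:

SELECT NULL <> 'x';                -- NULL: the row would wrongly be skipped
SELECT NULL IS DISTINCT FROM 'x';  -- true: the row is updated as intended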
2) Most WAL files are extremely compressible. You can include a compression command in your archive_command, something like
archive_command = 'set -C -o pipefail; xz -2 -c %p > /backup/wal/%f.xz'
Of course you will have to make your restore_command do the reverse.
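For the archive_command above, the matching restore_command might look something like this (a sketch; the /backup/wal path is carried over from the example and is an assumption about your layout):

restore_command = 'xz -d -c /backup/wal/%f.xz > %p'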
3) Since you are using 9.5, you can try turning wal_compression on.
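wal_compression compresses the full-page images written to WAL, so it mostly targets exactly the full_page_writes traffic described above, at the cost of some extra CPU. It is just a configuration switch:

wal_compression = on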
4) You could try turning off full_page_writes, although this does put your data at risk of corruption in the case of a crash, on most storage hardware. Or, if checkpoints occur frequently during this operation, you could make them occur much less frequently, which will lessen the impact of having full_page_writes turned on.
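Spacing checkpoints out is done with settings along these lines (the values are illustrative, not recommendations; max_wal_size replaced checkpoint_segments in 9.5):

max_wal_size = '10GB'
checkpoint_timeout = '30min'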
First of all, your replication is asynchronous, which means there can always be slight differences between the master and the standby. If you change the configuration as below, your master won't acknowledge a commit until the standby has confirmed the replicated data:
synchronous_standby_names = '*'
In your case, synchronous replication can be problematic because your standby is quite distant from the master, so commits get slower.
Even with synchronous commit, running the same query twice in different transactions can return different results because of concurrent data changes.
I recommend reading the blog post Evolution of Fault Tolerance in PostgreSQL: Synchronous Commit.
Best Answer
The database generates WAL for the entire instance. It is the logical sender's job to sift through that stream and construct just what should be sent to each subscriber. I don't think there is a way to make the logical sender work ahead. You should look at why it is falling behind, not at why there is a lot of WAL.
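To see where a sender stands relative to the current WAL position, you can watch pg_stat_replication (a sketch using the 9.x names; from PostgreSQL 10 the columns are sent_lsn / replay_lsn and the function is pg_wal_lsn_diff):

SELECT application_name,
       pg_xlog_location_diff(pg_current_xlog_location(), sent_location) AS send_lag_bytes,
       pg_xlog_location_diff(sent_location, replay_location)            AS replay_lag_bytes
FROM pg_stat_replication;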