ALTER TABLE requires an ACCESS EXCLUSIVE lock. From the docs:
ALTER TABLE changes the definition of an existing table. There are
several subforms described below. Note that the lock level required
may differ for each subform. An ACCESS EXCLUSIVE lock is held unless
explicitly noted. When multiple subcommands are listed, the lock
held will be the strictest one required from any subcommand.
Adding a column does not normally take several minutes when there is no default value or when the table is small, and in your case both apply.
But in order to be granted that lock, no other transaction may hold a lock on the same table. There are several things to consider:
- A simple SELECT on a table takes a lock that blocks others from doing an ALTER TABLE.
- Locks are released only at the end of their transaction, not at the end of the instruction that needed them.
- As soon as an ALTER TABLE ... ADD COLUMN is waiting for a lock, other transactions that want to SELECT from the table are blocked too.
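To see which sessions currently hold locks on the table (and would therefore block the ALTER TABLE), you can join the catalogs directly; a sketch, assuming the table is named table1:

```sql
-- Which sessions hold (or are waiting for) locks on table1?
-- (pg_blocking_pids() only exists from 9.6 on, hence the manual join.)
SELECT l.pid, l.mode, l.granted, a.state, a.xact_start, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE l.relation = 'table1'::regclass;
```

Rows with granted = false are the waiters; long xact_start values on granted locks point at the long-running transactions described above.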
The consequences are:
- If there are long-running transactions that read the table, they will block the migration's ALTER TABLE until they finish.
- If the migration itself is part of a larger transaction, it will block other processes that want to access the table until it commits.
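One common mitigation (a sketch, not part of the original migration; the column name is assumed for illustration) is to make the ALTER TABLE fail fast instead of queueing behind long readers and blocking every new SELECT:

```sql
-- Give up after 5 seconds instead of queueing behind long transactions;
-- if it times out, retry the migration at a quieter moment.
SET lock_timeout = '5s';
ALTER TABLE table1 ADD COLUMN table2_name text;  -- hypothetical column
```

lock_timeout is available from PostgreSQL 9.3 on, so it works on the 9.5 installation mentioned below.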
When you update a row in PostgreSQL, it generally makes a copy of the entire row (not just the column that was updated) and marks the old row as deleted. The new copy is going to need to get WAL logged in its entirety. The old row is probably also going to be WAL logged in its entirety, on average, if you have full_page_writes turned on and you are checkpointing too closely together.
Almost all of the updated rows are probably going to need to update all of the indexes for it, as well. That is because the new version of the row won't fit on the same page as the old version, so the indexes have to know where to find the new version.
So you are logging the entire table twice (once for the old rows, once for the new ones) and all of its indexes as well. WAL records also carry quite a bit of per-record overhead. And if you have full_page_writes turned on and checkpoint frequently, that makes it even worse.
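You can measure the WAL volume a statement generates by comparing the WAL insert position before and after it; a sketch using the 9.5 function names (in v10+ these became pg_current_wal_lsn() and pg_wal_lsn_diff(); the LSN shown is a placeholder):

```sql
SELECT pg_current_xlog_location();   -- note the returned LSN, e.g. '0/3000060'
-- ... run the big UPDATE here ...
SELECT pg_xlog_location_diff(pg_current_xlog_location(), '0/3000060');
-- returns the number of WAL bytes generated in between
```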
So what are your options to reduce the volume?
1) If many of your updates are degenerate (setting a column to the value it already has), you can suppress those updates with an extra condition in the WHERE clause:
WITH table2_only_names AS (
    SELECT id, name FROM table2
)
UPDATE table1
SET table2_name = table2_only_names.name
FROM table2_only_names
WHERE table1.table2_id = table2_only_names.id
  AND table2_name IS DISTINCT FROM table2_only_names.name;
2) Most WAL files are extremely compressible. You can include a compression command in your archive_command, something like
archive_command = 'set -C -o pipefail; xz -2 -c %p > /backup/wal/%f.xz'
Of course you will have to make your restore_command do the reverse.
3) Since you are using 9.5, you can try turning wal_compression on.
4) You could try turning off full_page_writes, although on most storage hardware this puts your data at risk of corruption in the case of a crash. Or, if checkpoints occur frequently during this operation, you could make them happen much less often, which lessens the impact of having full_page_writes turned on.
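Options 3 and 4 come down to a few postgresql.conf settings; a sketch for 9.5 (the values are illustrative, tune them for your hardware and recovery-time requirements):

```
# Option 3: compress full-page images written to WAL (available from 9.5)
wal_compression = on

# Option 4: space checkpoints out so fewer full-page writes are triggered
checkpoint_timeout = 30min
max_wal_size = 8GB

# Reduces WAL volume further, but risks torn-page corruption after a crash:
# full_page_writes = off
```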
Best Answer
PostgreSQL tables are stored in 8kB blocks. A new row is added to the first block found that has enough free space; only if no block has enough free space is the table extended.
Keep inserting more rows, and eventually the table will grow.
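A quick way to watch this behaviour, assuming a throwaway table t (the row counts are arbitrary):

```sql
CREATE TABLE t (x int);
INSERT INTO t SELECT generate_series(1, 1000);
SELECT pg_relation_size('t');    -- always a multiple of the 8kB block size
DELETE FROM t WHERE x % 2 = 0;
VACUUM t;                        -- marks the freed space inside existing blocks reusable
INSERT INTO t SELECT generate_series(1001, 1400);
SELECT pg_relation_size('t');    -- typically unchanged: the new rows fit into freed space
```

Only once the reusable space inside existing blocks is exhausted will further inserts extend the file.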