To return space to the OS, use VACUUM FULL. While you are at it, I suggest you run VACUUM FULL ANALYZE. I quote the manual:
FULL
Selects "full" vacuum, which can reclaim more space, but takes much
longer and exclusively locks the table. This method also requires
extra disk space, since it writes a new copy of the table and doesn't
release the old copy until the operation is complete. Usually this
should only be used when a significant amount of space needs to be
reclaimed from within the table.
Bold emphasis mine.
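For illustration, a minimal sketch for a single table (tbl is a placeholder name):

VACUUM (FULL, ANALYZE) tbl;  -- rewrites the table, then refreshes planner statistics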
CLUSTER achieves that, too, as a side effect.
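A minimal sketch, assuming tbl has an index tbl_pkey to order by (both names hypothetical):

CLUSTER tbl USING tbl_pkey;  -- rewrites the table in index order, discarding dead space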
Plain VACUUM does not normally achieve your goal ("one or more pages at the end of a table entirely free"). It does not reorder rows and only truncates empty pages from the physical end of the file when the opportunity arises - as your quote from the manual explains.
You can get empty pages at the end of the physical file when you INSERT a batch of rows and DELETE them before other tuples get appended. Or it can happen by coincidence if enough rows are deleted.
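A contrived sketch of the first scenario, assuming a column data (hypothetical) and no concurrent writers:

-- append a batch at the physical end of the table ...
INSERT INTO tbl (data) SELECT 'filler' FROM generate_series(1, 1000);
-- ... and remove it again before any other tuples get appended
DELETE FROM tbl WHERE data = 'filler';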
There are also special settings that might prevent VACUUM FULL from reclaiming space. See:
Prepare empty pages at the end of a table for testing
The system column ctid represents the physical position of a row. You need to understand that column:
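To get a feel for it, inspect the last few rows in physical order (tbl again a placeholder):

SELECT ctid, * FROM tbl ORDER BY ctid DESC LIMIT 5;  -- a ctid like (42,7) means: page 42, tuple index 7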
We can work with that and prepare a table by deleting all rows from the last page:
-- delete every row located on the last physical page of tbl
DELETE FROM tbl t
USING (
   SELECT (split_part(ctid::text, ',', 1) || ',0)')::tid     AS min_tid  -- lowest possible tid on that page
        , (split_part(ctid::text, ',', 1) || ',65535)')::tid AS max_tid  -- highest possible tid on that page
   FROM   tbl
   ORDER  BY ctid DESC  -- the row with the highest ctid sits on the last page
   LIMIT  1
   ) d
WHERE  t.ctid BETWEEN d.min_tid AND d.max_tid;
Now, the last page is empty. This ignores concurrent writes. Either you are the only one writing to that table, or you need to take a write lock to avoid interference.
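A plain VACUUM can then truncate the empty tail. A quick sketch to verify, using size functions that ship with Postgres:

SELECT pg_relation_size('tbl');  -- size before
VACUUM tbl;                      -- may truncate empty pages at the physical end
SELECT pg_relation_size('tbl');  -- should now be at least one block (typically 8 kB) smaller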
The query is optimized to identify qualifying rows quickly. The second number of a tid is the tuple index, stored as unsigned int2, and 65535 is the maximum for that type (2^16 - 1), so that's the safe upper bound.
SQL Fiddle (reusing a simple table from a different case).
Tools to measure row / table size:
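For instance, with the built-in admin functions (tbl being a placeholder):

SELECT pg_size_pretty(pg_relation_size('tbl'))       AS heap_size
     , pg_size_pretty(pg_total_relation_size('tbl')) AS size_incl_indexes_and_toast;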
Disk full
You need wiggle room on disk for any of these operations. There is also the community tool pg_repack as a replacement for VACUUM FULL / CLUSTER. It avoids exclusive locks but needs free space to work with as well. The manual:
Requires free disk space twice as large as the target table(s) and indexes.
As a last resort, you can run a dump/restore cycle. That removes all bloat from tables and indexes, too. Closely related question:
The answer over there is pretty radical. If your situation allows for it (no foreign keys or other references preventing row deletions, and no concurrent access to the table), you can just:
Dump the table to disk, connecting from a remote computer with plenty of disk space (-a for --data-only):
From remote shell, dump table data:
pg_dump -h <host_name> -p <port> -t mytbl -a mydb > db_mytbl.sql
In a psql session, TRUNCATE the table:
-- drop all indexes and constraints here for best performance
TRUNCATE mytbl;
From remote shell, restore to same table:
psql -h <host_name> -p <port> mydb -f db_mytbl.sql
-- recreate all indexes and constraints here
It is now free of any dead rows or bloat.
But maybe there is a simpler way?
Can you make enough space on disk by deleting (moving) unrelated files?
Can you VACUUM FULL smaller tables first, one by one, thereby freeing up enough disk space?
Can you run REINDEX TABLE or REINDEX INDEX to free disk space from bloated indexes? (See the sketch below.)
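A sketch for that last point, with hypothetical object names:

REINDEX INDEX mytbl_pkey;  -- rewrites a single index, releasing its bloat
REINDEX TABLE mytbl;       -- rewrites all indexes of the table

Note that REINDEX also needs transient disk space for the new copy, but only for one index at a time.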
Whatever you do, don't be rash. If in doubt, backup everything to a secure location first.
When a command fails like this, are the results of the command not immediately discarded?
You need to understand the difference between the logical and physical structures of the database. When you insert 350m rows into a table and the command fails, the logical changes are rolled back. But when the underlying data files have to be increased in size to accommodate the 350m rows, this physical change is not rolled back.
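A small illustration of the distinction, with hypothetical names:

SELECT pg_relation_size('mytbl');  -- note the size

BEGIN;
INSERT INTO mytbl (data) SELECT 'x' FROM generate_series(1, 100000);
ROLLBACK;  -- the logical change is undone ...

SELECT pg_relation_size('mytbl');  -- ... but the data file has still grown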
Why, when no actual changes have been made to the table, is the tablespace still full?
While your 350m rows were being added, the tablespace grew its data files to accommodate the new data. When the data was deleted, the space remained. Deletions do not shrink the data file; they merely mark the space as unused, leaving it there to be reused later. The space is only returned to the operating system if the DBA issues a VACUUM FULL command.
And finally, what is the best way to free up space again when Postgres will not allow me to execute any other commands, e.g. VACUUM?
Multiple valid strategies for fixing this problem are provided in these two older questions:
I need to run VACUUM FULL with no available disk space
VACUUM returning disk space to operating system
Best Answer
Firstly, I would suggest using pgstattuple to obtain tuple-level statistics.
For example:
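A minimal sketch, assuming the extension is installed and a table named mytbl:

CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('mytbl');
-- reports table_len, dead_tuple_count, dead_tuple_percent, free_space, free_percent, ...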
Secondly, if you are in a production environment, I would suggest using pg_repack to reclaim disk space without locking your table.
For instance:
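A sketch, assuming the pg_repack server extension and client are installed, with hypothetical names mydb / mytbl:

pg_repack --no-order -d mydb -t mytbl   # online equivalent of VACUUM FULL
pg_repack -d mydb -t mytbl              # default: online CLUSTER-style rewrite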