Since you don't have enough space to run a VACUUM or rebuild in place, you can always rebuild your PostgreSQL databases by dumping and restoring them. Restoring the databases, tables, and indexes will free up space and defragment them. Afterwards, you can set up automated maintenance to vacuum your databases on a regular basis.
1 Back up all of the databases on your PostgreSQL server
You will want to back up all of your databases to a partition that has enough space. If you are on Linux, you can use gzip to compress the backup further and save space:
su - postgres
pg_dumpall | gzip -9 > /some/partition/all.dbs.out.gz
2 Back up your configuration files
cp /path/to/postgresql/data_directory/*.conf /some/partition/
3 Stop PostgreSQL
pg_ctl -D /path/to/postgresql/data_directory stop
4 Erase the contents of the data directory
rm -Rf /path/to/postgresql/data_directory/*
5 Run initdb to reinitialize your data directory
initdb -D /path/to/postgresql/data_directory
6 Restore configuration files
cp /some/partition/*.conf /path/to/postgresql/data_directory/
7 Start PostgreSQL
pg_ctl -D /path/to/postgresql/data_directory start
8 Restore the dump of all the databases you made
gunzip /some/partition/all.dbs.out.gz
psql -f /some/partition/all.dbs.out postgres
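The automated maintenance mentioned at the start is built into PostgreSQL as autovacuum, configured in postgresql.conf. A minimal sketch follows; the values shown are illustrative starting points, not tuned recommendations:

```
# Enable the autovacuum launcher (on by default)
autovacuum = on
# How often the launcher checks for tables needing work
autovacuum_naptime = 60
# A table is vacuumed once dead rows exceed
# threshold + scale_factor * table_row_count
autovacuum_vacuum_threshold = 50
autovacuum_vacuum_scale_factor = 0.2
# Log any autovacuum run taking longer than 250 ms, for monitoring
log_autovacuum_min_duration = 250
```

Reload or restart the server after editing the file so the settings take effect.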
Although you did not state it, I assume from the documents you reference that you have done a VACUUM FULL on the database and/or the affected tables. You also didn't specify which PostgreSQL version you are using; I will assume it is 9.0 or later (VACUUM FULL behaved differently before that).
VACUUM FULL will rewrite affected tables into new files, then delete the old files. However, if any process still has the old file open, the operating system will not actually delete the file - until the last process has closed it.
If practical, restarting the database would ensure that all open files get closed.
If that is not practical, then you may be able to verify if this is your problem, and find out which process has the files open.
If using Linux (or most other Unix-like systems) you can use the 'lsof' command to get a listing of all files open in all processes. Files which are open but which have since been deleted will have '(deleted)' appended to the filename. So, you can grep the output of lsof, looking for deleted files, like this:
sudo lsof -u postgres | grep 'deleted'
If that identifies processes which still have the old files open, you can use pg_terminate_backend to terminate that process:
SELECT pg_terminate_backend(xxx);
where xxx is the PID of the process, found in the lsof output.
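If several backends are holding deleted files open, extracting each PID by hand gets tedious. A hypothetical helper like the one below (the function name is mine, not a standard tool) turns lsof output into the corresponding pg_terminate_backend calls; lsof prints the PID in its second column:

```shell
# Turn "lsof ... (deleted)" lines into pg_terminate_backend() calls.
# Assumes standard lsof output where column 2 is the PID.
pids_to_terminate_sql() {
  awk '{ printf "SELECT pg_terminate_backend(%s);\n", $2 }' | sort -u
}

# Intended use (review the generated SQL before piping it into psql):
#   sudo lsof -u postgres | grep deleted | pids_to_terminate_sql
```

Reviewing the SQL before running it matters here, since terminating the wrong backend kills a live client session.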
If using Windows, the same principle could apply, because Postgres opens files using the FILE_SHARE_DELETE flag, which allows it to delete files that are open in another process. The Sysinternals 'handle' utility is the rough equivalent of lsof, although I am not sure whether it can tell you if the files are deleted, so some additional work might be required.
It is another question as to why any such processes would be hanging on to old file handles. However in the thread you quoted in your question, Tom Lane seems to imply that it can happen.
Best Answer
Because PostgreSQL must write WAL before making any changes to tables, it needs free disk space in order to delete things and release more disk space.
If you let the disk fill up, you can't recover from within PostgreSQL. Even TRUNCATE still has to write to WAL. So you must free some space on the volume, or expand the volume. If your PostgreSQL log files are in pg_log in the data directory, you can safely remove some of those and restart Pg.

Do not delete pg_xlog or pg_clog. These are not server error logs; they are critical parts of the database: the transaction log and the commit log.
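One way to free that space without deleting anything you might still want is to gzip-compress all but the newest server log. This is a sketch under the assumptions stated in the comments (the helper name is mine); never point it at pg_xlog or pg_clog:

```shell
# Compress every *.log file in the given directory except the newest
# (likely current) one. Assumes log file names contain no spaces,
# which is the case for PostgreSQL's default log naming.
compress_old_logs() {
  # list newest first, skip the first entry, gzip the rest
  ls -1t "$1"/*.log 2>/dev/null | tail -n +2 | xargs -r gzip
}

# Example (path is illustrative; adjust to your data directory):
#   compress_old_logs /path/to/postgresql/data_directory/pg_log
```

Once the server is back up and space is recovered, you can remove the compressed copies entirely.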