Debian Disk Usage – Disk Space Not Freed Up When Deleting Files

debian, delete, disk-usage, storage, veracrypt

After deleting files on a VeraCrypt-encrypted drive that has no disk space left, no disk space is freed up.

I'm trying to sync a hard drive to another one as described here.

That target drive is full and I can't free disk space on it. I tried removing files in all sorts of ways, such as running rm /.../file, but it's not just the Dolphin file manager that shows no free disk space: df -h, for example, also shows 0 in the Avail column and 100% in Use%.

How can I actually delete the files and free up the disk space, or make the tools show the actually available space if they are just displaying it wrong? I'm using Debian with KDE.


Details on how the problem occurred and more things I tried:

The problem was that the Grsync run finished with errors because it hit "No space left on device" (which shouldn't have occurred either), and Grsync does not back up root-owned files.

I then ran Grsync's rsync command in the console, as also described above. The command is sudo rsync -r -t -p -o -g --delete -l -s /media/veracrypt1 /media/veracrypt2

It had many "skipping non-regular file" errors, but that is a separate problem (it should back up 100.0%), and it eventually also hit a "No space left on device" error even though there are many GB of disk space free on disk1. sudo lsof | grep {diskname} shows disk2 is not in use by anything, and of course there is no running rsync process. I have rebooted and dismounted and remounted the drive several times in the meantime.

The problem I have is that when I delete something on disk2 to free disk space for another sync, no additional space shows up in the Dolphin file browser, in lsblk -f, or in df (the latter still shows 0 in the Available column).

I tried deleting files by right-clicking them in Dolphin and choosing Delete. I also tried "Move to Trash", but there is only the backed-up Trash folder (I can't exclude it with Grsync) and no Trash folder at the root directory of the drive.

A similar problem occurred on my /home/ partition (on a different drive), but it was solved by a reboot. The problem described here did not go away after a reboot, after dismounting and remounting the drive, or after upgrading to Debian 12.

When I delete things, the free space is still, or immediately again, 0 bytes; nevertheless I checked for a large, quickly growing file with gdmap but couldn't find anything. Since the upgrade, sudo lsof | grep "/media/" no longer seems to show whether files on the disk are open (it gives no output even when I have a file open), though no file on it should be open anyway.

I already tried things from this question. For example, the IUse% column of df -i shows the percentage is quite low. sudo dumpe2fs /media/veracrypt2/ | grep -i reserved just says dumpe2fs: Is a directory while trying to open /media/veracrypt2/, sudo lsof +L1 | grep media shows no output, and neither does sudo lsof | grep deleted | grep media. Nothing should be writing to or reading from that drive.


Unnecessary details to skip:

I thought logging out and back in to restart the session would solve this, but it didn't; it still shows 0 bytes of free disk space. rm filename does not free up disk space either. As requested, the output of lsof | grep deleted is below; many of those lines appeared multiple times, and I left out some columns and replaced IDs:

pulseaudi ... /memfd:pulseaudio (deleted)
ksmserver ... /home/username/.cache/ksycoca5_{lang}_{id} (deleted)
plasmashe ... /home/username/.cache/ksycoca5_{lang}_{id} (deleted)
plasmashe ... /home/username/.cache/appstream/appcache-{id}.mdb (deleted)
vorta ... /usr/share/mime/mime.cache (deleted)
{texteditor} ... /home/username/.cache/ksycoca5_{lang}_{id} (deleted)
konsole ... /home/username/.cache/konsole/#{number} (deleted)

After restarting the session there is an additional .lock file open by a printer program, plus the two pulseaudi entries and the konsole ones, but nothing else. I still can't free up disk space or get lsblk -f and Dolphin to display the correct available space.

I tried lsof +L1 another time, and no more disk space shows up as free even though this is all it shows (with one Konsole window open):

COMMAND       PID     USER   FD   TYPE DEVICE SIZE/OFF NLINK    NODE NAME
konsole    549098 username   20u   REG  254,2        0     0 8257752 /home/username/.cache/konsole/#8257752 (deleted)
konsole    549098 username   21u   REG  254,2        0     0 8257764 /home/username/.cache/konsole/#8257764 (deleted)
konsole    549098 username   22u   REG  254,2        0     0 8259787 /home/username/.cache/konsole/#8259787 (deleted)
konsole    549098 username   23u   REG    0,1  3677184     0 2372415 /memfd:wayland-cursor (deleted)
veracrypt 2340184 username    9u   REG    0,1  3677184     0   69165 /memfd:wayland-cursor (deleted)
xdg-deskt 2340702 username    8u   REG    0,1  3677184     0   69892 /memfd:wayland-cursor (deleted)

I also tried sudo lsof | grep deleted | grep mountname and it does not give any output. mountname is part of the directory path since I mounted the drive there (sudo lsof | grep mountname1 works for open files on the other drive, and even sudo lsof | grep mountname shows nothing).

This could be a VeraCrypt bug; the issue is here.

Best Answer

I have had this scenario happen to me several times on different machines. I cannot guarantee you've stumbled on the same conditions I did, but I believe it's worth checking out.

You seem to be having two distinct problems, neither really connected with VeraCrypt, which is only the container for a file system.

  1. The disk filled up, and deleting some files resulted in the disk still being filled up.

  2. The disk should not have filled up.

1. disk full

Case (1) does not depend on deleted files or unaccounted space, because you remounted the disk and rebooted, which would have gotten rid of deleted-but-open files and forced an fsck if one had been necessary.
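
If you want to rule that out explicitly, you can force the check by hand. This is only a sketch under some assumptions: the volume holds ext4 (as your dumpe2fs attempt suggests), /media/veracrypt2 is the mount point, and /dev/mapper/veracrypt2 is the mapper device VeraCrypt created (check lsblk -f for the real name); the filesystem must be unmounted while the VeraCrypt volume stays mapped:

sudo umount /media/veracrypt2             # unmount the filesystem, keep the volume mapped
sudo fsck.ext4 -f /dev/mapper/veracrypt2  # force a full check; remount afterwards (or dismount and remount in VeraCrypt)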

1.1 disk full because something keeps writing to it

We are then left with two possibilities. One is that something is still writing to the disk, appending to some existing file (this usually happens to me with log files). Imagine a process has a backlog of twenty megabytes that need to be written; it cannot write them because the disk is full, and you delete fifteen megabytes of files. Your free space is immediately filled up again. To check for that, the only practical way is to run a disk accounting (saved to a file on a different device; ls -laR might do), delete some files to free space, run the accounting again, and compare the two. You should of course notice that some files have disappeared in the second listing (the ones you deleted), but also that some other files have appeared or grown in size.
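
A rough sketch of that accounting, assuming /media/veracrypt2 is the full disk and /tmp lives on a different device:

sudo ls -laR /media/veracrypt2 > /tmp/before.txt   # snapshot of every file with its size
# delete some files to free space, wait a little, then take a second snapshot
sudo ls -laR /media/veracrypt2 > /tmp/after.txt
diff /tmp/before.txt /tmp/after.txt                # files that appeared or grew are the suspects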

You can do this using lsof to account only for open files, which is much faster, but you risk missing files that are created, filled, and closed in between.
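
For the lsof route, pointing it at the mount point lists every file currently open on that filesystem (mount point again assumed):

sudo lsof /media/veracrypt2    # processes holding files open on the full disk
# re-run it a few times; an entry whose SIZE/OFF keeps growing is the likely culprit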

You can also do this using du -sk /mnt/disk/*/*/* > /tmp/before.txt to list the space used by all folders down to, say, depth 3 from the disk's mount point. This lets you quickly zero in on which folders are growing.
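
A minimal before/after comparison with the du approach, using your mount point instead of /mnt/disk:

sudo du -sk /media/veracrypt2/*/*/* > /tmp/du_before.txt
# delete some files, wait a little, then snapshot again
sudo du -sk /media/veracrypt2/*/*/* > /tmp/du_after.txt
diff /tmp/du_before.txt /tmp/du_after.txt    # folders that changed size show up here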

1.2 disk apparently full because of 5% root reservation

The second possibility is that your file manager does not properly account for root-reserved space. Most if not all Linux file systems reserve some space (typically 5%) for root operations and to keep some key operations performing well. If a root-initiated operation fills the disk, and a non-root user then deletes some files while occupancy stays above 95%, that non-root user will keep seeing the disk as completely full.

You can check total filesystem and available space using

tune2fs -l /dev/mapper/yourcryptdevicename

and you can change, say, 5 to 1 percent with

tune2fs -m 1 /dev/mapper/yourcryptdevicename

which will make the remaining 4% of free space "reappear" to non-root users.
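
To see the current reservation before touching it (the device name is an assumption; lsblk -f or df will show the real one):

sudo tune2fs -l /dev/mapper/veracrypt2 | grep -Ei 'block count|reserved block'
# "Reserved block count" divided by "Block count" is the reserved fraction, 5% by default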

2. Disk should not have filled up

This only applies if you placed, say, 400 GB of files on a 500 GB disk and it filled up. If the difference is below 5%, it is an effect of cause 1.2 (the root reservation) and can be solved that way; no further explanation is necessary.

Otherwise, note that space on disks is allocated in multiples of the basic block unit, which might be 4K. A 6K file then requires two blocks, and creating it costs the disk 8K of available space. This overhead is sometimes called slack space and is on average equal to half a block size multiplied by the number of files you have.

So when I sync my source and library tree from my laptop to a NAS, where the laptop has 2K blocks and the NAS has 4K blocks, there are 49 GB actually used by 1,797,479 files on the laptop, but if I run du -sk --apparent-size I get 44 GB. That the overhead is only 5 GB is thanks to some tricks the ext4 filesystem plays to keep small files from gobbling up lots of slack space. On the NAS I do not have those tricks and the blocks are twice as large to start with, so those same 44 GB of files take up 56 GB. Scaled up to a whole SSD's worth, I might have 880 GB of files that do not fit on a 1 TB SSD drive.
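
To estimate how much of your data goes to slack space on the target, compare allocated and apparent sizes and count the files (mount point assumed):

sudo du -sk /media/veracrypt2                    # space actually allocated, in KiB
sudo du -sk --apparent-size /media/veracrypt2    # sum of the file sizes, in KiB
sudo find /media/veracrypt2 -type f | wc -l      # file count: expect roughly half a block of slack per file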

To check whether this might be the case, run

tune2fs -l /dev/yourpartition | grep "Block size:"

There is other information in the tune2fs output that might help you work out what is happening, by comparing the output on the two disks you're syncing, and also by comparing the output on the same disk before and after deleting some files, for example:

Inode count:              365985792
Block count:              1463919616
Free blocks:              72775504
Free inodes:              176130892
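
As a worked example with those numbers, assuming the 4K block size reported by the Block size line:

# free space = Free blocks * Block size
echo $((72775504 * 4096))    # 298088464384 bytes, i.e. about 278 GiB
# note that df's Avail for ordinary users is lower than this by the reserved blocks from section 1.2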