I'm having the same problem on RHEL6.8 with an 800 megabyte /tmp/orbit-walker
directory (walker is my username). This prevented my system from booting.
I found the problem by adding init=/bin/bash to my boot line to get a prompt, then running openvt -- /bin/bash and switching to the new virtual terminal with Alt-F1. Searching around with ps, I found the hanging rm -rf /tmp/orbit-* process.
I was able to kill the hung rm and then continue the boot process with exec /sbin/init.
It is very difficult to delete the contents of a directory with a million files. Both 'find' and 'rm' insist on reading all the filenames and sorting them. They both run for hours and then abort with 8G of core saying "too many files".
Here's something that works.
(cd /tmp/orbit-walker; /bin/ls -1 -f | xargs /bin/rm)
The -f option makes /bin/ls print entries without sorting them, and -1 prints one name per line.
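The same trick can be sketched on a throwaway directory. This is a minimal demo, not the original incident: the /tmp/orbit-demo path and the 1000-file count are made up, and a grep is added to filter out the . and .. entries that ls -f (which implies -a) would otherwise feed to rm.

```shell
#!/bin/sh
# Demo: delete a directory full of files without sorting the names.
# /tmp/orbit-demo and the file count are placeholders for this sketch.
demo=/tmp/orbit-demo
rm -rf "$demo"
mkdir "$demo"

# Populate the directory (a real orbit dir would hold far more files).
i=0
while [ "$i" -lt 1000 ]; do
    : > "$demo/file$i"
    i=$((i + 1))
done

# -f lists entries unsorted, -1 prints one per line; grep drops the
# "." and ".." entries that -f includes; xargs batches names into rm.
(cd "$demo" && /bin/ls -1 -f | grep -v -e '^\.$' -e '^\.\.$' | xargs /bin/rm --)

rmdir "$demo"
```

Note that piping ls into xargs like this assumes the filenames contain no whitespace or quotes, which holds for orbit-* session files but is not safe for arbitrary directories.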
I would say it is not safe in general. On many systems, /tmp is cleaned on reboot by default. See /etc/default/rcS (TMPTIME defaults to 0):
# delete files in /tmp during boot older than x days.
# '0' means always, -1 or 'infinite' disables the feature
#TMPTIME=0
Best Answer
In general, no.
If it's filling up with junk, you may want to look at what software isn't cleaning up after itself.
You can also use find to identify files that haven't been modified or accessed in a long time and are therefore probably safe to delete.
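That find approach might look like the following sketch; the /tmp/stale-demo directory, the fake timestamp, and the 7-day cutoff are all arbitrary choices for the demo, not a recommendation.

```shell
#!/bin/sh
# Sketch: use find to spot stale files, review the list, then delete.
# /tmp/stale-demo is a placeholder directory for this demo.
dir=/tmp/stale-demo
rm -rf "$dir"
mkdir "$dir"

touch "$dir/fresh"
# Fake an old access/modification time (Jan 1 2020) on one file.
touch -a -m -t 202001010000 "$dir/old"

# -atime +7: last accessed more than 7 days ago; -print just lists them,
# so nothing is removed until the output has been reviewed.
find "$dir" -type f -atime +7 -print    # → /tmp/stale-demo/old

# After reviewing, the same expression with -delete removes the files.
find "$dir" -type f -atime +7 -delete
```

Listing first and deleting second is deliberate: it lets you confirm the expression matches only what you expect before anything is destroyed.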