`alias` without parameters outputs the definitions of currently defined aliases.
`declare -f` outputs the definitions of currently defined functions.
`export -p` outputs the definitions of currently exported variables.
All those commands output definitions ready to be reused; you can redirect their output directly to a new `~/.bashrc`.
All lists will contain a lot of elements defined elsewhere, for example in `/etc/profile` and `/etc/bash_completion`, so you will have to clean up the list manually.
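Taken together, the three commands can dump a reusable snapshot of your shell setup in one go (the output filename here is arbitrary):

```shell
# Dump current aliases, functions, and exported variables
# into one file for later review and cleanup.
{
  alias          # alias definitions, in reusable form
  declare -f     # function definitions
  export -p      # export statements for exported variables
} > my_shell_defs.sh
```

Review `my_shell_defs.sh`, delete the entries that come from system files, and keep the rest.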
MacOS is a Unix OS, and `rm` means "good-bye". The GUI allows you to move a file to the Trash (from which you can then recover it), but that's not what you did. If you have a backup (e.g. you have Time Machine running), then you are saved.
Clarification
Strictly speaking (as @ire_and_curses points out), `rm` simply deletes the directory entry for the file while leaving the disk blocks it used untouched. If you could quiesce the filesystem in which the file resided, there are advanced methods by which you can try to rediscover those blocks' contents. There are also recovery tools that can be purchased to recover the lost data. The central issue is ensuring that nothing else re-uses any of the disk blocks your file occupied.
The MacOS also has a secure remove command (`srm`) which overwrites a file before it is `unlink`ed, making it unrecoverable. I use the `unlink` term since this is the underlying system call associated with a shell's `rm` command. This sets the stage for the next part of this discussion, below.
Sidenote
[ I should hasten to add that even if you over-write a disk multiple times, there are ways to read what was written a dozen or more times before. To properly sanitize a disk for disposal really requires an acid bath, a big hammer and a shredder. ]
`unlink`ing a file decrements the file's inode link count. If this value reaches zero, the file is deleted from the filesystem directory and its disk blocks are freed for re-use. This only happens when no processes have the file open. It is often confusing to administrators to find that a filesystem is using a very large amount of space that can't be accounted for by a simple summation of disk blocks (with something like `du`). Most often the reason is that an open file has been removed, so it is no longer represented in its directory, but its disk blocks remain in use until the last process using the file terminates.
Opening a file and immediately `unlink`ing it is actually a common practice for creating secure, temporary files. Tools like `lsof` can expose these otherwise invisible files if you look for files with a link count (NLINK) of zero.
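The open-then-unlink trick can be demonstrated directly from the shell (the file name and descriptor number are arbitrary):

```shell
# Create a file, open it on fd 3, then remove its directory entry.
# The open descriptor keeps the data accessible even though the
# file no longer appears in any directory.
echo "still readable" > scratch.tmp
exec 3< scratch.tmp       # open the file for reading on fd 3
rm scratch.tmp            # link count drops to zero; entry is gone
cat <&3                   # prints: still readable
exec 3<&-                 # close fd 3; now the blocks are freed
```

Between the `rm` and the final close, `ls` will not show the file, but `lsof` would.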
In Unix and Linux (of which the MacOS is a branded Unix), `rm` follows the Unix philosophy of "do it" without fanfare if it can. That is, if you have the permissions to remove a file (i.e. your directory allows writing), then `rm` does just what you ask. You might like to create a shell alias `rm='rm -i'` that prompts you for confirmation before performing the operation. Using the `-f` switch with `rm` overrides that if necessary. An aliased `rm` is most useful when you do glob removes like `rm *.log`; that way, you have the option of skipping a file in the list.
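A minimal version of that setup (typically placed in `~/.bashrc`; the file name in the comment is illustrative):

```shell
# Make removal interactive by default: rm will prompt y/n per file.
alias rm='rm -i'
alias rm            # shows the definition: alias rm='rm -i'
# To bypass the alias for a single command when you are sure:
#   command rm -f junk.log
```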
Best Answer
The answer is "Probably yes, but it depends on the filesystem type, and timing."
None of those three examples will overwrite the physical data blocks of `old_file` or `existing_file`, except by chance.
`mv new_file old_file`. This will unlink `old_file`. If there are additional hard links to `old_file`, the blocks will remain unchanged and reachable through those remaining links. Otherwise, the blocks will generally (it depends on the filesystem type) be placed on a free list. Then, if the `mv` requires copying (as opposed to just moving directory entries), new blocks will be allocated as `mv` writes. These newly-allocated blocks may or may not be the same ones that were just freed. On filesystems like UFS, blocks are allocated, if possible, from the same cylinder group as the directory the file was created in. So there's a chance that unlinking a file from a directory and creating a file in that same directory will re-use (and overwrite) some of the same blocks that were just freed. This is why the standard advice to people who accidentally remove a file is to not write any new data to files in their directory tree (and preferably not to the entire filesystem) until someone can attempt file recovery.
`cp new_file old_file` opens the destination with the `O_TRUNC` flag (you can use `strace` to see the system calls). The `O_TRUNC` flag will cause all the data blocks to be freed, just like `mv` did above. And as above, they will generally be added to a free list, and may or may not get reused by the subsequent writes done by the `cp` command.

`vi existing_file`. If `vi` is actually `vim`, the `:x` command writes the new contents to a freshly created file and renames the old file to a backup. So it doesn't even remove the old data; the data is preserved in a backup file.
On FreeBSD, `vi` does `open("existing_file", O_WRONLY|O_CREAT|O_TRUNC, 0664)`, which will have the same semantics as `cp`, above.

You can recover some or all of the data without special programs; all you need is `grep` and `dd`, and access to the raw device.
For small text files, the single `grep` command in the answer from @Steven D in the question you linked to is the easiest way. But for larger files that may be in multiple non-contiguous blocks, I run `grep` over the raw device with byte offsets enabled (`grep -a -b`), which will give you the offset in bytes of the matching line. Follow this with a series of `dd` commands, starting at the block containing that offset. You'd also want to read some blocks before and after that block. On UFS, file blocks are usually 8KB and are usually allocated fairly contiguously, a single file's blocks being interleaved alternately with 8KB blocks from other files or free space. The tail of a file on UFS is up to 7 1KB fragments, which may or may not be contiguous.
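The steps above can be sketched as follows. The sketch runs against a scratch image file so it is safe to execute; for real recovery you would point `DEV` at the raw device (e.g. `/dev/sda1`), and the search string and block size are assumptions you must adjust:

```shell
DEV=testfs.img         # stand-in for the raw device (assumption)
BS=8192                # UFS block size, per the discussion above
STRING="unique text"   # a string you remember from the lost file

# Build a fake "device": the string buried between zero-filled blocks.
head -c $((2 * BS)) /dev/zero >  "$DEV"
printf '%s' "$STRING"         >> "$DEV"
head -c $((2 * BS)) /dev/zero >> "$DEV"

# Step 1: find the byte offset of the match on the raw device.
OFFSET=$(grep -a -b -o "$STRING" "$DEV" | head -n1 | cut -d: -f1)
echo "match at byte $OFFSET"

# Step 2: dd out the containing block, plus one before and one after.
BLOCK=$((OFFSET / BS))
dd if="$DEV" bs="$BS" skip=$((BLOCK - 1)) count=3 of=recovered.bin 2>/dev/null
grep -a -o "$STRING" recovered.bin   # the data is back
```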
Of course, on file systems that compress or encrypt data, recovery might not be this straightforward.
There are actually very few utilities in Unix that will overwrite an existing file's data blocks. One that comes to mind is `dd conv=notrunc`. Another is `shred`.
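For example, `conv=notrunc` tells `dd` not to truncate the output file first, so the write lands in the file's existing blocks (file name and contents here are arbitrary):

```shell
# Overwrite the first bytes of a file in place, without truncation.
printf 'AAAAAAAAAA' > target.txt
printf 'BB' | dd of=target.txt conv=notrunc 2>/dev/null
cat target.txt    # prints: BBAAAAAAAA
```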