By default, ext2 and its successors reserve 5% of the filesystem for use by the root user. This reduces fragmentation, and makes it less likely that the administrator or any root-owned daemons will be left with no space to work in.
These reserved blocks prevent programs not running as root from filling your disk.
Whether these considerations justify the loss of capacity depends on what the filesystem is used for.
The 5% figure was set in the 1980s, when disks were much smaller, and has simply never been changed since. Nowadays 1% is probably enough for system stability.
The reservation can be changed using the -m option of the tune2fs command:
tune2fs -m 0 /dev/sda1
This will set the reserved blocks percentage to 0% (0 blocks).
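If you'd rather keep a small safety margin (per the 1% suggestion above), the same option accepts any percentage, for example:
tune2fs -m 1 /dev/sda1
(As always, substitute your own device for /dev/sda1.)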
To get the current value (among others), use the command:
tune2fs -l <device>
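For example, to pick out just the reservation-related fields (assuming the filesystem is on /dev/sda1):
tune2fs -l /dev/sda1 | grep -i 'reserved block'
This prints the "Reserved block count" line from the superblock, which you can compare against the "Block count" field to work out the percentage.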
Conclusions
This Wikipedia page on NTFS seems to show that the first 500 bytes or so is the NTFS Boot Sector, and therein is data which describes the rest of the NTFS file system. Since EaseUS is able to figure out the file structure which is still physically on there, I think much of the NTFS data is still physically there too (i.e. data about the file structure within NTFS, not just the files themselves). Is that a correct assumption?
Yes, except the boot sector only has a pointer to the $MFT file, and that file describes everything else about the file system. Practically all NTFS metadata is stored in the form of invisible files with $ names (you can find a table in the same article).
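If you're curious, on a live Windows system you can dump some of this $MFT-related information with fsutil (run from an elevated prompt; C: here is just an example volume):
fsutil fsinfo ntfsinfo C:
Among other things, this prints the MFT's starting cluster and the cluster size.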
More Observation
I ran ddrescue count=100 ...
The description is very odd: although you say you used 'ddrescue', the rest of your command (and its results) look very much like you actually used plain 'dd'. Despite the similar names, those are actually very different tools.
The 'ddrescue' command has a different syntax from 'dd' (and works differently overall – it is designed to copy from disks which have many bad sectors). As in Attie's example, you should have used:
ddrescue /dev/sda ~/mydisk.img ~/mydisk.map --size=2G
(The map file allows you to stop and resume a copy, as well as use ddrescueview to graphically see which disk areas are bad and couldn't be copied.)
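For example, after a first pass finishes, simply re-running ddrescue with the same image and map file resumes where it left off, and you can add retry passes over the bad areas (the retry count of 3 here is just an example):
ddrescue -r3 /dev/sda ~/mydisk.img ~/mydisk.map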
When I ran ddrescue count=100 if=/dev/sda of=~/myDisk.img conv=noerror skip=2G then ddrescue told me that "skip" was an invalid argument. I think it was trying to say that "2G" was an invalid value for the skip argument, but I don't know why that's the case on a 1TB drive.
In dd (not ddrescue!), parameters such as 'count' and 'skip' always take a number of blocks, not bytes. So "2G" does not mean two gigabytes; it means 2147483648 blocks of 512 bytes each (or whatever custom size was specified using bs=).
Additionally, dd uses binary size units (where K=1024), while disk manufacturers sell their drives using decimal units (where k=1000). With the default 512-byte block size, "skip=2G" therefore means exactly 1 TiB (1099511627776 bytes), and that's more than your disk holds – a drive sold as 1 TB has 1000000000000 bytes, which is just a bit over 931 GiB.
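If the intent was to skip the first 2 GiB of the disk and then copy a chunk with plain dd, you have to do the block arithmetic yourself. A sketch (the sizes here are just examples): with bs=1M, 2 GiB is 2048 blocks, so
dd if=/dev/sda of=~/myDisk.img bs=1M skip=2048 count=100 conv=noerror,sync
copies 100 MiB starting at the 2 GiB mark. (GNU dd also accepts iflag=skip_bytes if you'd rather give the offset in bytes, and conv=noerror,sync keeps the output aligned when a read fails.)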
Final Question
Have I misunderstood something basic about NTFS or is the NTFS Boot Sector really empty? If the Boot Sector really is empty, how is a tool like EaseUS able to rebuild the file structure?
The NTFS boot sector only stores a few very basic parameters about the filesystem, such as the cluster size or the start location of the $MFT file – a recovery tool could easily guess these, as there are only a few typical cluster sizes and $MFT is almost always placed at the same location.
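You can see for yourself how little is in there by dumping the first sector (assuming the NTFS partition is /dev/sda1):
dd if=/dev/sda1 bs=512 count=1 2>/dev/null | hexdump -C | head
The "NTFS" OEM signature sits at offset 3, followed by the BPB fields (bytes per sector, sectors per cluster, the $MFT start cluster, and so on); most of the rest is boot code.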
(The majority of the boot sector's data has nothing to do with the file system itself; it really only stores boot code which is used to start the OS. Many file systems reserve the first few sectors for this purpose, due to the way the PC BIOS boot process works.
On BIOS systems only one partition – the hidden "system" partition in Vista or later, or the C: partition in XP or older – needs to have a working boot sector. On UEFI systems the boot mechanism is different and boot sectors aren't used for anything at all.)
Best Answer
While I'd love to see something like ZFS available for Windows hosts, NTFS isn't a horrible filesystem. It supports most "modern" filesystem features (extended attributes, journaling, ACLs, you name it), but it's hampered by Explorer and most other apps not supporting any of these.
One thing that will absolutely kill its performance is having "too many" entries in a directory. Once you pass a couple of thousand entries in one directory, everything slows to a crawl – the entire machine will seemingly halt, waiting for NTFS to create or remove entries.
I used to work with an app that generated HTML-based docs for .NET assemblies; it would create one file per property, method, class, namespace, etc. For larger assemblies we'd see 20+k files, all nicely dumped into a single directory. The machine would spend a couple of hours during the build blocked on NTFS.
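(If you're stuck with huge directories, one commonly suggested mitigation is to disable 8.3 short-name generation, which NTFS otherwise maintains for every new entry and which gets expensive in large directories. From an elevated prompt:
fsutil behavior set disable8dot3 1
This only affects files created afterwards, so your mileage may vary.)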
In theory, Windows supports filesystem plugins, which would make native ZFS, ext3 or whatever (even FUSE) possible. In practice, the APIs are undocumented, so you're completely on your own.
Now, since you're doing Java development, could you install a different OS on your machine, or use a VM on top of Windows?
Also, you might want to try some platform-independent filesystem benchmarks (iozone, bonnie... there are probably more modern ones I don't know off the top of my head, maybe even a few written in Java) to see if it's actually the filesystem holding you back, or if it's something else. Premature optimization and all that...
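As a rough sketch, typical invocations look something like this (the sizes and paths are just examples):
iozone -a -g 1g -f /tmp/iozone.tmp
bonnie++ -d /tmp -s 2g
Make sure the test file size comfortably exceeds your RAM, otherwise you'll mostly be benchmarking the OS cache rather than the filesystem.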