The problems you're having sound far more extensive than what I'd expect from mere loss of power (even during fairly heavy write activity) on a device. I have to wonder if you're really having more problems at the interface/driver level, or a corrupted partition table or something of that sort.
From the sounds of things you may have exacerbated the problem further with all the thrashing around you've done while trying to fix the issue.
I don't know if we can help with this case but don't give up yet.
For the future I'd suggest that you learn the following technique:
When you have trouble with a drive under Linux or UNIX you can usually use dd to make a bit-image copy of the whole device to some other location. Find a drive that's at least as large as the one in question and try a command like:
dd if=$PROBLEMATIC of=$TARGET bs=4M
... being very careful about the if (input file) and of (output file) directives. Leave that running. It's a good idea to also run:
tail -f /var/log/messages &
(or a variant as appropriate to your /etc/syslog.conf) ... either in the background or in another window. There are enhanced versions of dd which can handle retries and continue past bad blocks more robustly (sdd is a name that comes to mind). But try just using the stock GNU dd command at first.
You can make such a copy of the whole device (/dev/sdd, for example) or of just a partition (/dev/sdd1). If you get "short read" or similar errors, it suggests either that the device has physical errors preventing reads past certain cylinders or, in the case of a partition, that the partition table is mangled in some way. You can even make two different dd images ... one of each.
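As a concrete sketch of the imaging step (the device and image paths below are hypothetical placeholders; double-check if= and of= before running anything like this):

```shell
#!/bin/sh
# image_disk SRC DST -- bit-copy SRC to DST, continuing past read errors.
image_disk() {
    # conv=noerror,sync: don't abort on a failed read; pad the unreadable
    # block with zeros so later offsets in the image still line up with
    # the original device.
    dd if="$1" of="$2" bs=4M conv=noerror,sync
}

# Typical invocation (placeholders -- substitute your real device/path):
#   image_disk /dev/sdd /mnt/rescue/sdd.img
```

One caveat on the sketch: with conv=sync a single failed read pads out the whole block, so a smaller bs (at the cost of speed) preserves more data around bad areas.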
Here's the trick: do all your fsck and mount attempts, and use your various other recovery tools such as TCT (The Coroner's Toolkit), on the copied image!
This minimizes the time spent running the drive (which is possibly degrading at the hardware level as you operate it) and minimizes the impact of failed and possibly misguided recovery attempts. (In some situations you make one image, then another based on that and always operate on the tertiary image ... depends on how much the data is worth).
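One way to run checks against the copy rather than the live device is e2fsck's -n flag, which answers "no" to every repair prompt, so it only reports (the image path here is a placeholder):

```shell
#!/bin/sh
# check_image IMG -- read-only filesystem check against an image file.
check_image() {
    # -n: open read-only and answer "no" to all questions -- report only.
    # -f: force a check even if the filesystem is marked clean.
    e2fsck -n -f "$1"
}

# With root privileges the image can also be mounted read-only via
# loopback for inspection:
#   mount -o ro,loop /mnt/rescue/sdd.img /mnt/inspect
```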
I personally suggest that you run something like hexdump or strings to read through the image ... just let it scroll past for a long time and look for plain text that might be fragments of your data. I have used grep to recover useful (textual) data from otherwise completely mangled filesystems. In this case I'm not suggesting it as data recovery heroics ... but as a sanity check. If you scroll through tens of megabytes or a few gigabytes of data and don't see any recognizable text ... then you probably have a hopeless case, or you've done something very wrong (were you really careful about those if= and of= options?).
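For the sanity check itself, something along these lines works (the image path and the search phrase are placeholders; use a phrase you know occurs in your data):

```shell
#!/bin/sh
# scan_image IMG PATTERN -- count lines in a raw image matching PATTERN.
scan_image() {
    # -a treats the binary image as text; -c just counts matching lines,
    # which is enough to tell whether your data is in there at all.
    grep -a -c "$2" "$1"
}

# Typical use: page through printable strings, then hunt for a phrase
# you know should appear in your files:
#   strings /mnt/rescue/sdd.img | less
#   scan_image /mnt/rescue/sdd.img 'Dear Mr.'
```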
I don't know if any of this will help you with the current effort. But learn these tricks now and they will definitely make your next foray into data recovery much less scary. (Yes, practice on a healthy system once or twice --- go use a hex editor and add your own creative corruption here and there --- to the COPY, of course! Then try to fix it.)
Oh, and this is a really good time to review your backup and data recovery plans and procedures (or provide better advice to your customer/colleague/client/friend/whatever).
There is no "right" answer to such a limited question. If you're setting up a desktop or laptop, just do:
mkfs.ext3 /dev/sda1
The program can choose way better than an inexperienced human being. If you're going to store a huge website, ext3 might require up to 25% inodes, because of the massive possible number of small individual files.
But you're really only telling the mkfs program "up to 25% inodes." Most file system creation settings are used in specialized applications, such as RAID striping, where the geometry of the file system has to be tuned or it will be incredibly slow.
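Before departing from the defaults at all, it's worth checking whether inode exhaustion is even a realistic concern on a comparable existing filesystem; df's -i flag reports inode usage instead of block usage:

```shell
#!/bin/sh
# inode_usage PATH -- show inode (not block) usage for the filesystem
# containing PATH; the IUse% column tells you how close you are to
# running out of inodes.
inode_usage() {
    df -i "$1"
}

# Example: inode_usage /home
```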
I know people have their pet file system settings. I have yet to find anyone who can demonstrate the usefulness of departing from default values on a 40 GB desktop or laptop partition.
And ext3 is not the best file system for everything! I use three different partition format types on my lappy, because certain partitions hold many small, individual files, while others hold a much smaller number of larger files (which suits ext3 well), like the /home partition.
/usr can have 500,000 individual files, making ext3 klunky, where reiserfs flies. The other thing is ACLs. You must inform mkfs.ext3 that you'll be using ACLs, if you'll be using them. ACLs are fine tuning for permissions, usually not important on a single-user system. But if you have a group of 20 normal users, and you want differing access controls for some of them, you must use ACLs.
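For instance, on many distributions ACL support is requested at mount time; a hypothetical /etc/fstab entry might look like the following (the device, mount point, and whether the explicit option is even needed vary by distribution and kernel defaults):

```
# /etc/fstab -- hypothetical entry enabling ACLs on an ext3 /home
/dev/sda3   /home   ext3   defaults,acl   0   2
```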
I personally like xfs for a general use filesystem, although it doesn't work out of the box with SELinux. But it's the most sophisticated and efficient file system. It's used on several brands of HPCs. You can google it if you so please. ext3 uselessly wastes 8% of the partition. That's outrageous.
But ext4 is better. I'm not going to spit out a command line for you. You must determine the use of the partition, and design the file system accordingly. ext3 is reliable, but so is a tank. That doesn't mean you want to drive one around town.
I hope this helps a little.
Best Answer
In order to increase the number of inodes in an ext3 filesystem, we need to remake the filesystem using mke2fs. With the -i option we can set the bytes-per-inode ratio, which determines the number of inodes; alternatively, the -N option allows us to specify the exact number of inodes.
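As a sketch (the device name is a placeholder, and remember that mke2fs destroys existing data, so this only applies when (re)creating the filesystem):

```shell
#!/bin/sh
# make_dense_fs DEV -- create an ext3 filesystem with a denser inode table.
make_dense_fs() {
    # -j      : add a journal (i.e. ext3 rather than ext2)
    # -i 4096 : one inode per 4096 bytes of space; the usual default is
    #           larger, so this yields considerably more inodes.
    mke2fs -q -F -j -i 4096 "$1"
}

# Alternatively, ask for an exact count with -N:
#   mke2fs -j -N 1000000 /dev/sdb1
```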