I made it :-)
I basically followed Gilles's advice and decided to do it properly, i.e. do a complete cross-compilation of GLIBC. I started from crosstool-ng, and was initially disappointed when I saw that it didn't support my old kernel. I kept at it, though - manually editing the configuration file saved by crosstool-ng to make changes like these to the default arm-gnueabi build configuration:
$ ct-ng arm-unknown-linux-gnueabi
$ ct-ng menuconfig
...
$ vi .config
$ cat .config
...
CT_KERNEL_VERSION="2.6.17"
CT_KERNEL_V_2_6_17=y
CT_LIBC_VERSION="2.13"
CT_LIBC_GLIBC_V_2_13=y
CT_LIBC_GLIBC_MIN_KERNEL_VERSION="2.6.9"
CT_LIBC_GLIBC_MIN_KERNEL="2.6.9"
...
$ ct-ng +libc
After numerous tests and failed attempts, the above changes did it - I got a compiled version of GLIBC that would work with my kernel, and copied the resulting files to my Debian Lenny ARM machine:
$ cd .build/arm-unknown-linux-gnueabi/build/build-libc-final/
$ tar zcpf newlibc.tgz $(find . -type f -iname \*.so)
$ scp newlibc.tgz root@mybook:.
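For completeness, the tarball then has to be unpacked somewhere on the target before touching the chroot. A minimal sketch, assuming the tarball landed in /root as per the scp above, and using the scratch path that appears below:
# mkdir -p /var/tmp/ohMyGod
# tar xzpf /root/newlibc.tgz -C /var/tmp/ohMyGod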
I went all the way and moved past squeeze: I debootstrapped a /wheezy and then - very carefully - overwrote the GLIBC libraries of the armel-debootstrapped /wheezy with my own:
# # On the ARM machine
# cd /wheezy/lib/arm-linux-gnueabi/
# mv /var/tmp/ohMyGod/libc.so libc-2.13.so
# mv /var/tmp/ohMyGod/rt/librt.so librt-2.13.so
...
...etc, making sure I didn't miss any shared libraries.
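A simple sanity check is to enumerate the versioned glibc objects already present in the chroot and make sure each one got a cross-compiled replacement; a sketch, assuming Debian's usual libX-2.13.so naming:
# ls /wheezy/lib/arm-linux-gnueabi/*-2.13.so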
Finally, I copied over the ldd and ldconfig binaries (which are also part of GLIBC), and chrooted inside my /wheezy.
It worked.
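For reference, the chroot step itself is the standard one; a minimal sketch (bind-mounting /proc and /dev is optional, but keeps most tools happy):
# mount --bind /proc /wheezy/proc
# mount --bind /dev /wheezy/dev
# chroot /wheezy /bin/bash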
I can only assume that compiling GLIBC from a chroot-ed qemu-arm emulation inside an x86 host somehow messed things up - maybe the configure process detects some things from the running environment - whereas a cross-compilation can't be misled like that.
So naturally I moved to the next step, and used a busybox-static shell to replace the {/bin,/sbin,...} folders of my old Lenny with the Wheezy ones - and rebooted into my brand new Wheezy :-)
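Roughly, the swap amounts to something like this: copy the static busybox (from the busybox-static package) somewhere safe, start it as the shell, and do the replacement from it, so the shell keeps working while the libraries underneath it change. The directory list and paths are illustrative only, not the exact commands used:
# cp /bin/busybox /tmp/bb
# /tmp/bb sh
# for d in bin sbin lib usr; do /tmp/bb mv /$d /$d.lenny; /tmp/bb cp -a /wheezy/$d /$d; done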
I hereby claim that my WD MyBook World Edition is the only one on the planet running Debian Wheezy :-) If anyone else is interested, I can upload a tarball of the libc files someplace.
Well, first, what is an inode? In the Unix world, an inode is a kind of file entry. A filename in a directory is just a label (a link!) to an inode. An inode can be referenced in multiple locations (hardlinks!).
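You can see this directly with hardlinks; a tiny illustration (file names are arbitrary):
$ touch a
$ ln a b
$ ls -i a b
Both names will show the same inode number.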
-i bytes-per-inode (aka inode_ratio)
For some unknown reason this parameter is sometimes documented as bytes-per-inode and sometimes as inode_ratio. According to the documentation, it is the bytes/inode ratio. Most humans will have a better understanding when it is stated as either:
- 1 inode for every X bytes of storage (where X is bytes-per-inode).
- the lowest average file size you can fit.
The formula (taken from the mke2fs source code):
inode_count = (blocks_count * blocksize) / inode_ratio
Or, simplified (assuming "partition size" is roughly equivalent to blocks_count * blocksize; I haven't checked the allocation):
inode_count = (partition_size_in_bytes) / inode_ratio
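As a worked example (numbers are hypothetical): a 100 GiB partition with the stock mke2fs.conf ratio of 16384 bytes-per-inode gets roughly 100 * 1024^3 / 16384 = 6553600 inodes. You can preview what mke2fs would pick without touching the disk (-n only reports, it creates nothing; /dev/sdXN is a placeholder), and check usage on a live filesystem:
$ mke2fs -n -t ext4 -i 16384 /dev/sdXN
$ df -i /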
Note 1: Even if you provide a fixed number of inodes at FS creation time (mkfs -N ...), the value is converted into a ratio, so you can fit more inodes as you extend the size of the filesystem.
Note 2: If you tune this ratio, make sure to allocate significantly more inodes than you plan to use... you really don't want to have to reformat your filesystem.
-I inode-size
This is the number of bytes the filesystem will allocate/reserve for each inode the filesystem may have. The space is used to store the attributes of the inode (read Intro to Inodes). In Ext3, the default size was 128 bytes. In Ext4, the default size is 256 bytes (to store extra_isize and provide space for inline extended attributes). Read Linux: Why change inode size?
Note: X bytes of disk space are allocated for each allocated inode, whether it is free or used, where X = inode-size.
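To inspect the inode count and inode size of an existing filesystem (device name is a placeholder):
# tune2fs -l /dev/sdXN | grep -i inode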
Best Answer
As for the kernel code, aside from the architecture-specific code, which is a very small portion (1% to 5%?), all the kernel source code is common to all architectures.
About the binaries:
Actually, in most Linux distributions vmlinuz is a symbolic link that points to the actual gzipped kernel image, e.g. vmlinuz-3.16.0-4-amd64. I am sure the OP is talking about the latter, but I mention the former for the benefit of the reader. See https://www.ibm.com/developerworks/community/blogs/mhhaque/entry/anatomy_of_the_initrd_and_vmlinuz
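You can check both on a typical Debian install (the version string is just an example):
$ ls -l /vmlinuz
$ file /boot/vmlinuz-3.16.0-4-amd64
The first shows the symlink target; the second reports a (typically gzip-compressed) bzImage kernel.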
While it is true that ARM kernel code is indeed smaller, even if the kernels were not compressed, ARM kernels are often custom built and have far less code activated than their Intel counterparts (e.g. an Intel kernel carries support for a lot of video cards, even if just as module stubs, while a custom ARM kernel usually only has to deal with the one present in the SoC).
In addition, comparing already-compressed binary blobs may not always yield the true picture: by some strange coincidence a bigger binary might end up smaller after compression due to some compression optimisation.
So in reality, to compare the binary kernels effectively, you have to compile them with identical options and keep them uncompressed (or uncompress the resulting vmlinuzxxx file; a sketch of how to do that follows below).
A fairer match would be comparing other, non-compressed binaries, for instance /bin/ls or /usr/sbin/tcpdump, and furthermore on an architecture similar to the one we are trying to match (ARM machines are still largely 32-bit, however there are already a few 64-bit ones).
Needless to say, the same code compiled for ARM will always be (far) smaller, because ARM machine code is RISC platform code. It has a smaller set of machine code instructions too, which results in smaller code. On the other hand, Intel has a bigger set of instructions, also due to the retro-compatibility inheritance with multiple generations of microprocessors.
From http://www.decryptedtech.com/editorials/intel-vs-arm-risc-against-cisc-all-over-again
Nevertheless, the comparison is not that straightforward, as Intel chips are a complex beast nowadays: deep down, under the pseudo-CISC layer, they have RISC strategies and designs that decode and emulate the Intel opcodes as we know them.
The ARM opcodes are also bulky compared, say, to MIPS, since ARM is a cheap processor with specialised instructions dedicated to video decoding (around 30% of the processor die is dedicated to them).
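If you do want to compare the kernels themselves, the mainline kernel tree ships scripts/extract-vmlinux, which decompresses an installed image to standard output; a sketch (the image name is just an example):
$ ./scripts/extract-vmlinux /boot/vmlinuz-3.16.0-4-amd64 > vmlinux.elf
$ size vmlinux.elf
$ ls -l vmlinux.elf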
As a short exercise, compare the tcpdump binary across the four Linux architectures I have access to:
MIPS 32 bits -> 502.4K
ARM 32 bits -> 718K
Intel 32 bits (i386) -> 983K
Intel 64 bits (x86_64) -> 1.1M
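Those numbers are presumably just the on-disk sizes of the binaries, i.e. roughly what the following reports on each machine (with file confirming the architecture at the same time):
$ ls -lh $(which tcpdump)
$ file $(which tcpdump)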
So coming back to your original question: