rootfs, mounted on /, is an in-memory filesystem which typically contains only the tools needed to mount the “real” root filesystem and is emptied once that is done. The initial content of rootfs is loaded from an initramfs image stored inside or next to the kernel binary and loaded by the bootloader.
The root filesystem on flash is ubi0:root. This is a three-layer system:
- At the top, the UBIFS filesystem.
- In the middle, the UBI volume which provides wear leveling on top of raw flash.
- At the bottom, the raw flash interface (MTD).
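On a running Linux system, each of these layers can be inspected with standard tools. This is just a sketch; the commands only produce useful output on a machine that actually has MTD flash, and ubinfo is part of the mtd-utils package:

```shell
# Bottom layer: list the raw MTD partitions
cat /proc/mtd
# Middle layer: show attached UBI devices and their volumes
ubinfo -a
# Top layer: UBIFS filesystems currently mounted from UBI volumes
grep ubifs /proc/mounts
```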
(Take the rest of this answer with caution, I've never actually worked with UBI.)
The ubi0:root volume is created by arguments to the ubi module or by the ubiattach utility. It is not a block device, because the interface between the UBI layer and the filesystem on top of it is more complex than “write this byte at this location”. You can create a read-only block device on top of UBI with the ubiblock command, then back that up with something like
ubiblock --create /dev/ubi0_0
cat /dev/ubiblock0_0 >backup
ubiblock --remove /dev/ubi0_0
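Once ubiblock has exposed the volume as a read-only block device, the copy is an ordinary file and can be verified like any other image. The sketch below simulates this with a regular file, since /dev/ubiblock0_0 only exists on a system with UBI-capable flash:

```shell
# A plain file stands in for /dev/ubiblock0_0 here (16 KiB of zeros)
dd if=/dev/zero of=/tmp/ubiblock.img bs=1024 count=16 2>/dev/null
# Same copy step as "cat /dev/ubiblock0_0 >backup" above
cat /tmp/ubiblock.img > /tmp/backup
# Confirm the backup is byte-for-byte identical to the source
cmp /tmp/ubiblock.img /tmp/backup && echo "backup verified"
```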
The same word can have multiple meanings depending on who's doing the talking and what they're talking about. In general, a distributed filesystem is something like CIFS or NFS, where the files can be served from multiple nodes. With CIFS this is done via DFS (literally "Distributed File System", where clients get referrals to the server that houses the requested file or folder), and with NFS this is done via pNFS ("Parallel NFS", which is more about removing performance bottlenecks by enabling parallel I/O).
A clustered filesystem is one where the filesystem metadata is structured to allow multiple nodes to have concurrent access to the same block device. Usually this involves each node that mounts the filesystem having its own journal, and implementing filesystem locks that are transmitted over the HA cluster's heartbeat network.
I'm using AFS, NFSv3, NFSv4, and CIFS currently. CIFS is primarily for supporting Windows clients, and I find it less suitable for UNIX/Linux clients since it requires a separate mount and connection for each user accessing the share. Users can share the same mount point, but they will then be seen as the same user on the server side of the connection.
NFSv3 is primarily used for directories being exported to other UNIX/Linux servers, since it's stable and simple to deal with. With both AFS and NFSv4 I am using Kerberos. Using NFSv4 on Ubuntu 8.04 and older I found it a bit unstable, but it has steadily improved, and I have no stability issues with 10.04+. Using sec=krb5p does appear to be a performance bottleneck, so I tend to use sec=krb5i or sec=krb5.
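A Kerberos-protected NFSv4 mount of the kind described above looks something like the following; the server and export names are made up:

```shell
# sec=krb5i gives integrity protection; sec=krb5p adds encryption at a
# performance cost, and sec=krb5 does authentication only
mount -t nfs4 -o sec=krb5i nfs.example.com:/export/home /mnt/home
```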
One issue I have is how Kerberos tickets are handled by Linux's NFSv4 layer. A daemon periodically scans /tmp for files beginning with krb5cc_ and matches the ticket up with the file owner. If a user owns more than one ticket file under /tmp, whichever one is found first during the scan is used, and I've accidentally changed my identity when temporarily acquiring a ticket for other purposes. AFS stores tokens in kernel space, and they are normally associated with a login session. I can log in twice as the same Linux user, yet use different AFS credentials in each login without interference. With AFS I also have to explicitly load credentials into the kernel, which normally happens automatically during login, and I can safely switch tickets in userspace without interfering with file permissions.
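One way to sidestep the /tmp-scanning problem with NFSv4 is to keep a temporary ticket in a credential cache outside /tmp, where the scan never sees it. The principal and cache path below are made up:

```shell
# Acquire the extra ticket into its own cache file outside /tmp so the
# NFSv4 gssd daemon's scan of /tmp/krb5cc_* never picks it up
KRB5CCNAME=FILE:$HOME/.krb5cc-other kinit other-user@EXAMPLE.COM
# Use the alternate cache explicitly without touching the default one
KRB5CCNAME=FILE:$HOME/.krb5cc-other klist
```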
Overall, I like many of the ideas of AFS better than NFSv3/4, but it has quite a bit smaller a community developing it compared to NFS and CIFS. It's also properly known as OpenAFS; AFS is the name of IBM's closed-source offering. A big difference between AFS and NFS is that AFS is more consistent in its network protocol and support. AFS provides locking in-band instead of using a side-band protocol like NFSv3 does. It also offers an ACL system whose sophistication sits between POSIX ACLs and NFSv4/NTFS/CIFS ACLs. Unlike the POSIX ACL addition to NFSv3, this is a standard part of its protocol, and both Windows and UNIX/Linux clients can access and modify the ACLs. It also doesn't suffer from the 16-group limit that many NFSv3 servers have. This makes AFS appear more consistent in my mind across Windows and UNIX systems. Also, since AFS is only accessible via its network protocol, there aren't issues where the actual underlying filesystem behaves slightly differently from the exported view of it. For example, on Linux, a file may have MAC or SELinux labels controlling access, or other extended attributes, that aren't visible over NFS. AFS, on the other hand, simply doesn't have extended attributes.
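For illustration, AFS ACLs apply per-directory and are managed with the fs command from the OpenAFS client tools; the cell path and user names here are made up:

```shell
# Show the ACL on an AFS directory
fs listacl /afs/example.com/home/alice
# Grant bob read, lookup, insert, delete, write, and lock rights
fs setacl -dir /afs/example.com/home/alice/shared -acl bob rlidwk
```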