I don't know why the default LVM setup was chosen, but I can offer some background on how LVM works.
Without LVM, you divide a disk into partitions, and each partition contains a filesystem (/, /home, etc.) or something else such as swap space.
LVM introduces a layer of insulation between the disk structures and the content-bearing structures. I'll refer you to the Wikipedia article for a more in-depth presentation, but in a nutshell, each disk partition is an LVM physical volume, while each filesystem or swap area is an LVM logical volume. There's no relationship between the extent of logical and physical volumes: the space in a physical volume can be divided between several logical volumes, and a logical volume can be stored across multiple physical volumes.
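As a minimal sketch of that layering (the device names, the volume group name vg0, and the sizes here are all hypothetical; the Ubuntu installer does the equivalent for you):

```shell
# Hypothetical sketch: two partitions become physical volumes,
# pooled into one volume group, then carved into logical volumes.
sudo pvcreate /dev/sdb1 /dev/sdc1       # mark partitions as LVM physical volumes
sudo vgcreate vg0 /dev/sdb1 /dev/sdc1   # pool them into one volume group
sudo lvcreate -L 20G -n root vg0        # a logical volume for /
sudo lvcreate -L 2G  -n swap vg0        # a logical volume for swap
# The "root" logical volume may span both disks; mkfs and mkswap
# then operate on /dev/vg0/root and /dev/vg0/swap.
```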
This explains why you're seeing two views. At the disk partitioning level, you have a disk with two partitions, one of which happens to be an LVM physical volume. At the content level, you have several filesystems, some of which happen to be on LVM logical volumes.
Parted isn't showing the LVM logical volumes. Either this version of Parted doesn't support LVM (hmm, I thought it did), or you need to tell it to switch to a different view, or you have already created partitions but not logical volumes yet.
I don't know where you're seeing 4MB wasted. I see 17MB unallocated, and I don't know why. Up to 4MB unused could happen with LVM: the size of each logical volume is a multiple of 4MB.
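That 4MB granularity is LVM's default extent size: a logical volume always occupies a whole number of extents, so a requested size is rounded up to the next extent boundary. A quick sketch of the rounding (the 250MB request is just an example figure):

```shell
# LVM allocates logical volumes in whole extents (4 MB by default),
# so a requested size is rounded up to the next multiple of 4 MB.
requested_mb=250                                          # example request
extent_mb=4                                               # default extent size
extents=$(( (requested_mb + extent_mb - 1) / extent_mb )) # 63 extents
echo "$(( extents * extent_mb )) MB allocated"            # 252 MB allocated
```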
I don't know for sure what the 255MB ext2 partition is for, but I guess it's meant to be /boot. It used to be that Grub, the default Ubuntu bootloader, couldn't boot from LVM. But Grub 2, the default bootloader for new installations since Ubuntu 9.10, supports an all-LVM installation, so you probably don't need that boot partition. (There are rare cases where an ext2 boot partition is still useful, for example if you're dual-booting with another operating system that doesn't support loading a kernel from LVM or ext4.)
I think I've addressed everything except the amount of swap. (Aside: you shouldn't ask unrelated questions in one question. The amount of swap has nothing to do with your questions about LVM. But the topic has already been done to death, so don't ask it separately, just search the site.)

Since disk space is cheap, don't hesitate to have ample swap. The machine I'm posting this from has 4GB of RAM and 16GB of swap. Having little or no swap space will absolutely not “force the OS to use faster RAM”; this is completely wrong, and I advise treating any source that says this with deep suspicion. (As far as I know this applies to Windows as well.) The OS will use RAM whenever it can. Swap is only used as a last resort, when there isn't enough RAM.

Note that on a normal system, you should expect to see some swap in use. That's because RAM isn't just for storing the memory of running processes; it's also for caching disk contents. In fact, it's quite common to have about half the RAM used by the disk cache, and to have some process memory swapped out. If your system didn't do this, it would run slower, because it would waste more time reloading the same files from disk again and again.
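You can observe that split on any running system with free (the exact column layout varies between versions):

```shell
# Shows total/used/free RAM, the buff/cache column holding the disk cache,
# and the swap line; some swap in use here is normal, not a problem.
free -h
```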
You can use btrfs with or without partitioning.
With partitioning, first create partition tables and partitions on both disks. Let's say /dev/sdb1 and /dev/sdc1.
Then create the btrfs filesystem by running:
sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdb1 /dev/sdc1
You are done.
Note: The previous command creates a btrfs filesystem on the specified partitions. You don't need to "format" them either before or after running this command; mkfs.btrfs is fully sufficient to create a filesystem.
Now you can mount this raid to any directory you like with:
sudo mount /dev/sdb1 /mount_directory
You can mount either of the disks with the same result: the raid will be mounted.
You can mount it permanently in /etc/fstab. If you don't set specific options there, like compression, they won't be used.
Alternatively, you don't have to create partitions and can create the raid directly on the whole devices:
sudo mkfs.btrfs -m raid1 -d raid1 /dev/sdb /dev/sdc
There is not much difference between the two methods, but I prefer to have partitions, traditionally.
I am not aware of any settings that will "reduce the consumption of system resources". The kernel module doesn't consume much by itself. If you don't need any features like deduplication or compression simply don't use them.
But take into account that lzo compression in most cases speeds up the disks and also increases effective storage space, because decompression is faster than reading from disk.
Answering detailed questions:
btrfs-tools is installed by default, and nothing else is needed to manage a btrfs filesystem.
There is not much difference if you plan to use your disks only for the raid; it doesn't affect performance in any way. But if you create partition tables on the disks, you'll be able to shrink your btrfs raid partition on one or both disks and create some other filesystem(s) on part of the disk(s). It adds a bit of flexibility for the future. You can shrink btrfs partitions even while a disk is in use.
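As a sketch, such an online shrink might look like this (the amount, device ID, and mount point are hypothetical):

```shell
# Hypothetical sketch: shrink a mounted btrfs by 10 GiB on device ID 1
# of the raid, freeing room at the end of that partition.
sudo btrfs filesystem resize 1:-10g /mount_directory
# Then shrink the underlying partition itself with parted/gparted.
```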
I am not aware of any user-accessible settings regarding L2 cache usage. You can do a manual or automatic defragmentation; there is an autodefrag mount option for the automatic kind. I didn't notice much RAM usage with autodefrag, but that needs testing if RAM is a real concern.
You can mount your raid with the compress=lzo option. If you do it on an empty raid, all files will be compressed. You can also enable compression later, but in that case already existing data won't be automatically compressed; only new data will. However, you can always defragment the existing data with the -clzo option, which will compress it.
I always use lzo because it adds some performance on HDDs, saves some read/write cycles on SSDs, and gives extra space on both.
Example of an fstab entry with autodefrag and lzo:
UUID=xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx /mount_point btrfs compress=lzo,autodefrag 0 0
The UUID can be taken from sudo blkid or gparted.
This way you compress the filesystem with lzo:
sudo btrfs fi defrag -r -v -clzo /mount_point
If you want filesystem information and not partition/volume information, I think you'll have to use filesystem-specific tools.
In the case of the extN filesystems, that would be dumpe2fs. And dumpe2fs doesn't directly print the size in bytes, as far as I can tell. It does, however, print the block count and the block size, so you can parse the output instead. In my case, this size is slightly different from the partition size: the partition size is 29999983104 bytes, 2560 bytes more than a multiple of the block size, which is why the size reported by dumpe2fs is less.
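A sketch of that parse. The sample text stands in for real dumpe2fs output (on a live system you would pipe sudo dumpe2fs -h /dev/sdXN into the same awk); the values assume a 4096-byte block size, which is consistent with the 2560-byte remainder mentioned above:

```shell
# Extract "Block count" and "Block size" from dumpe2fs-style output
# and multiply them to get the filesystem size in bytes.
sample='Block count:              7324214
Block size:               4096'
count=$(printf '%s\n' "$sample" | awk -F': *' '/^Block count:/ {print $2}')
bsize=$(printf '%s\n' "$sample" | awk -F': *' '/^Block size:/ {print $2}')
echo "$(( count * bsize )) bytes"   # 29999980544 bytes
```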