Ubuntu 11.10 Oneiric does not ship with cryptsetup 1.4, although Precise does. I don't know whether cryptsetup can be upgraded on Oneiric or not. Since Precise will be released in a month, you can also wait for that release before considering TRIM with encrypted partitions. The kernel can always be upgraded afterwards.
From http://code.google.com/p/cryptsetup/wiki/Cryptsetup140:
Support --allow-discards option to allow discards/TRIM requests.
Since kernel 3.1, dm-crypt devices optionally (not by default) support block discards (TRIM) commands.
If you want to enable this operation, you have to enable it manually on every activation using --allow-discards
cryptsetup luksOpen --allow-discards /dev/sdb test_disk
WARNING: There are several security consequences, please read at least
http://asalor.blogspot.com/2011/08/trim-dm-crypt-problems.html
before you enable it.
As you can see, this feature is not enabled by default because of the degraded security mentioned in the linked blog. So, if you use cryptsetup on kernel 3.0 (the one shipped with Oneiric), you won't have TRIM support on your encrypted partitions. Even after upgrading to 3.1+, you still won't have it unless you enable it.
To do so, you have to edit /etc/crypttab after installation (not sure if it's possible during installation) and add the discard option. See also crypttab(5).
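For illustration, a crypttab entry with discards enabled might look like the following; the mapping name and UUID are placeholders I made up, not values from any real system:

```
# /etc/crypttab -- <target name>  <source device>  <key file>  <options>
# Name and UUID below are placeholders; substitute your own.
sdb_crypt  UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  none  luks,discard
```

The discard option in the fourth field is what makes the boot scripts pass --allow-discards when the device is opened.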
I know I'm being a Johnny-come-lately to this question, but I'd like to see if I can shed some light on this for anyone searching.
First, @ppetraki's answer is excellent.
The short answer to "Can I RAID SSDs and boot from them?" is "Yes!". Here are instructions for 14.04. Instructions for RAID configuration on 12.04.x are identical, but this tutorial using 9.10 has pictures. What follows are some important gotchas and details I had to discover the hard way, through personal experience:
I'm running Ubuntu 12.04.5 with the 3.8 kernel on an MD RAID0 configuration and the SSD-friendly Btrfs filesystem. I run fstrim as a weekly cron.
My extra Btrfs mount options from fstab:
defaults,ssd,ssd_spread,space_cache,compress=no,noatime
The 3.8 kernel is required if you want to use compress=no as a Btrfs mount option, and may also be required for fstrim, the manual trim command used for scheduled trims.
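Put together in fstab, such an entry might look like this; the UUID and mount point are placeholders, not taken from my actual system:

```
# /etc/fstab -- illustrative Btrfs entry; UUID and mount point are placeholders
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  btrfs  defaults,ssd,ssd_spread,space_cache,compress=no,noatime  0  0
```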
You must also manually align partitions on the SSDs (on any multi-partition setup, RAID or not) BEFORE booting to the installer. Depending on the page size of your SSD, only the first partition may end up properly aligned by default (it took me a while to catch it), and misalignment can severely impact drive lifespan. You can do this from a command prompt within the installer, or from a live USB/disc before you attempt installation. Caveat: do the math yourself; fdisk will lie about alignment.
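To give an idea of the math involved, here is a minimal sketch of the rounding you need to do, assuming 512-byte sectors and the common 1 MiB alignment boundary (a multiple of every SSD page size I know of); the sample sector numbers are made up:

```shell
# Align partition start sectors to 1 MiB boundaries (assumes 512-byte sectors).
SECTOR_BYTES=512
ALIGN_SECTORS=$(( (1024 * 1024) / SECTOR_BYTES ))   # 2048 sectors = 1 MiB

# Round a proposed start sector UP to the next alignment boundary.
align_up() {
    echo $(( ( ($1 + ALIGN_SECTORS - 1) / ALIGN_SECTORS ) * ALIGN_SECTORS ))
}

align_up 34       # first usable sector on GPT -> prints 2048
align_up 976773   # made-up end of a previous partition -> prints 976896
```

Every partition whose start sector is a multiple of 2048 is 1 MiB aligned, regardless of the SSD's actual page size.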
Further reading: I think Btrfs can even create its own raid arrays.
Regarding TRIM:
It's arguably unnecessary thanks to overprovisioning
14.04 is the first release to enable TRIM support out-of-the-box but it's trivial to enable on previous distributions, provided you are using kernel 2.6.33+.
Depending on your chosen filesystem, you can enable trim/discard by editing your fstab and setting the appropriate mount option. The difference between doing this and running it via cron is that the first will trim/discard on-the-fly and the second will do it in a lump on a schedule. I use the second.
Does it matter? Supposedly, the online discard (using the mount option) is not wonderfully implemented and is slow so it's "not recommended". I can tell you that my "hdd" (hehe) lights go nuts for 10-20 minutes when the weekly cron job runs but the OS responsiveness is almost completely unaffected.
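For reference, my weekly cron is nothing fancier than a small script dropped into /etc/cron.weekly; this is a sketch, and the mount points listed are assumptions, so substitute your own SSD-backed filesystems:

```
#!/bin/sh
# /etc/cron.weekly/fstrim -- trim mounted filesystems in one weekly batch.
# Mount points below are examples only.
fstrim -v /
fstrim -v /home
```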
Booting from the array
Though I don't see this in a quick scan of the Ubuntu 14.04 instructions, I had to create an additional primary partition that is NOT part of my RAID arrays. Disk 0 has a 500 MB primary ext3 partition. During installation I told the installer this was to be mounted at "/boot" and I set the bootable flag. The bootloader is installed there so the OS can start and then mount the RAID. The remaining Disk 0 space is divided between two partitions which are later used for the MD arrays that become "/" and swap. Disk 1 has the same, but no boot partition. Also, I only created the swap in case I need it sometime, since Btrfs does not support swapfiles. This partition is never mounted; after installation, I commented it out in my fstab.
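Sketched out, the layout described above looks roughly like this:

```
Disk 0: [ /boot  500 MB ext3, bootable ][ md0 member -> "/" ][ md1 member -> swap ]
Disk 1:                                 [ md0 member -> "/" ][ md1 member -> swap ]
```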
Forgive all the edits, just trying to get it all out there.
After snooping around and reading up on TRIM and fragmentation, I'll answer my own question in the hope that it helps others.
Reading about TRIM, frequent references to file fragmentation are made. Both aspects are the source of legitimate questions about SSD storage performance; however, the two issues are distinct.
During write operations (Wops), SSDs behave quite differently from HDDs. An HDD never needs to erase a block prior to a Wop. An SSD always does, and that is time-consuming. TRIM helps alleviate that time cost by pre-conditioning recently freed blocks on a TRIM-capable SSD, essentially pre-erasing blocks that are freed after a file is modified and moved to a different area of the volume. This is a simplified view of reality, but one that the non-expert user can roughly rely on to begin making decisions about hardware and low-level hardware administration. Read on...
Is TRIM related to SSD fragmentation?
- Short answer: No, they are not related.
- Long answer: Fragmentation is related to wear-leveling (WL), yet another process that optimizes the life span of SSDs. WL is essential to homogenize Wops over the whole of an SSD's free/available/unreserved block space inside a volume/partition. It does so because each Wop ages the corresponding SSD cells, through the application of a relatively large voltage over a tiny area of the semiconductor layer, thereby reducing their life span. (I believe this has to do with thermally induced defects introduced in the bulk of the SSD, but that's off-topic.)
If Wops were managed on SSDs as they are on HDDs, certain areas of the storage medium would wear out long before others, leading to non-operable blocks, loss of capacity, loss of data and errors. WL ensures that all blocks in any given SSD partition are subjected to the same amount of Wops and that the wear is "leveled out" over the entire available partition space. In that sense, it effectively increases the life span of the SSD while maintaining its full capacity until its demise.
There are two WL modes: static and dynamic. This wiki (in German) specifies that write-cycle counts at SSD's end of life may increase 100 fold for static and 25 fold for dynamic WL as compared to the same hardware with WL turned off.
As WL physically distributes Wops (the limiting parameter that defines the life span of SSDs) as uniformly as possible over the entire storage space inside a partition, it will inevitably contribute to data fragmentation. It does so in order to achieve its prime objective: an optimized distribution of written blocks throughout any given SSD partition. The upshot is that any file stored on an SSD may be fragmented quite a bit more than it would be on a traditional HDD. Fragmentation, however, does not translate into any performance decline for the SSD.
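A toy back-of-the-envelope comparison may make the life-span argument concrete; the block and write counts below are made-up numbers, not measurements:

```shell
BLOCKS=1000      # erase blocks available to the wear-leveler (made up)
WRITES=500000    # total rewrites over some period (made up)

# Worst case without WL: every rewrite hammers the same "hot" block.
hot_block_wear=$WRITES

# Ideal WL: the same rewrites are spread evenly over all blocks.
leveled_wear=$(( WRITES / BLOCKS ))

echo "without WL: $hot_block_wear cycles on one block; with WL: $leveled_wear per block"
```

With these numbers, WL turns 500000 erase cycles on one doomed block into 500 per block, at the price of files being scattered (fragmented) across the medium.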
How WL operates has other corollaries: the bigger the SSD volume, the greater its life span for given conditions of use. For the user, "conditions of use" mean primarily:
- the amount of used space on an SSD's partition and
- Wop frequency, i.e. how heavily the storage medium is write-accessed.
This actually may speak in favor of:
- placing the Linux swap, /home, /tmp and /var on an HDD, while the rest of the OS can live on happily on a smaller SSD.
- not making any partition with a lot of Wops too small on an SSD. For instance, if swap must be on an SSD and you read that the Linux swap is best set at twice your DRAM size, make it 4 times. I don't know whether my arithmetic is right, but the general idea is that this will also more or less double your swap space's life span. If you conduct a lot of operations that require swapping (servers with heavily used databases, etc.), think about moving your swap and /tmp onto an HDD, unless you like the idea of taking a gas burner to your SSD, of course.
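The arithmetic behind "make it 4 times instead of 2" is simply that, for a fixed write load, per-unit-of-space wear scales inversely with partition size; a rough sketch with made-up numbers:

```shell
RAM_GB=8                          # hypothetical machine
WRITES=1000000                    # fixed swap write load (made up)

small_swap_gb=$(( 2 * RAM_GB ))   # the usual "2x RAM" guideline
big_swap_gb=$(( 4 * RAM_GB ))     # the doubled variant suggested above

# Per-gigabyte wear for the same write load: doubling the space halves it,
# which is the sense in which the bigger partition lasts roughly twice as long.
small_wear=$(( WRITES / small_swap_gb ))
big_wear=$(( WRITES / big_swap_gb ))

echo "wear: $small_wear vs $big_wear writes per GB"
```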
Meanwhile, TRIM prepares SSD blocks for any upcoming new Wop. It pre-conditions once-written cells for a new Wop by erasing those cells (the operation actually occurs at the level of a block) and starting garbage collection when needed. In that sense, TRIM keeps an eye on the distribution map of newly freed (and at least once used) blocks as they are being low-level managed by the WL controller.
Conclusion:
HTH.