The native Self-Encrypting Drive (SED) function is always on. This means the data on the SSD is always encrypted; by default, however, no password is set. The drive holds an internal key that is unlocked with the BIOS password you enter at boot. The motherboard needs an HDD password option in its BIOS for you to enter that password; most older desktop motherboards don't have one, but most laptops (even older ones) do.
The RAID function is problematic: it wasn't long ago that Intel added TRIM passthrough for RAID 1, and only more recently for RAID 0. Basically, eDrive support would have to be implemented in Intel's RAID drivers too, and that is probably very tricky for RAID 0. While I'm not that knowledgeable on the subject, it appears near impossible with the current implementation.
Windows software RAID 0 is another possibility, and it lets the drives stay in AHCI mode, which allows eDrive commands to pass through. However, I'm not aware of hardware-level BitLocker support in software RAID mode; you'll have to try it out. Generally speaking, the speed difference between this type of "hardware" RAID and software RAID is negligible. True server-grade RAID implementations are another matter.
In short, use the BIOS HD password (usually named this way even on UEFI systems; also known as the ATA password) if you wish to use Intel RAID, but you will miss out on eDrive. If you want eDrive, try software RAID, which may or may not work.
Also, if there is no clear "HD password" type of setting in the UEFI BIOS and BIOS-level HD passwords are not documented, you can try setting a general start-up password, then removing the disk and testing it in another computer or in an external USB enclosure. If it is password-locked, it should neither boot nor register in Windows; i.e., it appears to be dead. The Secure Boot function does not affect user passwords; it is a firmware mechanism that verifies the OS at boot and as such has no bearing on this problem.
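To double-check whether an ATA password is actually set, a quick sanity check on Linux is to read the drive's security section with hdparm (the device name /dev/sda below is an example; substitute your own drive):

```shell
# hdparm -I prints a "Security" section showing whether the ATA password
# feature is supported, enabled, and/or frozen on this drive.
sudo hdparm -I /dev/sda | grep -A 8 "^Security"
```

If "enabled" appears in that section, a drive password is set, and the disk will appear dead in other machines exactly as described above.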
I don't think you can "tune" the sector size of your drive. Intel only provides a way to switch between the legacy 512-byte sector size and the "new industry standard" of 4096 bytes per sector. That option is probably only for people having compatibility or performance issues with the new size.
Indeed, nowadays, several filesystems can use a 4 kB fs block size.
So, what you can sometimes do is use a custom filesystem block size. You may want to try 8 kB here, if possible; you can usually only choose this at format time.
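If you want to experiment, the block size is normally fixed with mkfs at format time. A safe way to try it without touching a real disk is a small file-backed image (all paths below are examples):

```shell
export PATH="$PATH:/sbin:/usr/sbin"   # mkfs/dumpe2fs often live in sbin
# Create a throwaway 64 MB image and format it with an explicit block size.
truncate -s 64M /tmp/blk_demo.img
mkfs.ext4 -F -b 4096 /tmp/blk_demo.img
# Confirm which block size the filesystem actually got:
dumpe2fs -h /tmp/blk_demo.img | grep "Block size"
```

Note that XFS accepts larger block sizes at mkfs time (e.g. `mkfs.xfs -b size=8192`), but Linux has traditionally only mounted filesystems whose block size does not exceed the page size (4 kB on x86), which is why 8 kB may be refused in practice.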
Even if you can't, you may be able to tweak other fs parameters that could help. The ext4 filesystem, for example, only handles 1, 2 and 4 kB blocks, but it can use the stride and stripe_width parameters to change its behavior in a way that can improve performance on SSDs or RAID arrays.
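As a sketch, assuming a 2-disk RAID 0 with a 512 kB chunk size and 4 kB filesystem blocks (all example values): stride is the chunk size divided by the block size, and stripe_width is stride times the number of data disks. The demonstration below formats a throwaway image; on a real array you would target /dev/md0 instead:

```shell
export PATH="$PATH:/sbin:/usr/sbin"       # mkfs tools often live in sbin
# Assumed geometry: 2-disk RAID 0, 512 kB chunk size, 4 kB blocks.
CHUNK_KB=512; BLOCK_KB=4; DATA_DISKS=2
STRIDE=$((CHUNK_KB / BLOCK_KB))           # 512 / 4 = 128
STRIPE_WIDTH=$((STRIDE * DATA_DISKS))     # 128 * 2 = 256
# Format a file-backed image with the computed RAID geometry hints.
truncate -s 64M /tmp/stride_demo.img
mkfs.ext4 -F -b 4096 -E stride=$STRIDE,stripe_width=$STRIPE_WIDTH /tmp/stride_demo.img
```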
Note that it is easy to make things worse by tweaking default parameters without a good understanding of how things work. I suggest you benchmark each configuration you try.
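For example, a crude sequential-write test can be done with dd (the target path is an example); fio is the better tool for random-I/O workloads if it is installed:

```shell
# Time a 256 MB sequential write; conv=fdatasync forces a flush at the end
# so the page cache does not inflate the reported throughput.
dd if=/dev/zero of=/tmp/raid_bench.bin bs=1M count=256 conv=fdatasync
# A more realistic mixed random read/write test with fio (if available):
#   fio --name=randrw --rw=randrw --bs=4k --size=256M --runtime=30 \
#       --filename=/tmp/fio_bench --direct=1
```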
Best Answer
You need to read the question properly.
He is asking about RAID0 STRIPE, not RAID1 MIRROR.
My answer: YES, you will see a significant speed improvement.
ref: http://staff.science.uva.nl/~delaat/rp/2009-2010/p30/presentation.pdf
Speed: My workstations run Linux Mint using software RAID (mdadm), with 4 drives in a stripe and XFS as the filesystem. Once you sit at such a workstation, you do not want to go back to the old days of a single platter drive.
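A setup like this can be sketched with mdadm as follows (device names are examples, and creating the array destroys any data on those disks):

```shell
# Build a 4-drive RAID 0 stripe and put XFS on it.
mdadm --create /dev/md0 --level=0 --raid-devices=4 \
      /dev/sda /dev/sdb /dev/sdc /dev/sdd
# mkfs.xfs reads the md geometry and picks matching su/sw
# (stripe unit / stripe width) values automatically.
mkfs.xfs /dev/md0
mount /dev/md0 /mnt/scratch
```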
Back up your workstation daily with incremental backups, plus a weekly full backup, in case one SSD crashes.
The speed is great, but if ONE SSD crashes you lose a lot of data. So you are warned.
Back up, and use the cloud to store additional copies of files.
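A minimal sketch of that rotation with rsync (all paths are examples): the weekly run is a plain copy, and daily runs use --link-dest so unchanged files become hard links to the full backup and cost almost no extra space.

```shell
mkdir -p /tmp/demo_src /tmp/backups
echo "important data" > /tmp/demo_src/file.txt
# Weekly full backup:
rsync -a /tmp/demo_src/ /tmp/backups/full/
# Daily incremental: files identical to the full copy are hard-linked,
# not duplicated.
rsync -a --link-dest=/tmp/backups/full /tmp/demo_src/ /tmp/backups/monday/
```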
Storage: My NAS runs FreeBSD with ZFS RAIDZ2 for storage, using six 3 TB drives (4 data + 2 parity). That gives me 12 TB usable, with two drives' worth of capacity providing redundancy, so I can lose 2 drives at a time without losing data. My NAS runs on regular platter drives.
ZFS is currently the best filesystem for disks, certainly for storage. You can look at FreeBSD or a dedicated NAS software solution such as FreeNAS, ZFSguru, NexentaStor, etc. I chose ZFSguru because I like to tweak the underlying FreeBSD system. I use iSCSI and SMB/NFS shares on it.
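On FreeBSD, a pool with that 4 data + 2 parity layout can be sketched like this (pool and device names are examples for a ZFS-capable system):

```shell
# Six 3 TB drives in RAIDZ2: any two can fail without data loss.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5
zpool status tank      # verify the vdev layout and health
```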
Servers:
My favorite is to use platter drives for the ZFS pool and an SSD for the ZIL (the ZFS intent log). But tuning that is a dark art.
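Attaching the SSD as a separate log device is a one-liner (pool and device names are examples); note that a separate log only absorbs synchronous writes, which is part of why tuning it is tricky:

```shell
# Add an SSD as a dedicated ZFS intent log (SLOG) to an existing pool.
zpool add tank log ada6
# If the pool matters, mirror the log device:
#   zpool add tank log mirror ada6 ada7
```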
NOTE 1:
Try to avoid hardware RAID; in case of a controller failure you need to find the same hardware again. Do not use the cheap RAID controllers on consumer motherboards. Use software RAID supported by the OS, for the sake of recovery: the OS has more ways to deal with RAID than the crappy firmware in those hardware controllers.
NOTE 2:
When using ZFS, avoid hardware RAID controllers at all costs. Look for motherboards with enough SATA ports to connect your drives; there are also dedicated controllers (plain HBAs) without RAID functionality.
Set up the RAID using ZFS itself.
NOTE 3:
SSDs no longer scale beyond 4 disks; HDDs continue to scale past 5 disks.
NOTE 4:
There are different types of SSD.
SSDs come in SLC and MLC variants. SLC drives are the most expensive, but they are the fastest and best suited for heavy read/write workloads.