You made a very long list which I am not going to answer point by point. However, I do want to make these things very clear:
1) PCI cannot sustain those speeds. PCI Express can; it is a totally different technology with point-to-point links (called lanes) instead of a shared bus. The card you linked to is "PCIe x4", and that extra 'e' is very much relevant.
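To put rough numbers on that difference, here is a back-of-the-envelope sketch (theoretical peak figures, ignoring protocol overhead; which PCIe generation your slot runs at is an assumption):

```python
# Rough peak bandwidth: legacy shared PCI vs. a PCIe 2.0 x4 point-to-point link.
pci_classic = 33e6 * 4        # 32-bit / 33 MHz PCI: ~133 MB/s, shared by every card on the bus
pcie2_lane  = 5e9 / 10        # PCIe 2.0: 5 GT/s per lane, 8b/10b encoding -> ~500 MB/s per lane, per direction
pcie2_x4    = 4 * pcie2_lane  # ~2 GB/s for an x4 link

for name, bw in [("PCI 32-bit/33MHz", pci_classic),
                 ("PCIe 2.0 x1", pcie2_lane),
                 ("PCIe 2.0 x4", pcie2_x4)]:
    print(f"{name:18s} ~{bw / 1e6:6.0f} MB/s")
```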
2) Striping (RAID 0, RAID 10, etc.) is quite possible, either with high-performance disks or with normal ones. An office-corner-shop, bog-standard 7200 RPM SATA drive will do about 100 MB/sec, so you would need at least a dozen of these (since things never scale perfectly); a quick sketch of that arithmetic follows.
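A minimal sketch of the drive-count estimate, assuming the ~1 GB/sec target from point 8 and an assumed (not measured) ~85% striping efficiency:

```python
import math

target    = 1000   # MB/s -- the ~1 GB/sec target discussed in point 8
per_drive = 100    # MB/s -- realistic sustained write for a bog-standard 7200 RPM SATA drive
scaling   = 0.85   # assumed striping efficiency; RAID 0 never scales perfectly

print(math.ceil(target / (per_drive * scaling)), "drives")   # -> 12 drives
```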
3) HW RAID, software RAID and fake RAID (software RAID with BIOS support, e.g. Intel RST) will all work.
Software RAID is not recommended if you need to do a lot of parity calculations (e.g. RAID 6) and need high performance, or if you have a slow CPU.
Hardware RAID will vary. A good HW RAID card is great; a bad one might perform quite poorly compared to a good SW RAID solution. Good HW RAID often needs a battery-backed or flash-backed cache to enable the fast write modes.
4) SATA II or III (3.0 or 6.0 Gbit/sec), SAS 3 Gbit/sec, SAS 6 Gbit/sec, ... does not matter. An individual spinning disk will not saturate any of these links. Current consumer SATA drives max out around 100 MB/sec sustained, and high-end enterprise SAS drives can get up to about 200 MB/sec. Both are well below what even a 3 Gbit/sec link can carry.
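A minimal sketch of why the link speed is not the bottleneck, assuming 8b/10b encoding on these link generations (so divide the line rate by 10 to get usable bytes):

```python
# Usable link bandwidth vs. what a spinning disk can actually sustain, in MB/s.
links  = {"SATA II / SAS 3 Gbit": 3e9 / 10 / 1e6,   # ~300 MB/s usable
          "SATA III / SAS 6 Gbit": 6e9 / 10 / 1e6}  # ~600 MB/s usable
drives = {"consumer 7200 RPM SATA": 100,            # sustained write
          "high-end enterprise SAS": 200}           # sustained write

for name, mbs in {**links, **drives}.items():
    print(f"{name:24s} ~{mbs:4.0f} MB/s")
```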
5) RAID 0 is not very safe. If one disk fails, you lose everything. This might be acceptable if you just need to capture the data and then immediately copy it somewhere safe or process it. However, the more disks you use, the more likely it becomes that one of them fails.
RAID is usually about redundancy. RAID 0 is not; it is solely about performance.
6) Lastly, for completeness' sake: SSDs are not inherently bad for this. For this much data they will be expensive and possibly unnecessary, but an SSD does not have to slow down. Just completely wipe the SSD (e.g. delete all partitions, or secure-erase it) before you add it to the recording array. Once it fills up it may slow down, but prep it properly, run it for one session, and it should be fine.
7) "AHCI is the only BIOS setting relevant to fast sequential writes (turn on SMART too)."
You cannot turn SMART on or off; it is always on, on the drive itself. The option in the BIOS just means "read the drive's SMART data during POST and, if anything is wrong, warn the user", usually with a single line like "SMART: DISK FAILURE IMMINENT. Press F1 to continue!". It has no performance influence.
Set both of these before installing Windows.
For consistent performance: Install the OS on its own drive. Keep separate volumes for OS and for data.
8) "To sustain 1 GB/sec indefinitely, you need >3 7200 RPM 6 Gb/sec SATA drives (6 Gb/sec * 1/8 GB/Gb = 0.75 GB/sec/drive with no headroom)."
No.
A SATA drive on a 6 Gbit/sec link can transfer roughly 600 MB/sec (about 570 MiB/sec) between disk and controller/RAID card: dividing 6.0 by 8 for bits-to-bytes would give 750 MB/sec, but there is also encoding overhead, so dividing by 10 is more realistic.
Secondly, the drive can receive the data quite quickly, but actually writing it to the platters is slower. A realistic value for a modern 7200 RPM SATA drive is 100 MiB/sec sustained write.
That means you need at least 10 such drives, and that is only if everything scales perfectly.
"More drives will improve your bandwidth headroom linearly, but saturate after the data width of your bus (32 or 64)."
True for PCI. But despite writing PCI, the OP meant PCIe, which is a lot faster.
Four lanes of PCIe v2 give roughly 2 GB/sec of usable bandwidth per direction (a v1 x4 link is about 1 GB/sec). That should be enough, though on v1 there is not much headroom.
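To see how much headroom is left on the bus, here is a minimal sketch, assuming a dozen ~100 MB/sec drives hang off a single x4 card (the exact drive count and the PCIe generation of the slot are assumptions):

```python
# Headroom check: does an x4 link carry everything a dozen striped drives can write?
drives, per_drive = 12, 100e6          # assumed drive count, ~100 MB/s sustained each
array_rate = drives * per_drive        # ~1.2 GB/s coming off the stripe set

pcie_x4 = {"PCIe v1 x4": 4 * 2.5e9 / 10,   # 2.5 GT/s/lane, 8b/10b -> ~1.0 GB/s per direction
           "PCIe v2 x4": 4 * 5.0e9 / 10}   # 5.0 GT/s/lane, 8b/10b -> ~2.0 GB/s per direction

for name, bus in pcie_x4.items():
    verdict = "enough" if bus >= array_rate else "too tight"
    print(f"{name}: bus ~{bus/1e9:.1f} GB/s vs array ~{array_rate/1e9:.1f} GB/s -> {verdict}")
```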
Best Answer
A VelociRaptor is a 1 TB, 2.5-inch drive inside a massive heatsink meant to cool it down. In essence, it's an 'overclocked' 2.5-inch drive, and you end up with the worst of all worlds: it's not as fast at random reads/writes as an SSD in many cases, and it doesn't match the storage density of a 3.5-inch drive (which goes up to 3-4 TB on consumer drives, with 6 TB and bigger enterprise drives available).
An SSD would run cooler, have better random-access speeds, and probably better overall performance; the equivalent-capacity SSD, while costlier, is likely to be a higher-end drive, and SSDs generally get faster as they get bigger.
A normal HDD would also run cooler, have better storage density (the same 1 TB fits easily into a 2.5-inch slot), and a lower cost per GB. You would also have the option of running several of them as a RAID array to make up for the performance deficiencies.
The comments also indicate that these drives are loud in general. SSDs have no moving parts (so they are silent in normal operation), and my 7200 RPM drives seem quiet enough. It's something worth considering when building a system for personal use.
Taking all this into account, with a sensible planned upgrade path, and with endurance tests demolishing the myth that SSDs die early, I wouldn't think so. The thinking enthusiast would use an SSD for boot, OS and software, and a regular spinning hard drive for bulk storage, rather than pick something that tries to do everything but doesn't do any of it quite as well, or as cheaply.
As an aside, in many cases, 10K RPM enterprise drives are getting replaced by SSDs, especially for things like databases.