Ubuntu – Improving mdadm RAID-6 write speed


I have an mdadm RAID-6 in my home server made of 5×1 TB WD Green HDDs.
Read speed is more than enough – 268 MB/s in dd.
But write speed is just 37.1 MB/s.
(Both tested via dd on a 48 GB file; RAM size is 1 GB, and the block size used in testing was 8 kB.)
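For reference, the benchmark described above can be reproduced with something like the following (a sketch – TARGET and COUNT here are small placeholders; for the real test, point TARGET at a file on the array and use a count large enough that the file far exceeds RAM, e.g. 6M blocks of 8k ≈ 48 GB):

```shell
TARGET=/tmp/ddtest.bin   # placeholder – use a file on the RAID mount
BS=8k
COUNT=1024               # placeholder – 6M blocks of 8k ≈ 48 GB

# Sequential write; conv=fdatasync flushes the data to disk before dd
# prints its throughput figure, so the page cache can't inflate it.
dd if=/dev/zero of="$TARGET" bs="$BS" count="$COUNT" conv=fdatasync

# Sequential read of the same file back.
dd if="$TARGET" of=/dev/null bs="$BS"
```

Before the read pass, drop the page cache as root (echo 3 > /proc/sys/vm/drop_caches) so the reads actually hit the disks rather than RAM.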

Could you please suggest why the write speed is so low, and whether there are any ways to improve it?
CPU usage during writing is just 25% (i.e. half of one core of an Opteron 165).
There is no business-critical data there, and the server is UPS-backed.

My /proc/mdstat is:

Personalities : [raid6] [raid5] [raid4]
md0 : active raid6 sda1[0] sdd1[4] sde1[3] sdf1[2] sdb1[1]
      2929683456 blocks super 1.2 level 6, 1024k chunk, algorithm 2 [5/5] [UUUUU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>

Any suggestions?

Mount options like writeback, barrier and nobh didn't help. dd block sizes of 1M and 8M didn't change anything either. It looks like mdadm physically reads sectors to calculate parity even when it doesn't need to… Is that correct?

Update: the speed degradation after I altered the stripe cache turned out to be because one HDD probably failed during testing, nice 😀

Resolved: after increasing the stripe cache and switching to an external bitmap, my speeds are 160 MB/s writes and 260 MB/s reads. 😀
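For anyone landing here later, the two changes described above look roughly like this (a sketch, assuming the array is /dev/md0 and that /root/md0-bitmap sits on a disk outside the array – both are assumptions, adjust to your setup):

```shell
# stripe_cache_size is counted in pages (4 KiB each) per member device,
# so the RAM cost of a given value is: pages * 4 KiB * number_of_devices.
SIZE=8192
DEVICES=5
echo "stripe cache RAM: $((SIZE * 4 * DEVICES / 1024)) MiB"

# Set APPLY=1 to actually touch the array (needs root).
if [ "${APPLY:-0}" = 1 ]; then
    # 1) enlarge the stripe cache
    echo "$SIZE" > /sys/block/md0/md/stripe_cache_size

    # 2) move the write-intent bitmap off the array: drop the internal
    #    one, then re-create it as an external file on a non-array disk
    mdadm --grow /dev/md0 --bitmap=none
    mdadm --grow /dev/md0 --bitmap=/root/md0-bitmap
fi
```

Note that the sysfs setting does not survive a reboot, so re-apply it from rc.local or a udev rule, and that the external bitmap file must live on a filesystem that is not on the array itself.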

Best Answer

Have you tried tuning /sys/block/mdX/md/stripe_cache_size?
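A minimal sketch of checking and raising it (md0 stands in for your mdX; the value is in pages of 4 KiB per member device, so the default of 256 costs only 1 MiB per device):

```shell
MD=md0   # substitute your array name
FILE="/sys/block/$MD/md/stripe_cache_size"
if [ -r "$FILE" ]; then cat "$FILE"; fi            # default is 256
if [ -w "$FILE" ]; then echo 8192 > "$FILE"; fi    # needs root

# Units sanity check: 256 pages * 4 KiB = 1 MiB per device.
PER_DEV_MIB=$((256 * 4 / 1024))
echo "default cache per device: ${PER_DEV_MIB} MiB"
```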

According to this forum post (in Norwegian, sorry), "tuning this parameter becomes more essential the more disks and the faster the system you have":

On my system I get the best performance using the value 8192. If I use the default value of 256, write performance drops by 66%.

Quoting his speed for comparison:

Disks: 8×Seagate 2 TB LP (5900 RPM) in mdadm RAID-6 (-n 512) (stripe_cache_size=8192).

CPU: Intel X3430 (4x2.4GHz, 8GB DDR3 ECC RAM)

Speed: 387 MB/s sequential write, 704 MB/s sequential read, 669 random seeks per sec.

My home server has almost the same disks as yours, but uses RAID 5:

Disks: 4×1.5 TB WD Green in RAID 5 (stripe_cache_size=256 – the default)

CPU: Intel i7 920 (2.66 GHz, 6 GB RAM)

Speed: 60 MB/s sequential write, 138 MB/s sequential read (according to Bonnie++)

So it looks like sequential write performance is around 50% of read performance.

As for what performance to expect, the Linux RAID Wiki says this about RAID-5:

Reads are almost similar to RAID-0 reads, writes can be either rather expensive (requiring read-in prior to write, in order to be able to calculate the correct parity information, such as in database operations), or similar to RAID-1 writes (when larger sequential writes are performed, and parity can be calculated directly from the other blocks to be written).
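The "read-in prior to write" case is the classic read-modify-write path: for a small write, md reads the old data block and the old parity, XORs the old data out and the new data in, and writes both back – two reads plus two writes for one logical write. A toy sketch of that arithmetic (the byte values are made up):

```shell
# RAID-5 parity is a plain XOR across the data blocks in a stripe.
old_data=$((0x5A)); new_data=$((0x3C)); old_parity=$((0xF0))

# Read-modify-write: new_parity = old_parity XOR old_data XOR new_data
new_parity=$(( old_parity ^ old_data ^ new_data ))
printf 'new parity: 0x%02X\n' "$new_parity"

# Cross-check: recompute parity from scratch using the untouched block
# implied by the old parity – both methods must agree.
other_block=$(( old_parity ^ old_data ))
from_scratch=$(( new_data ^ other_block ))
[ "$from_scratch" -eq "$new_parity" ] && echo "parity methods agree"
```

Large sequential writes avoid the reads entirely because the whole stripe is in memory; RAID-6 pays a further penalty on top of this because its second syndrome (Q) is computed over GF(2⁸) rather than as a plain XOR.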

And about RAID 6:

Read performance is similar to RAID-5, but write performance is worse.
