The QNAP TS-410 had a failed disk the other day and went into degraded mode, so I bought a new disk. The former disks were all Seagate, but this time I bought a Western Digital that is approved by QNAP in its database of supported drives; it's the same size, so it shouldn't matter, right? So now I have 3 Seagate drives and 1 WD. I hot-swapped the old disk for the new one and the system log said
[RAID5 Disk Volume: Drive 1 2 3 4] Start rebuilding
but I can't see any indication in the web interface that the rebuild is happening; there is no progress bar anywhere, although the light on the front of the unit is blinking red/green, indicating it is rebuilding. Is this normal, or is something strange going on? Is there some way I can check via the command line over SSH that the rebuild is actually happening?
Also, under Control Panel -> Storage Manager -> Volume Management (in the QNAP web interface, not the Windows Control Panel), the new drive shows "Disk read/write error" under Status, but the SMART information says it's good.
I have been fiddling with this for some time now. I tried a scan on the new drive, which took about a day to finish, and after that the status changed to Ready, but there was still no indication that the RAID rebuild was happening (apart from that log entry). I restarted the QNAP, the new drive got the "Disk read/write error" status again, and the log once more said it was rebuilding the RAID.
The top bar of the web interface has a button showing background processes, but nothing is listed there, so the rebuild is apparently not running as a background process.
If I go to Storage Manager -> RAID Management and select the RAID, the Action button is grayed out, so I can't perform any actions on it. I guess this is because the array is in degraded mode and mounted read-only.
So I am confused: is the RAID being rebuilt or isn't it? And if it's not being rebuilt, is there some way I can force the rebuild? Or is that a bad idea?
This QNAP has firmware 4.1.1 Build 20140927, if that matters.
cat /proc/mdstat
gives me the following output:
Personalities : [raid1] [linear] [raid0] [raid10] [raid6] [raid5] [raid4]
md0 : active (read-only) raid5 sda3[0] sdc3[2] sdb3[1]
      5855836800 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]

md4 : active raid1 sdd2[2](F) sdc2[3](S) sdb2[1] sda2[0]
      530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdd4[3] sdc4[2] sdb4[1]
      458880 blocks [4/4] [UUUU]
      bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      530048 blocks [4/4] [UUUU]
      bitmap: 4/65 pages [16KB], 4KB chunk

unused devices: <none>
As can be seen for md0, the last drive is not in the RAID array (UUU_; the last underscore would be a U if the drive were in the RAID, as far as I understand).
Best Answer
mdadm --misc --detail /dev/md0
will show you the status and the progress of the rebuild.
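For illustration only (this is not output from this particular unit; the percentage, device numbers, and timestamps are placeholders), a degraded RAID 5 array that is actively rebuilding onto /dev/sdd3 would report something along these lines:

/dev/md0:
        Version : 0.90
     Raid Level : raid5
     Array Size : 5855836800 (5584.56 GiB 5996.38 GB)
   Raid Devices : 4
  Total Devices : 4
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1
 Rebuild Status : 12% complete

    Number   Major   Minor   RaidDevice State
       0       8        3        0      active sync   /dev/sda3
       1       8       19        1      active sync   /dev/sdb3
       2       8       35        2      active sync   /dev/sdc3
       4       8       51        3      spare rebuilding   /dev/sdd3

The key line is Rebuild Status, which only appears while a recovery is in progress; the device table also marks the incoming disk as "spare rebuilding". While a rebuild runs, cat /proc/mdstat likewise prints a recovery progress line (with a percentage and ETA) under the affected array. Your pasted mdstat output shows md0 with only sda3, sdb3, and sdc3, and no recovery line, which suggests no rebuild is actually happening on the data array. If the new disk checks out, the usual mdadm way to kick off a rebuild is to add its partition to the array, e.g. mdadm /dev/md0 --add /dev/sdd3 (assuming sdd3 is the data partition, matching the sdX3 pattern of the other members); note, though, that md0 is currently assembled read-only, and as far as I know a recovery stays pending until the array is made writable again.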