QNAP TS-410: is it rebuilding the RAID or not?


The QNAP TS-410 had a failed disk the other day and went into degraded mode, so I bought a new disk. The old disks were Seagate, but this time I bought a Western Digital, which is approved by QNAP in its database of supported drives; it's the same size, so it shouldn't matter, right? So now I have three Seagates and one WD. I hot-swapped the old disk for the new one and the system log said

[RAID5 Disk Volume: Drive 1 2 3 4] Start rebuilding

but I can't see any indication in the web interface that the rebuild is actually happening; there is no progress bar anywhere. The light on the front of the unit is blinking red/green, though, which indicates a rebuild. Is this normal, or is something strange going on? Is there some way I can check from the command line over SSH that the rebuild is happening?

Also, under Control Panel -> Storage Manager -> Volume Management (in the QNAP web interface, not the Windows control panel), the new drive shows "Disk read/write error" under Status, but the SMART information says it's good.

I have been fiddling with this for some time now. I tried running a scan on the new drive, which took about a day to finish; after that the status changed to Ready, but there was still no indication that the RAID rebuild was happening (except for that log entry). I restarted the QNAP, the new drive got the "Disk read/write error" status again, and the log once again said it was rebuilding the RAID.

The top bar of the web interface has a button showing background processes, but nothing is listed there, so the rebuild is apparently not running as a background process.

If I go to Storage Manager -> RAID Management and select the RAID, the Action button is grayed out, so I can't perform any actions on it. I guess this is because the RAID is in degraded mode and mounted read-only.

So I am confused: is the RAID being rebuilt or isn't it? And if it's not, is there some way I can force a rebuild? Or is that a bad idea?

This QNAP has firmware 4.1.1 Build 20140927 if that matters.

cat /proc/mdstat gives me the following output:

Personalities : [raid1] [linear] [raid0] [raid10] [raid6] [raid5] [raid4]
md0 : active (read-only) raid5 sda3[0] sdc3[2] sdb3[1]
             5855836800 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]

md4 : active raid1 sdd2[2](F) sdc2[3](S) sdb2[1] sda2[0]
             530048 blocks [2/2] [UU]

md13 : active raid1 sda4[0] sdd4[3] sdc4[2] sdb4[1]
             458880 blocks [4/4] [UUUU]
             bitmap: 0/57 pages [0KB], 4KB chunk

md9 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
             530048 blocks [4/4] [UUUU]
             bitmap: 4/65 pages [16KB], 4KB chunk

unused devices: <none>

As can be seen in md0, the last drive is not in the RAID array (the trailing underscore in [UUU_] should be a U if the drive were part of the array, as far as I understand).
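For reference, my understanding is that when md is actually resyncing, /proc/mdstat shows a recovery progress line under the affected array, and that line is completely missing from my output. A small sketch of what I believe it would look like and how to pull out the percentage (the sample text is hypothetical, modeled on my array's geometry):

```shell
# Hypothetical /proc/mdstat excerpt during an active rebuild; the
# "recovery" line is what my real output lacks.
cat <<'EOF' > /tmp/mdstat.sample
md0 : active raid5 sdd3[4] sda3[0] sdc3[2] sdb3[1]
      5855836800 blocks level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
      [==>..................]  recovery = 12.6% (184211904/1463569600) finish=412.5min speed=51693K/sec
EOF
# Extract just the completion percentage from the sample
grep -o 'recovery = [0-9.]*%' /tmp/mdstat.sample
```

On a live system the equivalent would be grepping /proc/mdstat itself, repeatedly (e.g. with watch), to see the percentage move.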

Best Answer

mdadm --misc --detail /dev/md0 will show you the status and the progress of the rebuild.

E.g.

# mdadm --misc --detail /dev/md0
/dev/md0:
        Version : 00.90.03
  Creation Time : Tue Sep 28 21:28:33 2010
     Raid Level : raid5
     Array Size : 4390708800 (4187.31 GiB 4496.09 GB)
  Used Dev Size : 1463569600 (1395.77 GiB 1498.70 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Sat Jan 21 10:26:49 2017
          State : clean, degraded, recovering
 Active Devices : 3
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 1

         Layout : left-symmetric
     Chunk Size : 64K

 Rebuild Status : 55% complete

           UUID : 454eaf79:0744a748:319e242f:5ff1ef4c
         Events : 0.7528612

    Number   Major   Minor   RaidDevice State
       0       8       35        0      active sync   /dev/sdc3
       4       8        3        1      spare rebuilding   /dev/sda3
       2       8       51        2      active sync   /dev/sdd3
       3       8       19        3      active sync   /dev/sdb3
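If mdadm --detail shows no rebuild in progress, one likely culprit on this unit is that md0 is assembled read-only (note the "(read-only)" flag in the asker's /proc/mdstat): md will not recover onto a new member while the array is read-only. A hedged sketch of the usual steps, written as a dry run with echo so nothing destructive happens by accident; /dev/sdd3 is an assumption based on the asker's mdstat output, so verify the device name on your own box first:

```shell
# Dry run: each command is echoed rather than executed. Remove the
# leading "echo" only after confirming device names with
# "cat /proc/mdstat" and "mdadm --misc --detail /dev/md0".
echo mdadm --readwrite /dev/md0          # clear the read-only state so recovery can start
echo mdadm /dev/md0 --remove /dev/sdd3   # remove the member, if it is still listed as faulty
echo mdadm /dev/md0 --add /dev/sdd3      # re-add it; md should then begin rebuilding
echo cat /proc/mdstat                    # watch for the "recovery = x%" progress line
```

After the --add, /proc/mdstat should show the recovery line and mdadm --detail should report "spare rebuilding" for the new member, as in the example above.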