FreeBSD – Replacing a disk when using FreeBSD ZFS zroot (ZFS on partition)

How do I replace a broken disk with a new one when using ZFS on root?

I have a 4-disk RAIDZ2 pool named zroot. ZFS runs on a separate partition on each disk instead of using the whole disk. I didn't find any documentation on how to replace a disk in this situation, or what I found was outdated. The pool was created automatically by the installer.

Camcontrol device list:

% doas camcontrol devlist -v
scbus0 on mpt0 bus 0:
<>                                 at scbus0 target -1 lun ffffffff ()
scbus1 on ahcich0 bus 0:
<>                                 at scbus1 target -1 lun ffffffff ()
scbus2 on ahcich1 bus 0:
<>                                 at scbus2 target -1 lun ffffffff ()
scbus3 on ahcich2 bus 0:
<ST2000DM001-1CH164 CC43>          at scbus3 target 0 lun 0 (pass0,ada0)
<>                                 at scbus3 target -1 lun ffffffff ()
scbus4 on ahcich3 bus 0:
<ST2000DM001-1CH164 CC43>          at scbus4 target 0 lun 0 (pass1,ada1)
<>                                 at scbus4 target -1 lun ffffffff ()
scbus5 on ahcich4 bus 0:
<ST2000DM001-1CH164 CC43>          at scbus5 target 0 lun 0 (pass2,ada2)
<>                                 at scbus5 target -1 lun ffffffff ()
scbus6 on ahcich5 bus 0:
<SAMSUNG HD204UI 1AQ10001>         at scbus6 target 0 lun 0 (pass3,ada3)
<>                                 at scbus6 target -1 lun ffffffff ()
scbus7 on ahciem0 bus 0:
<AHCI SGPIO Enclosure 1.00 0001>   at scbus7 target 0 lun 0 (pass4,ses0)
<>                                 at scbus7 target -1 lun ffffffff ()
scbus-1 on xpt0 bus 0:
<>                                 at scbus-1 target -1 lun ffffffff (xpt0)
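
For reference, matching ada3 to a physical drive bay is easiest by serial number; both of these base-system commands print it:

% doas camcontrol identify ada3
% doas geom disk list ada3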

gpart show of an existing disk:

% gpart show ada0
=>        40  3907029088  ada0  GPT  (1.8T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  3902832640     3  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)

zpool status:

% zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices has been removed by the administrator.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Online the device using 'zpool online' or replace the device with
        'zpool replace'.
  scan: scrub repaired 28K in 0h41m with 0 errors on Thu Sep 27 17:58:02 2018
config:

        NAME                      STATE     READ WRITE CKSUM
        zroot                     DEGRADED     0     0     0
          raidz2-0                DEGRADED     0     0     0
            ada0p3                ONLINE       0     0     0
            ada1p3                ONLINE       0     0     0
            ada2p3                ONLINE       0     0     0
            15120424524672854601  REMOVED      0     0     0  was /dev/ada3p3

errors: No known data errors

Take the removed device offline:

% doas zpool offline zroot 15120424524672854601

I tried copying the first few GiB from ada0 to ada3 with dd, but both zpool attach and zpool replace give the error /dev/ada3p3 is part of active pool 'zroot', and even the force flag doesn't help. I'm guessing the disk UUIDs are colliding.
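
I suspect the stale ZFS labels carried over by the dd copy are what zpool is complaining about; they can be inspected with zdb, which reads the ZFS labels on a device:

% doas zdb -l /dev/ada3p3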

What are the steps to copy/replicate the p1–p3 partition layout (as on ada0–ada2) to the new disk (ada3) and replace the faulted drive? What commands did the automated installer run to create these partitions in the first place?

Best Answer

First: make sure the drive being replaced is offline and that the new drive is not mounted or in use in any way.

Copy the partition table from the old disk ada0 to the new disk ada3:

% doas gpart backup ada0 | doas gpart restore -F ada3

Now ada3 has the same three partitions as ada0:

% doas gpart show ada3
=>        40  3907029088  ada3  GPT  (1.8T)
          40        1024     1  freebsd-boot  (512K)
        1064         984        - free -  (492K)
        2048     4194304     2  freebsd-swap  (2.0G)
     4196352  3902832640     3  freebsd-zfs  (1.8T)
  3907028992         136        - free -  (68K)
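
As for what the installer ran: recreating this layout by hand looks roughly like the following. This is a sketch reconstructed from the gpart output above, not the installer's exact invocation; the guided ZFS install typically also sets partition labels (e.g. gptboot0, zfs0), which are omitted here.

% doas gpart create -s gpt ada3
% doas gpart add -a 4k -s 512k -t freebsd-boot ada3
% doas gpart add -a 1m -s 2g -t freebsd-swap ada3
% doas gpart add -a 1m -t freebsd-zfs ada3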

Remove the old ZFS metadata (note that the target is partition p3):

% doas dd if=/dev/zero of=/dev/ada3p3
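
As written, that dd zeroes the entire 1.8 TB partition, which takes a long time. If the goal is only to get rid of the stale ZFS labels left by the earlier dd copy, clearing the labels should be enough; zpool labelclear is in the base system, and -f forces it on a device that appears to belong to a pool:

% doas zpool labelclear -f /dev/ada3p3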

Replace the drive (again targeting partition p3):

% doas zpool replace -f zroot 15120424524672854601 /dev/ada3p3
Make sure to wait until resilver is done before rebooting.

If you boot from pool 'zroot', you may need to update
boot code on newly attached disk '/dev/ada3p3'.

Assuming you use GPT partitioning and 'da0' is your new boot disk
you may use the following command:

        gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 da0

Run the suggested command to update the boot code on the new disk (substituting ada3 for da0):

% doas gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada3
partcode written to ada3p1
bootcode written to ada3

The partition UUIDs now differ between the disks:

% gpart list ada0 | grep uuid | sort
   rawuuid: 7f842536-bcd0-11e8-b271-00259014958c
   rawuuid: 7fbe27a9-bcd0-11e8-b271-00259014958c
   rawuuid: 7fe24f3e-bcd0-11e8-b271-00259014958c
% gpart list ada3 | grep uuid | sort
   rawuuid: 9c629875-c369-11e8-a2b0-00259014958c
   rawuuid: 9c63d063-c369-11e8-a2b0-00259014958c
   rawuuid: 9c66f76e-c369-11e8-a2b0-00259014958c
% gpart list ada0 | grep efimedia | sort
   efimedia: HD(1,GPT,7f842536-bcd0-11e8-b271-00259014958c,0x28,0x400)
   efimedia: HD(2,GPT,7fbe27a9-bcd0-11e8-b271-00259014958c,0x800,0x400000)
   efimedia: HD(3,GPT,7fe24f3e-bcd0-11e8-b271-00259014958c,0x400800,0xe8a08000)
% gpart list ada3 | grep efimedia | sort
   efimedia: HD(1,GPT,9c629875-c369-11e8-a2b0-00259014958c,0x28,0x400)
   efimedia: HD(2,GPT,9c63d063-c369-11e8-a2b0-00259014958c,0x800,0x400000)
   efimedia: HD(3,GPT,9c66f76e-c369-11e8-a2b0-00259014958c,0x400800,0xe8a08000)

The drive is now resilvering:

% zpool status zroot
  pool: zroot
 state: DEGRADED
status: One or more devices is currently being resilvered.  The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Sat Sep 29 01:01:24 2018
        64.7G scanned out of 76.8G at 162M/s, 0h1m to go
        15.7G resilvered, 84.22% done
config:

        NAME                        STATE     READ WRITE CKSUM
        zroot                       DEGRADED     0     0     0
          raidz2-0                  DEGRADED     0     0     0
            ada0p3                  ONLINE       0     0     0
            ada1p3                  ONLINE       0     0     0
            ada2p3                  ONLINE       0     0     0
            replacing-3             OFFLINE      0     0     0
              15120424524672854601  OFFLINE      0     0     0  was /dev/ada3p3/old
              ada3p3                ONLINE       0     0     0
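
Instead of polling zpool status, newer releases (FreeBSD 13.0 and later, where OpenZFS provides zpool wait) can simply block until the resilver completes:

% doas zpool wait -t resilver zroot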

After the resilver completes:

% zpool status zroot
  pool: zroot
 state: ONLINE
  scan: resilvered 18.6G in 0h7m with 0 errors on Sat Sep 29 01:09:22 2018
config:

        NAME        STATE     READ WRITE CKSUM
        zroot       ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            ada0p3  ONLINE       0     0     0
            ada1p3  ONLINE       0     0     0
            ada2p3  ONLINE       0     0     0
            ada3p3  ONLINE       0     0     0

errors: No known data errors
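
Once the pool is back ONLINE, an optional scrub is a cheap way to confirm the replacement disk reads back cleanly:

% doas zpool scrub zroot
% zpool status zroot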