[ZFS On Linux] What to Do When Resilver Takes Very Long

If you check your zpool health status and notice an error like the following:

sudo zpool status
  pool: myzpool
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: resilvered 49.6G in 0 days 00:11:25 with 0 errors on Fri Jan 10 15:52:05 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        myzpool                                         DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            ata-TOSHIBA_0001                            ONLINE       0     0     0
            ata-TOSHIBA_0002                            ONLINE       0     0     0
            ata-TOSHIBA_0003                            FAULTED     36     0     0  too many errors
        cache
          nvme0n1                                       ONLINE       0     0     0

There are two possibilities: a hardware error or a software error. I perform the following steps to identify which one it is.

  1. Check whether the disk is missing from the system by running fdisk -l. If the disk is available, try to clear the ZFS status. If the disk is missing, reboot the system.
  2. If the disk is still missing after rebooting, try replacing the hard drive cable.
  3. Once ZFS sees all the disks, run zpool clear myzpool. This forces ZFS to resilver the pool. If the resilver runs at 100MB/s or above, it was likely a false alarm, and you may stop here.
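The first step can be sketched as a small script. The by-id device path and pool name below are taken from the example status output and are placeholders for your own:

```shell
# Sketch of step 1: is the faulted disk still visible to the OS?
# The device path and pool name are examples - adjust them to your setup.
disk_present() {
    [ -e "$1" ]
}

DISK=/dev/disk/by-id/ata-TOSHIBA_0003
if disk_present "$DISK"; then
    echo "Disk visible - try: sudo zpool clear myzpool"
else
    echo "Disk missing - reboot, then check the cable if it is still missing"
fi
```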

Assuming it is a hardware-related error, you can typically do the following:

  • Replace the SATA / SAS cable
  • Replace the hard drive
  • In the BIOS settings, change the SATA controller mode from IDE to AHCI

If you replace the hard drive, ZFS will need to resilver the pool. If the problem was purely a hardware error, the pool will read/write the data at 100MB/s or more, and depending on the amount of data on the faulty disk, the entire process should take no more than 3 days. Wait until the process finishes. If it completes without errors, you may stop here.
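To keep an eye on the throughput during the resilver, you can pull the rate figure out of zpool status. The exact wording of the scan line varies between ZFS versions, so treat this parsing as a best-effort assumption:

```shell
# Extract the first throughput figure (e.g. "105M/s") from zpool status
# output piped on stdin. The scan-line format differs across ZFS
# versions, so this pattern is an assumption, not a guarantee.
resilver_speed() {
    grep -oE '[0-9.]+[KMG]/s' | head -n 1
}

# On a live pool:
#   sudo zpool status myzpool | resilver_speed
```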

sudo zpool status
  pool: myzpool
 state: ONLINE
  scan: resilvered 215M in 0 days 00:00:04 with 0 errors on Mon Jan 13 18:24:48 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        myzpool                                         ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            ata-TOSHIBA_0001                            ONLINE       0     0     0
            ata-TOSHIBA_0002                            ONLINE       0     0     0
            ata-TOSHIBA_0003                            ONLINE       0     0     0
        cache
          nvme0n1                                       ONLINE       0     0     0

errors: No known data errors

Otherwise, you may be in one or more of the following situations:

  • Your hardware is consumer level, e.g., the motherboard is not server grade, or the hard drive is designed for general purposes rather than 24/7 operation.
  • You have replaced the hard drive, but the resilver process is very slow (e.g., 5-15MB/s); ZFS cannot even give you an estimated finish time.
  • The resilver's estimated end time keeps slipping, and it seems to take forever. For example, suppose ZFS estimates that the entire process will take 10 hours; after 5 hours, it says 9 more hours to go, or once it reaches 99.9%, it starts the entire process over again.
  • When you run dmesg, you see a lot of hardware-related errors, e.g.,
    ata2.00: status: { DRDY }
    ata2.00: failed command: READ FPDMA QUEUED
    ata2.00: cmd 60/78:e0:f0:70:77/00:00:39:00:00/40 tag 28 ncq 61440 in
    sd 1:0:0:0: [sdb] tag#25 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    sd 1:0:0:0: [sdb] tag#25 Sense Key : Illegal Request [current] [descriptor]
    sd 1:0:0:0: [sdb] tag#25 Add. Sense: Unaligned write command
    
  • ZFS shows multiple disks as faulted; however, their state becomes ONLINE after rebooting the system.

If you are in any of these situations, there are multiple things you can do:

If this is an important server and you don’t have any backup data, just wait. There is nothing else you can do.

If you have backup data, try destroying the pool and rebuilding it. The problem is that your Linux or ZFS setup does not like the current configuration; it is a software rather than a hardware issue. By rebuilding the entire zpool, everything starts over, which can save you days, weeks, or even months of waiting. In my case, I had spent 2 months resilvering the data on my secondary backup server. After letting it resilver for 2 months, I decided to rebuild the entire ZFS pool (using exactly the same hardware) and loaded the data from my production server. It took less than a week to fill 50TB of data, and dmesg was clear of error messages.
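The rebuild path looks roughly like the sketch below. The pool name, disk ids, snapshot name, and the "prod" hostname are all examples, and since every command here is destructive, the helper only echoes the commands while DRY_RUN=1:

```shell
# Hedged sketch of the destroy-and-rebuild path. Everything here is
# destructive, so the helper only prints the commands while DRY_RUN=1.
DRY_RUN=1
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

run zpool destroy myzpool
run zpool create myzpool raidz1 \
    ata-TOSHIBA_0001 ata-TOSHIBA_0002 ata-TOSHIBA_0003
# Reload the data from the production server ("prod" and the snapshot
# name are placeholders):
run sh -c 'ssh prod zfs send -R tank@backup | zfs recv -F myzpool'
```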

Of course, sometimes the hard drive really is faulty. We can perform a simple test with the following commands. First, connect it to another server (or a USB enclosure) and run the following (replace sdX with the actual hard drive identifier):

sudo smartctl -a /dev/sdX | grep result

#Bad hard drive
SMART overall-health self-assessment test result: FAILED!

#Good hard drive
SMART overall-health self-assessment test result: PASSED

Next, we perform a more intensive test. This will involve wiping your entire hard drive (writing data to every sector):

nohup sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress > dd.log &

#This may take a few days. You can check the progress this way:
sudo tail dd.log

Once the process is done, run dmesg | grep sdX. If the hard drive is faulty, you will definitely see lots of error messages. In my case, pretty much all of the hard drives gave no errors. What does that mean? It means the ZFS system doesn’t like those hard drives. All I can say is that my ZFS has been up and running (and error free) since rebuilding the entire pool, using exactly the same hard drives and cables.

I have only experienced this kind of issue with ZFS on Linux. ZFS on FreeBSD does not have this problem.

[ZFS On Linux] How to Update the Linux Kernel without Rebooting the System

As of Jan 2020, I manage 65 Linux + ZFS servers. Normally, I prefer to reboot each server after updating its kernel (according to Red Hat, most updates are security fixes). Without ZFS, this is not a big issue, because rebooting a basic Linux server takes about 30 seconds. With ZFS, however, it can take more than 60 seconds if the ZFS dataset is large (it takes time to unload and load the ZFS configuration). So I decided to experiment with a new idea: updating the kernel without rebooting the server. Keep in mind that this is not magic. This method still introduces downtime, but it is much shorter compared to rebooting the server. Based on my experience, it cuts the downtime roughly in half.

Before you try this on a production server, I highly recommend trying it on a test server/VM first. If your server is a VM host, be aware that the VM guests may get shut down after the upgrade. You will need to wait for the system to rebuild the VM modules against the new kernel headers first, then restart the VM guests.

We will use kexec:

sudo yum install kexec-tools -y

Update the kernel, ZFS, and DKMS modules:

sudo yum update -y

Assuming that you are running an older kernel:

uname -r
3.10.0-1062.4.1.el7.x86_64

If you look in /boot/, you will notice that several newer kernels are available:

ls -al /boot/  | grep x86_64
-rw-r--r--   1 root root 150K Oct 18 12:19 config-3.10.0-1062.4.1.el7.x86_64
-rw-r--r--   1 root root 150K Nov 13 18:02 config-3.10.0-1062.4.3.el7.x86_64
-rw-r--r--   1 root root 150K Dec  2 11:37 config-3.10.0-1062.7.1.el7.x86_64
-rw-r--r--   1 root root 150K Dec  6 09:53 config-3.10.0-1062.9.1.el7.x86_64
-rw-------   1 root root  30M Dec 13 00:03 initramfs-3.10.0-1062.4.1.el7.x86_64.img
-rw-------   1 root root  13M Oct 22 15:41 initramfs-3.10.0-1062.4.1.el7.x86_64kdump.img
-rw-------   1 root root  30M Nov 16 00:07 initramfs-3.10.0-1062.4.3.el7.x86_64.img
-rw-------   1 root root  30M Dec  4 00:10 initramfs-3.10.0-1062.7.1.el7.x86_64.img
-rw-------   1 root root  30M Dec  7 00:14 initramfs-3.10.0-1062.9.1.el7.x86_64.img
-rw-r--r--   1 root root 312K Oct 18 12:19 symvers-3.10.0-1062.4.1.el7.x86_64.gz
-rw-r--r--   1 root root 312K Nov 13 18:03 symvers-3.10.0-1062.4.3.el7.x86_64.gz
-rw-r--r--   1 root root 312K Dec  2 11:37 symvers-3.10.0-1062.7.1.el7.x86_64.gz
-rw-r--r--   1 root root 312K Dec  6 09:53 symvers-3.10.0-1062.9.1.el7.x86_64.gz
-rw-------   1 root root 3.5M Oct 18 12:19 System.map-3.10.0-1062.4.1.el7.x86_64
-rw-------   1 root root 3.5M Nov 13 18:02 System.map-3.10.0-1062.4.3.el7.x86_64
-rw-------   1 root root 3.5M Dec  2 11:37 System.map-3.10.0-1062.7.1.el7.x86_64
-rw-------   1 root root 3.5M Dec  6 09:53 System.map-3.10.0-1062.9.1.el7.x86_64
-rwxr-xr-x   1 root root 6.5M Oct 18 12:19 vmlinuz-3.10.0-1062.4.1.el7.x86_64
-rw-r--r--   1 root root  171 Oct 18 12:19 .vmlinuz-3.10.0-1062.4.1.el7.x86_64.hmac
-rwxr-xr-x   1 root root 6.5M Nov 13 18:02 vmlinuz-3.10.0-1062.4.3.el7.x86_64
-rw-r--r--   1 root root  171 Nov 13 18:02 .vmlinuz-3.10.0-1062.4.3.el7.x86_64.hmac
-rwxr-xr-x   1 root root 6.5M Dec  2 11:37 vmlinuz-3.10.0-1062.7.1.el7.x86_64
-rw-r--r--   1 root root  171 Dec  2 11:37 .vmlinuz-3.10.0-1062.7.1.el7.x86_64.hmac
-rwxr-xr-x   1 root root 6.5M Dec  6 09:53 vmlinuz-3.10.0-1062.9.1.el7.x86_64
-rw-r--r--   1 root root  171 Dec  6 09:53 .vmlinuz-3.10.0-1062.9.1.el7.x86_64.hmac

Pick the newest one. In other words, we will do the following:

From: 3.10.0-1062.4.1.el7.x86_64
To: 3.10.0-1062.9.1.el7.x86_64
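If you do not want to eyeball the listing, a version sort can pick the newest kernel image for you (this assumes GNU sort's -V flag is available, which it is on CentOS/RHEL 7):

```shell
# Print the newest kernel image from a list piped on stdin,
# using GNU version sort so 1062.9.1 ranks above 1062.41.1-style names.
latest_kernel() {
    sort -V | tail -n 1
}

# Usage:
#   ls /boot/vmlinuz-* | latest_kernel
```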

Before we begin, we want to make sure that all of the ZFS / dkms modules have been installed. Make sure that the latest one (3.10.0-1062.9.1.el7) is available:

sudo dkms status
zfs, 0.8.2, 3.10.0-1062.4.1.el7.x86_64, x86_64: installed
zfs, 0.8.2, 3.10.0-1062.4.3.el7.x86_64, x86_64: installed
zfs, 0.8.2, 3.10.0-1062.7.1.el7.x86_64, x86_64: installed
zfs, 0.8.2, 3.10.0-1062.9.1.el7.x86_64, x86_64: installed

Keep in mind that my current system is still running the old kernel (3.10.0-1062.4.1.el7.x86_64):

uname -r
3.10.0-1062.4.1.el7.x86_64

modinfo zfs | grep version
version:        0.8.2-1
rhelversion:    7.7
srcversion:     29C160FF878154256C93164
vermagic:       3.10.0-1062.4.1.el7.x86_64 SMP mod_unload modversions

Now, we will use kexec to load the new kernel. Replace the kernel version with the latest one on your system.

sudo kexec -u
sudo kexec -l /boot/vmlinuz-3.10.0-1062.9.1.el7.x86_64 --initrd=/boot/initramfs-3.10.0-1062.9.1.el7.x86_64.img  --reuse-cmdline
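Before triggering the switch, you can confirm that a kexec image is actually staged; the kernel exposes this flag at /sys/kernel/kexec_loaded (1 = staged, 0 = not):

```shell
# Returns success if a kexec kernel is staged. The sysfs path can be
# overridden with an argument for testing; by default it reads the
# real flag.
kexec_staged() {
    [ "$(cat "${1:-/sys/kernel/kexec_loaded}" 2>/dev/null)" = "1" ]
}

# Usage:
#   kexec_staged && echo "kernel staged - safe to run: sudo systemctl kexec"
```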

Running the following command will introduce downtime. Based on my experience, it should be no longer than 30 seconds. However, I recommend testing it on a non-production server first.

sudo systemctl kexec

During the update, your remote session may be dropped. After waiting 15-30 seconds, try to connect to the server again.
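A small retry loop saves you from mashing the reconnect button; "myserver" below is a placeholder hostname:

```shell
# Repeat a command until it succeeds, pausing between attempts.
wait_for() {
    until "$@"; do
        sleep 5
    done
}

# Usage after "sudo systemctl kexec":
#   wait_for ssh -o ConnectTimeout=5 myserver uname -r
```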

Verify the kernel has been updated:

uname -r
3.10.0-1062.9.1.el7.x86_64

modinfo zfs | grep version
version:        0.8.2-1
rhelversion:    7.7
srcversion:     29C160FF878154256C93164
vermagic:       3.10.0-1062.9.1.el7.x86_64 SMP mod_unload modversions

Clean up the old kernels:

sudo package-cleanup --oldkernels --count=1 -y; 
sudo dkms remove zfs/0.8.2 -k 3.10.0-1062.4.1.el7.x86_64;
sudo dkms remove zfs/0.8.2 -k 3.10.0-1062.4.3.el7.x86_64;
sudo dkms remove zfs/0.8.2 -k 3.10.0-1062.7.1.el7.x86_64;
sudo dkms status;

Now your system is good to go.
