[ZFS] How to repair a ZFS pool if one device was damaged

Today, I accidentally dd’ed a disk that was part of an active ZFS pool on my test server. I dd’ed the first and the last few sectors of the disk. Technically I didn’t lose any data because my ZFS configuration was RAIDZ. However, once I rebooted the computer, ZFS complained:

#This is what I did:
sudo dd if=/dev/zero of=/dev/sda bs=512 count=10
sudo dd if=/dev/zero of=/dev/sda bs=512 seek=$(( $(blockdev --getsz /dev/sda) - 4096 )) count=1M
sudo zpool status
  pool: storage
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: resilvered 2.40T in 1 days 00:16:34 with 0 errors on Fri Nov 13 20:05:53 2020
config:

        NAME                                 STATE     READ WRITE CKSUM
        storage                              DEGRADED     0     0     0
          raidz1-0                           DEGRADED     0     0     0
            ata-ST4000DM000-1F2168_S30076XX  ONLINE       0     0     0
            ata-ST4000DX001-1CE168_Z3019CXX  ONLINE       0     0     0
            ata-ST4000DM000-2AE166_WDH0S9YY  ONLINE       0     0     0
            ata-ST4000DM000-2AE166_WDH0SXZZ  ONLINE       0     0     0
            ata-ST4000DM000-2AE166_WDH0SXDD  ONLINE       0     0     0
            412403026512446213               UNAVAIL      0     0     0  was /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1Z3RR74-part1

So I checked the problematic device, and I saw the problem:

ls /dev/disk/by-id/

#This is a normal disk:
lrwxrwxrwx 1 root root  10 Nov 13 20:58 ata-ST4000DX001-1CE168_Z3019CXX-part1 -> ../../sdd1
lrwxrwxrwx 1 root root  10 Nov 13 20:58 ata-ST4000DX001-1CE168_Z3019CXX-part9 -> ../../sdd9


#This is the problematic disk, part1 and part9 are missing.
lrwxrwxrwx 1 root root   9 Nov 13 20:58 ata-ST4000NM0033-9ZM170_Z1Z3RR74 -> ../../sdf

It is pretty easy to fix this problem. All you need to do is take the device offline and then bring it back online.

#First, offline the problematic device:
sudo zpool offline storage 412403026512446213
sudo zpool status
  pool: storage
 state: DEGRADED
status: One or more devices could not be used because the label is missing or
        invalid.  Sufficient replicas exist for the pool to continue
        functioning in a degraded state.
action: Replace the device using 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-4J
  scan: resilvered 2.40T in 1 days 00:16:34 with 0 errors on Fri Nov 13 20:05:53 2020
config:

        NAME                                 STATE     READ WRITE CKSUM
        storage                              DEGRADED     0     0     0
          raidz1-0                           DEGRADED     0     0     0
            ata-ST4000DM000-1F2168_S30076XX  ONLINE       0     0     0
            ata-ST4000DX001-1CE168_Z3019CXX  ONLINE       0     0     0
            ata-ST4000DM000-2AE166_WDH0S9YY  ONLINE       0     0     0
            ata-ST4000DM000-2AE166_WDH0SXZZ  ONLINE       0     0     0
            ata-ST4000DM000-2AE166_WDH0SXDD  ONLINE       0     0     0
            412403026512446213               OFFLINE      0     0     0
#Then bring back the device:
sudo zpool online storage ata-ST4000NM0033-9ZM170_Z1Z3RR74

#Scrub the pool to verify everything (bringing the device back online already triggers a small resilver)
sudo zpool scrub storage

sudo zpool status
  pool: storage
 state: ONLINE
  scan: resilvered 36K in 0 days 00:00:01 with 0 errors on Fri Nov 13 21:03:01 2020
config:

        NAME                                  STATE     READ WRITE CKSUM
        storage                               ONLINE       0     0     0
          raidz1-0                            ONLINE       0     0     0
            ata-ST4000DM000-1F2168_S30076XX   ONLINE       0     0     0
            ata-ST4000DX001-1CE168_Z3019CXX   ONLINE       0     0     0
            ata-ST4000DM000-2AE166_WDH0S9YY   ONLINE       0     0     0
            ata-ST4000DM000-2AE166_WDH0SXZZ   ONLINE       0     0     0
            ata-ST4000DM000-2AE166_WDH0SXDD   ONLINE       0     0     0
            ata-ST4000NM0033-9ZM170_Z1Z3RR74  ONLINE       0     0     0

errors: No known data errors

That’s it.


[VirtualBox] CentOS 7: NS_ERROR_FAILURE

After I rebooted one of my VirtualBox host servers today, I was unable to start the VirtualBox guests. The error was a popular one: NS_ERROR_FAILURE.

The problem was caused by a kernel mismatch. All you need to do is rebuild the VirtualBox kernel modules to match your system kernel. In my case, I had the following:

#This is my VirtualBox version
6.0.16


#This is my Linux kernel:
uname -a
3.10.0-1062.12.1.el7.x86_64


#This is my VirtualBox module version:
modinfo vboxdrv
filename:       /lib/modules/3.10.0-514.10.2.el7.x86_64/weak-updates/vboxdrv.ko.xz
version:        5.0.40 r115130 (0x00240000)
license:        GPL
description:    Oracle VM VirtualBox Support Driver
author:         Oracle Corporation
retpoline:      Y
rhelversion:    7.6
srcversion:     3AFDBBC6FDA2CE8CF253D33
depends:
vermagic:       3.10.0-957.1.3.el7.x86_64 SMP mod_unload modversions
parm:           force_async_tsc:force the asynchronous TSC mode (int)

As you can see, the VirtualBox module is loaded from the wrong kernel location. Also, the module version is 5.0.40 instead of 6.0.16. In my case, all I needed to do was rebuild the VirtualBox modules to make them compatible with the running Linux kernel. To do that, you will need to do the following:

  1. Remove all of the old Linux kernels
  2. Remove the old VirtualBox kernel modules
  3. Uninstall VirtualBox
  4. Reboot
  5. Install VirtualBox
#Remove all of the old kernels:
sudo package-cleanup --oldkernels --count=1 -y; 


#Remove the module directories that do not belong to your current kernel (compare with uname -r):
cd /lib/modules/
ls
#e.g., sudo rm -rf 3.10.0-514.10.2.el7.x86_64


#Uninstall the Virtual Box
sudo yum remove VirtualBox-6.0


#Reboot
sudo reboot


#Install the Virtual Box
sudo yum install -y VirtualBox-6.0


#Install the Extension Pack (The version number may be different in your case)
wget --no-check-certificate https://download.virtualbox.org/virtualbox/6.0.16/Oracle_VM_VirtualBox_Extension_Pack-6.0.16.vbox-extpack
sudo VBoxManage extpack install --replace Oracle_VM_VirtualBox_Extension_Pack-6.0.16.vbox-extpack


#Start VirtualBox again (the kernel module is typically loaded by the vboxdrv service; the unit name may differ on your system)
sudo systemctl start vboxdrv

That’s it! Hope it helps!


[ZFS On Linux Trouble] This pool uses the following feature(s) not supported by this system…All unsupported features are only required for writing to the pool, zpool create: invalid argument for this pool operation

When I rebooted my computer and loaded my ZFS pool today, I got this error message:

#sudo zpool import -a
This pool uses the following feature(s) not supported by this system:
        org.zfsonlinux:project_quota (space/object accounting based on project ID.)
        com.delphix:spacemap_v2 (Space maps representing large segments are more efficient.)
All unsupported features are only required for writing to the pool.
The pool can be imported using '-o readonly=on'.
cannot import 'my_zpool': unsupported version or feature

On another machine, I also saw something similar when I tried to create a new pool:

zpool create: invalid argument for this pool operation

This kind of error usually happens when you move a ZFS pool from one system to another. For example, if your ZFS pool was created under ZFS v10 and you move it to a system that can only handle ZFS v9, this error message will show up. Obviously, that is not the case here (and probably not in yours either). My system showed me this message after rebooting the server; it had nothing to do with moving the ZFS pool from one system to another. In short, the message is misleading, but it gave me some idea of what was going wrong.

Long story short: it is a known bug in ZFS on Linux. This kind of problem can happen every time your Linux kernel is updated. If you want to avoid it completely, you can only do two things: never update your system kernel, or never reboot your server (so that the new kernel will not be loaded). If you can’t do either of these, then ZFS on Linux is not for you.

If you need to access your data now, you can mount the pool read-only, although this is not a long-term solution:

sudo zpool import my_zpool -o readonly=on

Another way is to reboot your server into the older working kernel, assuming the old kernel is still available on your system.
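For example, on CentOS 7 you can list the GRUB menu entries and temporarily make the previous kernel the default (a rough sketch; the entry index and config path on your system may differ):

#List the boot entries with their index numbers (use /etc/grub2-efi.cfg on UEFI systems):
sudo awk -F\' '$1=="menuentry " {print i++ " : " $2}' /etc/grub2.cfg

#Set the previous kernel (e.g., index 1) as the default and reboot:
sudo grub2-set-default 1
sudo reboot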

So here is the reason why your system could not open your ZFS pool:

  1. You are running Linux kernel ver A and ZFS on Linux ver X, and your system is happy.
  2. A new kernel is released (e.g., ver B). Your system downloads it, and the kernel sits under /boot.
  3. Later, a new ZFS on Linux release (e.g., ver Y) becomes available. In theory, when ZFS on Linux is upgraded, it is supposed to compile the DKMS code against each kernel on the system. In other words, both your current kernel (ver A) and the pending kernel (ver B) should know how to use ZFS on Linux (both ver X and ver Y). Notice that I said “in theory”, and you probably know that things are not ideal in reality.
  4. So when your system boots into the new kernel, for some reason the new kernel does not have the skill (ZFS on Linux ver Y) to open your ZFS pool, and therefore you see that error message.

Here is how to solve the problem. Reinstalling the ZFS and DKMS packages alone is not going to fix it; you will need to rebuild the DKMS modules against your new kernel. First, reboot your computer into the latest kernel. Here are my versions; yours may be different.

Old Kernel: 3.10.0-1062.9.1.el7.x86_64
New Kernel: 3.10.0-1062.12.1.el7.x86_64

Old DKMS ZFS Module:   0.8.2
New DKMS ZFS Module:   0.8.3

Remove your old kernels.

sudo package-cleanup --oldkernels --count=1 -y

Check your current DKMS status. It should contain some error:

sudo dkms status
Error! Could not locate dkms.conf file.
File: /var/lib/dkms/zfs/0.8.2/source/dkms.conf does not exist.

Clean up the DKMS folder:

#cd /var/lib/dkms/zfs/

#ls -al
# Move the old library version and the stale kernel symlink somewhere else (e.g., /tmp)
0.8.2  <---- Move this to /tmp
0.8.3  <-- Keep
original_module
kernel-3.10.0-1062.9.1.el7.x86_64-x86_64 -> 0.8.3/3.10.0-1062.9.1.el7.x86_64/x86_64  <-- Move this to /tmp
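
If you prefer explicit commands, the clean-up is roughly the following (the names come from the listing above; yours will almost certainly differ):

cd /var/lib/dkms/zfs/
sudo mv 0.8.2 /tmp/
sudo mv kernel-3.10.0-1062.9.1.el7.x86_64-x86_64 /tmp/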

Remove the old DKMS modules that are associated with old kernels:

sudo dkms remove zfs/0.8.2 --all;

Recompile the new DKMS module with the current kernel:

sudo dkms --force install zfs/0.8.3

Check your DKMS status again, it should be clean:

sudo dkms status
zfs, 0.8.3, 3.10.0-1062.12.1.el7.x86_64, x86_64: installed

If you see any old kernel that is associated with the new DKMS module, remove it, e.g.:

#sudo dkms status
zfs, 0.8.3, 3.10.0-1062.12.1.el7.x86_64, x86_64: installed (original_module exists)
zfs, 0.8.3, 3.10.0-1062.9.1.el7.x86_64, x86_64: built (original_module exists) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!)
sudo dkms remove zfs/0.8.3 -k 3.10.0-1062.9.1.el7.x86_64

Now you may try to import your ZFS pool again. If it doesn't work, try to mount the ZFS pool in read-only mode first, back up your data, rebuild the pool, and restore it from the backup.
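
A rough sketch of that rescue path (the pool name and backup destination are just examples):

#Import the pool read-only and copy the data somewhere safe:
sudo zpool import my_zpool -o readonly=on
sudo rsync -a /my_zpool/ /backup/my_zpool/
#Then destroy and recreate the pool, and restore the data from the backup.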


ZFS – errors: Permanent errors have been detected in the following files:

I got the following messages today when I inspected my ZFS:

errors: Permanent errors have been detected in the following files:

        /mypool/data/file1.dat
        /mypool/data/file2.dat
        /mypool/data/file3.dat
        /mypool/data/file4.dat
        /mypool/data/file5.dat

As usual, the first thing I did was to scrub the entire pool, i.e.,

sudo zpool scrub mypool

Unfortunately, it didn’t work. The error still existed even though there were no checksum errors.

Keep in mind that this tutorial is about removing the error message; it is not about rescuing your files. ZFS is not 100% fail-proof. It is highly recommended to keep a backup copy if the data is important.

Since I didn’t care about the files (I have a backup copy), I decided to delete them manually, and the report ended up like this:

errors: Permanent errors have been detected in the following files:

        mypool/data:<0x1fa3a>
        mypool/data:<0x1fa45>
        mypool/data:<0x1fa46>
        mypool/data:<0x1f354>
        mypool/data:<0x1f664>

That’s because deleting the files only removed the file pointers. Since ZFS no longer has the file names, it reports the object IDs instead. To solve this problem, you will need to go through the following:

First, make sure that you have no checksum errors and the pool is healthy, i.e., all hard drives are online and all error counts are zero.

sudo zpool status mypool

Next, try to scrub the pool again:

sudo zpool scrub mypool

Within a minute, try to stop the process:

sudo zpool scrub -s mypool

Check the status again. The error should be gone:

sudo zpool status -v
  pool: mypool
 state: ONLINE
  scan: scrub canceled on Sun Feb  3 12:18:06 2019
errors: No known data errors

If the error is still present, you may need to scrub the pool again.


CentOS 7 – dracut-initqueue timeout

I received a Christmas gift from the RHEL 7 / CentOS 7 / Linux kernel team today. After my system was updated to the new kernel (3.10.0-957.1.3.el7.x86_64), it gave me a few surprises. As a sysadmin, I don’t want to see any surprises. What I really want is a working system. That’s one of the reasons why I always suggest people use FreeBSD if possible. FreeBSD is a truly rock-solid system.

Long story short: you have a working system. Your system receives a new kernel (e.g., 3.10.0-957.1.3.el7.x86_64). You decide to boot into that new kernel. Your system takes forever to boot. You go down to the server room, turn on the monitor, and see the following messages:

[time stamp]dracut-initqueue [289]: Warning: dracut-initqueue timeout - starting timeout scripts
[time stamp]dracut-initqueue [289]: Warning: dracut-initqueue timeout - starting timeout scripts
...
[time stamp]dracut-initqueue [289]: Warning: Could not boot.
[time stamp]dracut-initqueue [289]: Warning: /dev/disk/by-uuid/XXX does not exist
   Starting Dracut Emergency Shell...
Warning: /dev/disk/by-uuid/XXX does not exist

Generating "/run/initramfs/rdsosreport.txt"

Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.

dracut:/#

In my situation, my system had no problem booting into the older kernel; it just did not like the new one. So I checked my /etc/fstab settings and disabled all of the non-standard devices.

#The following are standard.
UUID=12347890-1234-9512-9518-963852710258       /                       xfs     defaults        0 0
UUID=12347890-1234-4513-7532-963852710258       /boot                   xfs     defaults        0 0
UUID=12347890-1234-9587-8526-963852710258       swap                    swap    defaults        0 0


#The new kernel does not like it. I have to comment it out.
#/storage/data/Dropbox.img                      /Dropbox/               ext4    defaults        0 0

That’s it. I simply mount the image after the system has booted, and the problem is solved. This is done by mounting the image in a script and running it from /etc/crontab (@reboot).
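
A minimal sketch of that approach (the script name is made up; the image path and mount point come from the fstab entry above):

#/etc/crontab
@reboot root /usr/local/sbin/mount-dropbox.sh

#/usr/local/sbin/mount-dropbox.sh
#!/bin/sh
/usr/bin/mount -t ext4 -o loop /storage/data/Dropbox.img /Dropbox/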

The second surprise was even worse. On one of my Linux machines, the OS was installed on a USB drive (because all SATA ports have been used for RAID storage). For some odd reason, the new kernel cannot boot because the system lives on a USB flash drive. I tried installing a fresh CentOS 7 on the same hardware (the installation disk contains the latest kernel) and it gave me the same result. I ended up installing the OS on a SATA hard drive, which is not what I wanted because my computer case does not have any extra space for another hard drive.

Sometimes, I think the Linux kernel / system engineers have way too much spare time.


“ZFS on Linux”: The ZFS modules are not loaded. Try running ‘/sbin/modprobe zfs’ as root to load them.

Last Updated: Oct 10, 2021

This article is based on my experience with CentOS 7. If you are running other Linux distributions, please adjust the commands and package names accordingly (e.g., yum –> apt-get).

As of Oct 3, 2019, I cannot get ZFS on Linux running on CentOS 8.

ZFS on Linux is not a robust solution for getting ZFS up and running in Linux environments. Unlike on FreeBSD, ZFS does not work with the Linux kernel natively. The developers of ZFS on Linux came up with a rather crappy solution: by injecting ZFS into the kernel via DKMS, the Linux kernel learns what ZFS is. It works very well, with a single assumption: the system will never be updated or rebooted after installing ZFS on Linux. So what will happen after you update the system (e.g., the kernel or the ZFS on Linux packages) and the system gets rebooted? There is a good chance that your ZFS module will not be loaded:

Event: You update the kernel first, then ZFS on Linux afterward.
  What will happen after reboot?
    Before Dec 12, 2018: Your system will load the ZFS modules.
    Dec 12, 2018 – Dec 2019: Probably not.
    After Jan 2020: 50/50.
  What do you need to do? Remove the old kernels from the DKMS database. Rebuild the ZFS (and SPL, if running 0.7.x) modules with the new kernel in the DKMS database.

Event: You update ZFS on Linux first, then the kernel afterward.
  What will happen after reboot? If your system boots into the new kernel (which is the default), your system WILL NOT load the ZFS modules.
  What do you need to do? Remove and reinstall the ZFS and DKMS packages. Remove the old kernels from the DKMS database. Rebuild the SPL and ZFS modules with the new kernel in the DKMS database.

Event: You update ZFS on Linux only; the kernel has not been updated.
  What will happen after reboot? Your system will load the ZFS modules.
  What do you need to do? Remove the old kernels from the DKMS database. Rebuild the SPL and ZFS modules with the new kernel in the DKMS database.

Event: You update the kernel only; ZFS on Linux has not been updated.
  What will happen after reboot? Your system will load the ZFS modules.
  What do you need to do? Remove the old kernels from the DKMS database. Rebuild the SPL and ZFS modules with the new kernel in the DKMS database.

There are two steps to rescue your data. We will start with cleaning up the DKMS modules; if that does not work, we will reinstall the ZFS packages. Also, I am assuming that your system has booted into the new kernel. Please keep in mind that ZFS on Linux does not work with Linux kernel v4 (as of Oct 3, 2019, whether via kernel-ml on CentOS 7 or the native v4 kernel on CentOS 8). It only works with v3.

If you need to access your data, the easiest way is to boot to the old working kernel. Once you are ready to clean up the problem, boot to the new kernel and follow my instructions below.

Step 1: Clean up and Reinstall DKMS Modules

Most of the time, ZFS on Linux messes up the DKMS modules after an update. I suggest cleaning up and reinstalling the DKMS modules once again. As of December 12, 2018, ZFS on Linux will remove all of the DKMS modules for no reason.

First, check your DKMS status. You will need to clean up DKMS if it is empty (nothing is installed), orphaned (a module is installed, but it is not attached to any kernel), or duplicated (multiple kernels are installed). If it is clean (a single kernel only), you may skip this step. If you are using ZFS on Linux 0.7.x, your DKMS will contain two modules (zfs and spl). If you are using 0.8.x, your DKMS will contain one module only (zfs).

#dkms status

In general, all you want is a single version of the DKMS module installed, attached to one kernel only. If you see multiple versions of the DKMS modules, or multiple kernels, that’s bad.

#An example of dirty DKMS status (This is bad):
spl, 0.7.12, 3.10.0-862.14.4.el7: installed (original_module exists) (WARNING! Diff between built and installed module!)
spl, 0.7.12, 3.10.0-957.1.3.el7: installed (original_module exists)
zfs, 0.7.12, 3.10.0-862.14.4.el7: installed (original_module exists) (WARNING! Diff between built and installed module!)
zfs, 0.7.12, 3.10.0-957.1.3.el7: installed (original_module exists)

#An example of empty DKMS status (This is bad):
(empty)

#An example of DKMS status without a kernel (This is bad):
zfs, 0.7.12: added
spl, 0.7.12: added

#An example of clean DKMS status (This is good):
spl, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed
zfs, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed 

or 

spl, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed (original_module exists)
zfs, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed (original_module exists)

or 

zfs, 0.8.3, 3.10.0-1127.el7.x86_64, x86_64: installed (original_module exists)

In my example above, my ZFS on Linux is 0.7.12, my old kernel is 3.10.0-862.14.4.el7, my new kernel is 3.10.0-957.1.3.el7. Your version may be different.

If your situation is something like the following:

Error! Could not locate dkms.conf file.
File: /var/lib/dkms/zfs/0.8.2/source/dkms.conf does not exist.

That means you have multiple versions of the DKMS ZFS module installed on your system. In my case, 0.8.3 is the running version, and the old one (0.8.2) is still around. Check the folder (/var/lib/dkms/zfs/) to see if any old libraries need to be removed.

#Currently running: dkms ZFS 0.8.3, kernel 3.10.0-1062.18.1.el7.x86_64

cd /var/lib/dkms/zfs/

#ls -al
total 12K
0.8.2                                                                                 <---- Delete this
0.8.3
kernel-3.10.0-1062.1.2.el7.x86_64-x86_64 -> 0.8.2/3.10.0-1062.1.2.el7.x86_64/x86_64   <---- Delete this
kernel-3.10.0-1062.4.1.el7.x86_64-x86_64 -> 0.8.2/3.10.0-1062.4.1.el7.x86_64/x86_64   <---- Delete this
kernel-3.10.0-1062.4.3.el7.x86_64-x86_64 -> 0.8.2/3.10.0-1062.4.3.el7.x86_64/x86_64   <---- Delete this
kernel-3.10.0-1062.7.1.el7.x86_64-x86_64 -> 0.8.2/3.10.0-1062.7.1.el7.x86_64/x86_64   <---- Delete this
kernel-3.10.0-1062.9.1.el7.x86_64-x86_64 -> 0.8.3/3.10.0-1062.9.1.el7.x86_64/x86_64   <---- Delete this

You may want to remove both ZFS and SPL DKMS modules first, then reinstall them:

#If your version is 0.7.x:
sudo dkms remove zfs/0.7.12 --all; 
sudo dkms remove spl/0.7.12 --all; 


#If your version is 0.8.x:
sudo dkms remove zfs/0.8.3 --all; 

Sometimes, you will need to remove the old kernel manually:

sudo dkms remove zfs/0.7.12 -k 3.10.0-862.14.4.el7.x86_64; 
sudo dkms remove spl/0.7.12 -k 3.10.0-862.14.4.el7.x86_64;

Time to reinstall them:

#Don't forget to use the version that matches your system. In my situation, it was 0.7.12 / 0.8.3

#0.7.x:
sudo dkms --force install spl/0.7.12; 
sudo dkms --force install zfs/0.7.12;

#0.8.x:
sudo dkms --force install zfs/0.8.3;

Run the DKMS status again. You should see the ZFS module (and the SPL module, if you are on 0.7.x) attached to the new kernel:

#If your version is 0.7.x:
spl, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed
zfs, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed

#If your version is 0.8.x:
zfs, 0.8.3, 3.10.0-1127.el7.x86_64, x86_64: installed

Try to load the ZFS module and import your ZFS data:

sudo /sbin/modprobe zfs
sudo zpool import -a

If everything looks good, you can reboot your system and check whether ZFS is loaded automatically. Once everything is okay, remove the old kernels from the system.

sudo package-cleanup --oldkernels --count=1 -y

That’s it, you are good to go.


Step 2: Reinstall ZFS packages

If you have tried the first step and it didn’t work, you may want to reinstall the ZFS packages. Here is a typical error message:

You try to import the ZFS data and the system complains:

#zpool import -a
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.

So you try to load the ZFS module and the system complains again:

#/sbin/modprobe zfs
modprobe: FATAL: Module zfs not found.
or
modprobe: ERROR: could not insert 'zfs': Invalid argument

What you need to do is to erase all the ZFS and related packages:

yum erase zfs zfs-dkms libzfs2 spl spl-dkms libzpool2 -y

Please reboot the system. This step is very important.

reboot

After that, try to install ZFS again.

yum install zfs -y

If the system complains about mismatched dependent packages, try removing the affected packages first and then run the installation again.
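
For example, something along these lines (the package names are placeholders; remove whatever yum actually complains about):

sudo yum remove zfs-dkms libzfs2 -y
sudo yum install zfs -y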

After the installation, try to start the ZFS module:

/sbin/modprobe zfs
zpool import -a

If ZFS is up and running, clean up your DKMS modules as described in Step 1. If it complains again, follow the steps below:

  1. Reboot
  2. Clear the cache of the yum repository and try to update the system again. (sudo yum clean all)
  3. Reboot to the latest kernel
  4. Erase the ZFS and related packages, try it again.

Keep in mind that ZFS on Linux is based on DKMS, a very buggy and unreliable platform. Sometimes when you uninstall and reinstall the packages, don’t expect it to behave the same as a fresh install. Before you send your server to the landfill, try this:

Check the dkms status:

#dkms status
#version 0.7.x
zfs, 0.7.2: added
spl, 0.7.2: added

#version 0.8.x
zfs, 0.8.3: added

If you see this message, it means that the ZFS packages have been installed, but DKMS doesn’t know how to use them. You will need to tell DKMS about them:

#version 0.7.x
dkms --force install zfs/0.7.2
dkms --force install spl/0.7.2

#version 0.8.x
dkms --force install zfs/0.8.3
#Try to start ZFS again.
/sbin/modprobe zfs
zpool import -a

If you have already tried this more than three times without any luck, don’t waste your time. You may want to move the ZFS disks to a different server; the new server should be able to recognize them. For the original server, you can access the ZFS data on the new server via NFS using the original path. That will minimize the impact of the change.
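
A quick sketch of that workaround (the pool name, paths, and network range are assumptions):

#On the new server: import the pool and export it over NFS
sudo zpool import my_zpool
echo '/my_zpool 192.168.1.0/24(rw,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -ra

#On the original server: mount it back at the original path
sudo mount -t nfs newserver:/my_zpool /my_zpool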

Keep in mind that the ZFS version is very important. A server with a newer ZFS version can read ZFS disks created with older ZFS versions, but not the other way around. You can always check the ZFS versions by running the following:

#Get the version of the host:
sudo zfs upgrade -v
sudo zpool upgrade -v


#Get the version of the ZFS disks:
sudo zfs get version
sudo zpool get version

This is pretty much what I need to do on my 60 servers every month. If you are in a similar situation, I guarantee that you will become an expert at fixing this kind of mess after a few months. Good luck!


CentOS/RHEL 7 – No ZFS After Updating the Kernel

Let’s all agree on this fact: ZFS is foreign to Linux. It is not native. You can’t expect ZFS on Linux to run as smoothly as it does on FreeBSD or Solaris. Having used ZFS on Linux since 2013 (and ZFS on FreeBSD since 2009), I’ve noticed that ZFS does not like Linux (well, at least RHEL 7). Here are a few examples:

  • ZFS is not loaded at boot time. You will need to start it manually or load it via cron (see the note after this list). Good luck if you have other services (like Apache, MySQL, NFS, or even users’ home directories) that depend on ZFS.
  • Every single time you update the kernel, ZFS will not work after the reboot without some manual work. What if the system runs updates automatically, and one day there is a power failure that makes your server reboot into a new kernel? Your system will not be able to mount your ZFS volume. If you integrate ZFS with other service applications such as a web server, database, or network drive, oh well, good luck, and I hope you catch the problem fast enough before receiving thousands of emails and calls from your end-users.
  • If you exclude the kernel from updates (/etc/yum.conf), you will eventually run into trouble, because there are tons of other packages that require the latest kernel. In other words, running the command yum update -y will fail. You will need to run yum update --skip-broken, which means you will miss many of the latest packages. Here is an example:
    --> Finished Dependency Resolution
    Error: Package: hypervvssd-0-0.29.20160216git.el7.x86_64 (base)
               Requires: kernel >= 3.10.0-384.el7
               Installed: kernel-3.10.0-327.el7.x86_64 (@anaconda)
                   kernel = 3.10.0-327.el7
               Installed: kernel-3.10.0-327.22.2.el7.x86_64 (@updates)
                   kernel = 3.10.0-327.22.2.el7
    Error: Package: hypervfcopyd-0-0.29.20160216git.el7.x86_64 (base)
               Requires: kernel >= 3.10.0-384.el7
               Installed: kernel-3.10.0-327.el7.x86_64 (@anaconda)
                   kernel = 3.10.0-327.el7
               Installed: kernel-3.10.0-327.22.2.el7.x86_64 (@updates)
                   kernel = 3.10.0-327.22.2.el7
    Error: Package: hypervkvpd-0-0.29.20160216git.el7.x86_64 (base)
               Requires: kernel >= 3.10.0-384.el7
               Installed: kernel-3.10.0-327.el7.x86_64 (@anaconda)
                   kernel = 3.10.0-327.el7
               Installed: kernel-3.10.0-327.22.2.el7.x86_64 (@updates)
                   kernel = 3.10.0-327.22.2.el7
     You could try using --skip-broken to work around the problem
     You could try running: rpm -Va --nofiles --nodigest
    
  • If you are running a stable Linux distribution like RHEL 7, you can load a more recent kernel like 4.x by installing the kernel-ml package. However, don’t expect that ZFS will work with version 4:
    Loading new spl-0.6.5.9 DKMS files...
    Building for 4.11.2-1.el7.elrepo.x86_64
    Building initial module for 4.11.2-1.el7.elrepo.x86_64
    configure: error: unknown
    Error! Bad return status for module build on kernel: 4.11.2-1.el7.elrepo.x86_64 (x86_64)
    Consult /var/lib/dkms/spl/0.6.5.9/build/make.log for more information.
    
    

Running ZFS on Linux is like putting a giraffe in the wild in Alaska. It is just not the right thing to do. Unfortunately, there are so many things that are only available on Linux, so we have to live with it. Just like FUSE (Filesystem in Userspace): many people feel hesitant to run their file systems in userspace instead of at the kernel level, but hey, look how many people are happy with GlusterFS, a distributed file system that lives on FUSE! Personally, I just don’t think it is the right thing to do, especially in an enterprise environment. Running a production file system at the userspace level, seriously?

Anyway, if you are running into trouble after upgrading your Linux kernel (and you almost had a heart attack thinking your data might be lost), you have two choices:

  1. Simply boot into the previous working kernel if you need to get your data back quickly. However, keep in mind that this creates two problems:
    • Since you have already updated the system with the new kernel and new packages, the new packages probably will not work with the old kernel, and that may give you extra headaches.
    • Unless you manually override the kernel boot order (boot loader config), you may run into the same trouble on the next boot.
  2. If you want a more “permanent” fix, you will need to rebuild the DKMS ZFS and SPL modules. See below for the instructions. Keep in mind that you will have the same problem again when the kernel receives a new update.

You’ve tried to load ZFS and realized that it is no longer available:

#sudo zpool import
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.

#sudo /sbin/modprobe zfs
modprobe: FATAL: Module zfs not found.

You may want to check the dkms status. Write down the version number; in my case, it is 0.6.5.9.

#sudo dkms status
spl, 0.6.5.9, 3.10.0-327.28.3.el7.x86_64, x86_64: installed (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!)
spl, 0.6.5.9, 3.10.0-514.2.2.el7.x86_64, x86_64: installed (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!)
zfs, 0.6.5.9, 3.10.0-327.28.3.el7.x86_64, x86_64: installed (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!)

Before running the following commands, make sure that you know what you are doing.


#Make sure that you reboot to the kernel you want to fix.
#Find out what is the current kernel
uname -a
Linux 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

#In my example, it is:
3.10.0-514.2.2.el7.x86_64

#Now, let's get into the fun part. We will remove them and reinstall them.
#Don't forget to match your version; in my case, my version is: 0.6.5.9
sudo dkms remove zfs/0.6.5.9 --all
sudo dkms remove spl/0.6.5.9 --all
sudo dkms --force install spl/0.6.5.9
sudo dkms --force install zfs/0.6.5.9

#or you can run these commands in one line, so that you don't need to wait:
sudo dkms remove zfs/0.6.5.9 --all; sudo dkms remove spl/0.6.5.9 --all; sudo dkms --force install spl/0.6.5.9; sudo dkms --force install zfs/0.6.5.9;

And we will verify the result.

#sudo dkms status
spl, 0.6.5.9, 3.10.0-514.2.2.el7.x86_64, x86_64: installed
zfs, 0.6.5.9, 3.10.0-514.2.2.el7.x86_64, x86_64: installed

Finally we can start the ZFS again.

sudo /sbin/modprobe zfs

Your ZFS pool should be back. You can verify it by rebooting your machine. Notice that Linux may not automatically mount the ZFS volumes; you may want to mount them manually or via a cron job.

Here is how to mount the ZFS volumes manually.

sudo zpool import -a
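
If you want this to happen automatically at boot, a minimal cron-based sketch looks like this (assuming the standard binary paths; a systemd unit would work too):

#/etc/crontab
@reboot root /sbin/modprobe zfs && /sbin/zpool import -a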

You may want to remove all of the old kernels too.

sudo package-cleanup --oldkernels --count=1 -y


[Linux] /dev/sdb1: more filesystems detected. This should not happen,

I had a hard drive sitting around, and I decided to format it so that I could use it in my CentOS Linux box. When I tried to mount it, I got the following error message:

mount: /dev/sdb1: more filesystems detected. This should not happen,
       use -t <type> to explicitly specify the filesystem type or
       use wipefs(8) to clean up the device.

This message simply tells you that there are two or more file system signatures sitting on the partition, and the system does not know which one to use to mount it. We can take a closer look to see what’s going on:

sudo wipefs /dev/sdb1


offset               type
----------------------------------------------------------------
0x2d1b0fa8923        zfs_member   [raid]
                     LABEL: storage
                     UUID:  12661834248699203227

0x951                xfs   [filesystem]
                     UUID:  90295123-2395-7456-8521-9A1EE963ac53

As you can see, we have two file systems here. The easiest way is to wipe out the first few sectors of your disk, i.e.,

sudo dd if=/dev/zero of=/dev/sdb bs=1M count=10
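
Alternatively, as the error message itself suggests, wipefs can remove the stray signatures directly instead of using dd (this erases the filesystem signatures on the partition, so make sure nothing on it is needed):

sudo wipefs -a /dev/sdb1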

And we will re-do everything again, i.e.,

sudo parted /dev/sdb
...
...
sudo mkfs.xfs /dev/sdb1
sudo mount /dev/sdb1 /mnt/

That’s it!


modprobe: ERROR: could not insert ‘zfs’: Required key not available

Today I was trying to install ZFS on a CentOS 7 box. Typically, after rebooting the computer, the ZFS module will be loaded. However, it didn’t happen in my case.

Failed to load ZFS module stack.
Load the module manually by running 'insmod <location>/zfs.ko' as root.

So I tried to turn on the module:

#sudo modprobe zfs
modprobe: ERROR: could not insert 'zfs': Required key not available.

It turns out this is a newer machine with UEFI, and the error has to do with Secure Boot: when Secure Boot is enabled, the kernel refuses to load unsigned modules such as zfs. After I rebooted the machine, went into the UEFI/BIOS menu, and turned off the Secure Boot feature, everything was working again.
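
If you want to confirm this before rebooting, you can check the Secure Boot state from within Linux (assuming the mokutil package is installed):

mokutil --sb-state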

Have fun with ZFS.


Dropbox on FreeBSD

I put my personal websites on a FreeBSD server. One of my websites is a photo album, which reads its content from a Dropbox folder. That Dropbox is primarily used on a Mac, iPhone, and iPad. I wanted to explore the possibilities of setting up Dropbox on FreeBSD. Since Dropbox doesn’t support FreeBSD officially, I need to use 3rd-party tools, most of which are based on the Dropbox developer API.

So I tried several 3rd-party tools and, as you might expect, none of them worked. The primary problem is synchronization, i.e., if my wife adds or deletes a photo in the Dropbox, I expect the Dropbox folder on FreeBSD to be updated as well. Another problem is speed. It looks like the Dropbox API is not as fast as the native application. On the same network, it took a few hours to download the content (around 1GB of JPEG files) from Dropbox to FreeBSD, versus 10 minutes on a Mac/Windows/Linux machine using the native application.

So I came up with a few alternative solutions:

  1. Host my website on CentOS Linux. Since Dropbox supports Linux, I can easily read the Dropbox folder without any problem.
  2. Push the Dropbox content from Mac/Linux to FreeBSD using rsync periodically (e.g., every 5 mins, hourly, etc.), as sketched below. That way FreeBSD will have access to the Dropbox files.
  3. Set up an NFS service on a Linux box with access to Dropbox, and let FreeBSD mount the corresponding NFS share. This solution is okay if both machines are on the same network. It may raise some security concerns if the machines are connected over the public Internet.
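
A minimal sketch of option 2 (the paths, host name, and schedule are made up for illustration):

#Crontab entry on the Mac/Linux box that runs the native Dropbox client:
*/5 * * * * rsync -az --delete ~/Dropbox/Photos/ webuser@freebsd-host:/usr/local/www/album/photos/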

Another solution that I think may work is to install the native Linux Dropbox application on FreeBSD. FreeBSD supports running Linux applications via its Linux emulation layer. Back in the old days (FreeBSD 8), it was pretty easy to enable Linux support on FreeBSD (one click in sysinstall). In recent releases, they’ve made it harder because not many people want to run Linux binaries on FreeBSD. Based on my previous experience, I think it should work on the latest FreeBSD, but it may require some work.

Another crazy idea would be to run Dropbox with Wine on FreeBSD. But that goes way too far from my original purpose, and I am not a big fan of Wine because it adds too many libraries to the system.
