Amazon EC2 vs. Google Cloud Platform: Storage Speed Comparison

We've owned multiple cloud instances on both Amazon EC2 and Google Cloud Platform, and I have always wondered how they differ. So I decided to run a very simple speed comparison. All storage/disks are attached to a RHEL Linux instance and formatted as XFS, with everything left at the default settings. Here are the commands I used:

#Dumping 1GB of data:
dd if=/dev/zero of=file.out bs=1M count=1000

#Dumping 10GB of data:
dd if=/dev/zero of=file.out bs=1M count=10000
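
Note that dd with /dev/zero measures sequential writes and can be flattered by the page cache. If you want numbers closer to what actually reaches the disk, a variant like the following may be useful (conv=fdatasync forces a flush before dd reports the rate); the results below were produced with the plain commands above:

#Dumping 1GB of data and flushing it to disk before reporting the speed:
dd if=/dev/zero of=file.out bs=1M count=1000 conv=fdatasync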

Here are the results:

File Size   Amazon: General Purpose SSD   Amazon: Magnetic   Amazon: Throughput Optimized HDD   Google: Persistent Disk   Google: Local SSD
1GB         150 MB/s                      40.8 MB/s          78.2 MB/s                          1.30 GB/s                 1.20 GB/s
10GB        68.4 MB/s                     31.0 MB/s          68.0 MB/s                          62.4 MB/s                 338 MB/s

Based on these numbers, Google Cloud is the clear winner on performance, and in our experience it comes out ahead on pricing as well.


[VirtualBox] CentOS 7: NS_ERROR_FAILURE

After I rebooted one of my VirtualBox host servers today, I was unable to start the VirtualBox guests. The error was a common one: NS_ERROR_FAILURE.

The problem was caused by a kernel mismatch. All you need to do is rebuild the VirtualBox kernel modules so they match your running kernel. In my case, I had the following:

#This is my VirtualBox version
6.0.16


#This is my Linux kernel:
uname -r
3.10.0-1062.12.1.el7.x86_64


#This is my VirtualBox kernel module version:
modinfo vboxdrv
filename:       /lib/modules/3.10.0-514.10.2.el7.x86_64/weak-updates/vboxdrv.ko.xz
version:        5.0.40 r115130 (0x00240000)
license:        GPL
description:    Oracle VM VirtualBox Support Driver
author:         Oracle Corporation
retpoline:      Y
rhelversion:    7.6
srcversion:     3AFDBBC6FDA2CE8CF253D33
depends:
vermagic:       3.10.0-957.1.3.el7.x86_64 SMP mod_unload modversions
parm:           force_async_tsc:force the asynchronous TSC mode (int)

As you can see, the VirtualBox kernel module was built against the wrong kernel, and its version is 5.0.40 instead of 6.0.16. All I needed to do was rebuild the VirtualBox modules so they are compatible with the running Linux kernel. To do that, you will need to do the following:

  1. Remove all the old Linux kernels.
  2. Remove the old VirtualBox kernel modules.
  3. Uninstall VirtualBox.
  4. Reboot.
  5. Install VirtualBox again.
#Remove all of the old kernels:
sudo package-cleanup --oldkernels --count=1 -y; 


#Remove the module directories for kernels you no longer run (keep the one that matches uname -r):
cd /lib/modules/
ls


#Uninstall the Virtual Box
sudo yum remove VirtualBox-6.0


#Reboot
sudo reboot


#Install the Virtual Box
sudo yum install -y VirtualBox-6.0


#Install the Extension Pack (The version number may be different in your case)
wget --no-check-certificate https://download.virtualbox.org/virtualbox/6.0.16/Oracle_VM_VirtualBox_Extension_Pack-6.0.16.vbox-extpack
sudo VBoxManage extpack install --replace Oracle_VM_VirtualBox_Extension_Pack-6.0.16.vbox-extpack


#Start the Virtual Box again
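
To start the guests from the command line, here is a minimal sketch (the guest name "MyVM" is a placeholder; list your actual guest names with VBoxManage list vms):

#List the registered guests
VBoxManage list vms

#Start a guest without a GUI
VBoxManage startvm "MyVM" --type headless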

That’s it! Hope it helps!


[ZFS On Linux Trouble] This pool uses the following feature(s) not supported by this system…All unsupported features are only required for writing to the pool, zpool create: invalid argument for this pool operation

When I rebooted my computer and loaded my ZFS pool today, I got this error message:

#sudo zpool import -a
This pool uses the following feature(s) not supported by this system:
        org.zfsonlinux:project_quota (space/object accounting based on project ID.)
        com.delphix:spacemap_v2 (Space maps representing large segments are more efficient.)
All unsupported features are only required for writing to the pool.
The pool can be imported using '-o readonly=on'.
cannot import 'my_zpool': unsupported version or feature

On another machine, I also saw something similar when I tried to create a new pool:

zpool create: invalid argument for this pool operation

This kind of error usually happens when you move a ZFS pool from one system to another. For example, if your ZFS pool was created under ZFS v10 and you move it to a system that only supports ZFS v9, this error message will show up. Obviously, that was not the case here (and probably not in yours either). My system showed this message right after a reboot; it had nothing to do with moving the pool between machines. In short, the message is misleading, but it did give me an idea of what was going wrong.

Long story short: it is a known issue with ZFS on Linux. This kind of problem happens whenever your Linux kernel is updated. If you want to avoid it entirely, you can only do two things: never update your system kernel, or never reboot your server (so that the new kernel is never loaded). If you can't do either, then ZFS on Linux may not be for you.

If you need to access your data now, you can mount it as read only, although this is not a long term solution:

sudo zpool import my_zpool -o readonly=on

Another option is to reboot your server into the older, working kernel, assuming it is still available on your system.
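
On a BIOS-based CentOS 7 system, a minimal sketch for doing that looks like the following (the menu entry title is only an example; use whatever your own grub configuration reports):

#List the boot entries known to grub
sudo awk -F\' '/^menuentry / {print $2}' /etc/grub2.cfg

#Make the older kernel the default boot entry, then reboot
sudo grub2-set-default 'CentOS Linux (3.10.0-1062.9.1.el7.x86_64) 7 (Core)'
sudo reboot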

So here is the reason why your system could not open your ZFS pool:

  1. You are running Linux kernel ver A and ZFS on Linux ver X, and your system is happy.
  2. A new kernel is released (e.g., ver B). Your system downloads it, and the new kernel sits under /boot.
  3. Later, a new ZFS on Linux release (e.g., ver Y) becomes available. In theory, when ZFS on Linux is upgraded, DKMS is supposed to compile the module against every kernel on the system. In other words, both your current kernel (ver A) and the new pending kernel (ver B) should know how to use ZFS on Linux (both ver X and ver Y). Notice that I said "in theory", and you probably know that things are not ideal in reality.
  4. So when your system boots into the new kernel, for some reason the new kernel does not have a working ZFS module (ver Y) to open your ZFS pool, and you see that error message.

Here is how to solve the problem. Reinstalling the ZFS and DKMS packages alone is not going to fix it; you need to rebuild the DKMS modules against the new kernel. First, reboot your computer into the latest kernel. Here are my versions; yours may be different.

Old Kernel: 3.10.0-1062.9.1.el7.x86_64
New Kernel: 3.10.0-1062.12.1.el7.x86_64

Old DKMS ZFS Module:   0.8.2
New DKMS ZFS Module:   0.8.3

Remove your old kernels.

sudo package-cleanup --oldkernels --count=1 -y

Check your current DKMS status. It should contain some error:

sudo dkms status
Error! Could not locate dkms.conf file.
File: /var/lib/dkms/zfs/0.8.2/source/dkms.conf does not exist.

Clean up the DKMS folder:

#cd /var/lib/dkms/zfs/

#ls -al
# Move old libraries and old kernels to somewhere
0.8.2  <---- Move this to /tmp
0.8.3  <-- Keep
original_module
kernel-3.10.0-1062.9.1.el7.x86_64-x86_64 -> 0.8.3/3.10.0-1062.9.1.el7.x86_64/x86_64  <-- Move this to /tmp
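
A minimal sketch of that clean-up, using the names from the listing above (double-check the paths and keep the version that is actually in use):

cd /var/lib/dkms/zfs/
#Move the old module tree and the stale kernel symlink out of the way
sudo mv 0.8.2 /tmp/
sudo mv kernel-3.10.0-1062.9.1.el7.x86_64-x86_64 /tmp/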

Remove the old DKMS modules that are associated with old kernels:

sudo dkms remove zfs/0.8.2 --all;

Recompile the new DKMS module with the current kernel:

sudo dkms --force install zfs/0.8.3

Check your DKMS status again, it should be clean:

sudo dkms status
zfs, 0.8.3, 3.10.0-1062.12.1.el7.x86_64, x86_64: installed

If you see any old kernel still associated with the new DKMS module, remove it, e.g.:

#sudo dkms status
zfs, 0.8.3, 3.10.0-1062.12.1.el7.x86_64, x86_64: installed (original_module exists)
zfs, 0.8.3, 3.10.0-1062.9.1.el7.x86_64, x86_64: built (original_module exists) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!) (WARNING! Missing some built modules!)
sudo dkms remove zfs/0.8.3 -k 3.10.0-1062.9.1.el7.x86_64

Now you may try to import your ZFS pool again. If that doesn't work, mount the pool in read-only mode first, back up your data, rebuild the pool, and restore from the backup.
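
If you want a quick backup before rebuilding, here is a minimal sketch (the backup host and path are placeholders; the pool mount point is assumed to be /my_zpool):

#Import the pool read-only and copy the data to another machine
sudo zpool import my_zpool -o readonly=on
sudo rsync -aHv /my_zpool/ backupuser@backuphost:/backup/my_zpool/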


[ZFS On Linux] What to Do When Resilver Takes Very Long

If you check your zpool health status and notice an error like the following:

sudo zpool status
  pool: myzpool
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: resilvered 49.6G in 0 days 00:11:25 with 0 errors on Fri Jan 10 15:52:05 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        myzpool                                         DEGRADED     0     0     0
          raidz1-0                                      DEGRADED     0     0     0
            ata-TOSHIBA_0001                            ONLINE       0     0     0
            ata-TOSHIBA_0002                            ONLINE       0     0     0
            ata-TOSHIBA_0003                            FAULTED     36     0     0  too many errors
        cache
          nvme0n1                                       ONLINE       0     0     0

There are two possibilities: a hardware error or a software error. I perform the following steps to identify which one it is.

  1. Check whether the disk is missing from the system. You can do this by running fdisk -l. If the disk is available, try to clear the ZFS status. If the disk is missing, try rebooting the system.
  2. If the disk is still missing after rebooting, try replacing the hard drive cable.
  3. Once ZFS sees all the disks, run zpool clear myzpool. This forces ZFS to resilver the pool (see the sketch after this list). If the resilver runs at 100MB/s or above, it was probably a false alarm and you may stop here.
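
A minimal sketch of those checks (the pool name and disk ids are the ones from the example above; adjust them to your own):

#Check that the faulted disk is still visible to the OS
ls -l /dev/disk/by-id/ | grep ata-TOSHIBA

#If all disks are present, clear the error counters and let ZFS resilver
sudo zpool clear myzpool

#Watch the resilver speed and progress
sudo zpool status myzpool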

Assuming it is a hardware-related error, you can typically do the following:

  • Replace the SATA / SAS cable
  • Replace the hard drive
  • In the BIOS settings, change the SATA mode from IDE to AHCI

If you replace the hard drive, you will need to resilver the pool (a minimal replacement sketch follows). If the problem really was hardware, the pool should read/write at 100MB/s or more during the resilver, and depending on the amount of data on the faulty disk, it should take no more than 3 days to finish. Wait until the process is finished; if it ends up like the output below, with no errors, you may stop here.
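
A minimal replacement sketch (the pool and old disk names are from the example above; the new disk id is a placeholder):

#Replace the faulted disk with the new one
sudo zpool replace myzpool ata-TOSHIBA_0003 /dev/disk/by-id/ata-NEWDISK_0001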

sudo zpool status
  pool: myzpool
 state: ONLINE
  scan: resilvered 215M in 0 days 00:00:04 with 0 errors on Mon Jan 13 18:24:48 2020
config:

        NAME                                            STATE     READ WRITE CKSUM
        myzpool                                         ONLINE       0     0     0
          raidz1-0                                      ONLINE       0     0     0
            ata-TOSHIBA_0001                            ONLINE       0     0     0
            ata-TOSHIBA_0002                            ONLINE       0     0     0
            ata-TOSHIBA_0003                            ONLINE       0     0     0
        cache
          nvme0n1                                       ONLINE       0     0     0

errors: No known data errors

So you may be in one or more of the following situations:

  • Your hardware is consumer grade, e.g., the motherboard is not server grade, or the hard drive is designed for general use rather than 24/7 operation.
  • You have replaced the hard drive, and the resilver process is very slow (e.g., 5-15MB/s). ZFS cannot even give you an estimated finish time.
  • The estimated resilver end time keeps being pushed back and it seems to take forever. For example, ZFS estimates that the entire process will take 10 hours; after 5 hours, it says 9 more hours to go, or once it reaches 99.9%, it starts the whole process over again.
  • When you run dmesg, you see a lot of hardware-related errors, e.g.,
    ata2.00: status: { DRDY }
    ata2.00: failed command: READ FPDMA QUEUED
    ata2.00: cmd 60/78:e0:f0:70:77/00:00:39:00:00/40 tag 28 ncq 61440 in
    sd 1:0:0:0: [sdb] tag#25 FAILED Result: hostbyte=DID_OK driverbyte=DRIVER_SENSE
    sd 1:0:0:0: [sdb] tag#25 Sense Key : Illegal Request [current] [descriptor]
    sd 1:0:0:0: [sdb] tag#25 Add. Sense: Unaligned write command
    
  • ZFS shows that multiple disks are faulty. However, they come back online after rebooting the system.

If you are in any of these situations, there are multiple things you can do:

If this is an important server and you don’t have any backup data, just wait. There is nothing you can do.

If you have backup data, try destroying the pool and rebuilding it. The problem is that Linux or ZFS does not like the current configuration; it is a software rather than a hardware issue. By rebuilding the entire zpool, everything starts over, which can save you days, weeks, or even months of waiting. In my situation, I had spent two months resilvering the data on my secondary backup server. After letting it resilver for two months, I decided to rebuild the entire ZFS pool (using exactly the same hardware) and reload the data from my production server. It took less than a week to fill 50TB of data, and dmesg was clear of error messages.

Of course, sometimes the hard drive really is faulty. We can perform a simple test with the following commands. First, connect it to another server (or a USB enclosure) and run the following (replace sdX with the actual drive identifier):

sudo smartctl -a /dev/sdX | grep result

#Bad hard drive
SMART overall-health self-assessment test result: FAILED!

#Good hard drive
SMART overall-health self-assessment test result: PASSED
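
For a more thorough but still non-destructive check, you can also run a SMART long self-test first; a minimal sketch (the test runs on the drive itself and can take several hours):

#Start the long self-test in the background
sudo smartctl -t long /dev/sdX

#Check the result once the estimated time has elapsed
sudo smartctl -a /dev/sdX | grep -A 10 "Self-test log"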

Next, we perform a more intensive test. This will involve wiping your entire hard drive (writing data to every sector):

nohup sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress > dd.log &

#This may take few days. You can check the progress this way:
sudo tail dd.log

Once the process is done, run dmesg | grep sdX. If the hard drive is faulty, you will definitely see lots of error messages. In my case, pretty much none of the hard drives gave any errors. What does that mean? It means ZFS simply doesn't like those hard drives. All I can say is that my ZFS pool has been up and running (and error free) since I rebuilt the entire pool, using exactly the same hard drives and cables.

If you have tried this multiple times with no luck, there is one more thing you can try before dumping your hard drives: switch to FreeBSD.

I had a CentOS 7 server with exactly the same symptoms. I wiped the disks and rebuilt the pool, and I still couldn't make the error go away. So I decided to switch to FreeBSD 12 (as of April 2020), rebuilt the pool using exactly the same specifications, and filled it with data. There were no errors and the operation was extremely smooth.


[ZFS On Linux] How to Update the Linux Kernel without Rebooting the System

As of Jan 2020, I manage 65 Linux + ZFS servers. Normally, I prefer to reboot each server after updating its kernel (according to Red Hat, most kernel updates are security fixes). Without ZFS, that is not a big issue because rebooting a basic Linux server takes about 30 seconds. With ZFS, however, it can take more than 60 seconds if the ZFS dataset is large (it takes time to unload and reload the ZFS configuration). So I decided to experiment with a new idea: updating the kernel without rebooting the server. Keep in mind that this is not magic. The method still introduces downtime, but it is much shorter than a full reboot. Based on my experience, it cuts the downtime roughly in half.

Before you try it on a production server, I highly recommend trying it on a test server/VM first. If your server is a VM host, be aware that the VM guests may get shut down after the upgrade. You will need to wait for the system to rebuild the VM modules against the new kernel headers first, then restart the VM guests.

We will use kexec:

sudo yum install kexec-tools -y

Update the kernel, ZFS and DKMS modules

sudo yum update -y

Assuming that you are running an older kernel:

uname -r
3.10.0-1062.4.1.el7.x86_64

If you look in /boot/, you will notice that several newer kernels are available:

ls -al /boot/  | grep x86_64
-rw-r--r--   1 root root 150K Oct 18 12:19 config-3.10.0-1062.4.1.el7.x86_64
-rw-r--r--   1 root root 150K Nov 13 18:02 config-3.10.0-1062.4.3.el7.x86_64
-rw-r--r--   1 root root 150K Dec  2 11:37 config-3.10.0-1062.7.1.el7.x86_64
-rw-r--r--   1 root root 150K Dec  6 09:53 config-3.10.0-1062.9.1.el7.x86_64
-rw-------   1 root root  30M Dec 13 00:03 initramfs-3.10.0-1062.4.1.el7.x86_64.img
-rw-------   1 root root  13M Oct 22 15:41 initramfs-3.10.0-1062.4.1.el7.x86_64kdump.img
-rw-------   1 root root  30M Nov 16 00:07 initramfs-3.10.0-1062.4.3.el7.x86_64.img
-rw-------   1 root root  30M Dec  4 00:10 initramfs-3.10.0-1062.7.1.el7.x86_64.img
-rw-------   1 root root  30M Dec  7 00:14 initramfs-3.10.0-1062.9.1.el7.x86_64.img
-rw-r--r--   1 root root 312K Oct 18 12:19 symvers-3.10.0-1062.4.1.el7.x86_64.gz
-rw-r--r--   1 root root 312K Nov 13 18:03 symvers-3.10.0-1062.4.3.el7.x86_64.gz
-rw-r--r--   1 root root 312K Dec  2 11:37 symvers-3.10.0-1062.7.1.el7.x86_64.gz
-rw-r--r--   1 root root 312K Dec  6 09:53 symvers-3.10.0-1062.9.1.el7.x86_64.gz
-rw-------   1 root root 3.5M Oct 18 12:19 System.map-3.10.0-1062.4.1.el7.x86_64
-rw-------   1 root root 3.5M Nov 13 18:02 System.map-3.10.0-1062.4.3.el7.x86_64
-rw-------   1 root root 3.5M Dec  2 11:37 System.map-3.10.0-1062.7.1.el7.x86_64
-rw-------   1 root root 3.5M Dec  6 09:53 System.map-3.10.0-1062.9.1.el7.x86_64
-rwxr-xr-x   1 root root 6.5M Oct 18 12:19 vmlinuz-3.10.0-1062.4.1.el7.x86_64
-rw-r--r--   1 root root  171 Oct 18 12:19 .vmlinuz-3.10.0-1062.4.1.el7.x86_64.hmac
-rwxr-xr-x   1 root root 6.5M Nov 13 18:02 vmlinuz-3.10.0-1062.4.3.el7.x86_64
-rw-r--r--   1 root root  171 Nov 13 18:02 .vmlinuz-3.10.0-1062.4.3.el7.x86_64.hmac
-rwxr-xr-x   1 root root 6.5M Dec  2 11:37 vmlinuz-3.10.0-1062.7.1.el7.x86_64
-rw-r--r--   1 root root  171 Dec  2 11:37 .vmlinuz-3.10.0-1062.7.1.el7.x86_64.hmac
-rwxr-xr-x   1 root root 6.5M Dec  6 09:53 vmlinuz-3.10.0-1062.9.1.el7.x86_64
-rw-r--r--   1 root root  171 Dec  6 09:53 .vmlinuz-3.10.0-1062.9.1.el7.x86_64.hmac

Pick the newest one. In other words, we will do the following:

From: 3.10.0-1062.4.1.el7.x86_64
To: 3.10.0-1062.9.1.el7.x86_64

Before we begin, we want to make sure that all of the ZFS/DKMS modules have been built, including one for the latest kernel (3.10.0-1062.9.1.el7):

sudo dkms status
zfs, 0.8.2, 3.10.0-1062.4.1.el7.x86_64, x86_64: installed
zfs, 0.8.2, 3.10.0-1062.4.3.el7.x86_64, x86_64: installed
zfs, 0.8.2, 3.10.0-1062.7.1.el7.x86_64, x86_64: installed
zfs, 0.8.2, 3.10.0-1062.9.1.el7.x86_64, x86_64: installed

Keep in mind that my current system is still running the old kernel (3.10.0-1062.4.1.el7.x86_64):

uname -r
3.10.0-1062.4.1.el7.x86_64

modinfo zfs | grep version
version:        0.8.2-1
rhelversion:    7.7
srcversion:     29C160FF878154256C93164
vermagic:       3.10.0-1062.4.1.el7.x86_64 SMP mod_unload modversions

Now, we will use kexec to load the new kernel. Please replace the kernel version with the latest one in your system.

sudo kexec -u
sudo kexec -l /boot/vmlinuz-3.10.0-1062.9.1.el7.x86_64 --initrd=/boot/initramfs-3.10.0-1062.9.1.el7.x86_64.img  --reuse-cmdline

Running the following command introduces the downtime. Based on my experience, it should be no longer than 30 seconds. However, I recommend testing it on a non-production server first.

sudo systemctl kexec

During the switch, your remote session may be dropped. After waiting 15-30 seconds, try to connect to the server again.

Verify the kernel has been updated:

uname -r
3.10.0-1062.9.1.el7.x86_64

modinfo zfs | grep version
version:        0.8.2-1
rhelversion:    7.7
srcversion:     29C160FF878154256C93164
vermagic:       3.10.0-1062.9.1.el7.x86_64 SMP mod_unload modversions

Clean up the old kernels:

sudo package-cleanup --oldkernels --count=1 -y; 
sudo dkms remove zfs/0.8.2 -k 3.10.0-1062.4.1.el7.x86_64;
sudo dkms remove zfs/0.8.2 -k 3.10.0-1062.4.3.el7.x86_64;
sudo dkms remove zfs/0.8.2 -k 3.10.0-1062.7.1.el7.x86_64;
sudo dkms status;

Now your system is good to go.


ZFS – errors: Permanent errors have been detected in the following files:

I got the following messages today when I inspected my ZFS:

errors: Permanent errors have been detected in the following files:

        /mypool/data/file1.dat
        /mypool/data/file2.dat
        /mypool/data/file3.dat
        /mypool/data/file4.dat
        /mypool/data/file5.dat

As usual, the first thing I did was to scrub the entire pool, i.e.,

sudo zpool scrub mypool

Unfortunately, it didn't work. The errors were still there even though there were no checksum errors. So I decided to delete the files manually, and it ended up like this:

errors: Permanent errors have been detected in the following files:

        mypool/data:<0x1fa3a>
        mypool/data:<0x1fa45>
        mypool/data:<0x1fa46>
        mypool/data:<0x1f354>
        mypool/data:<0x1f664>

That's because deleting the files only removed the file pointers. Since ZFS no longer has the file names, it reports the object locations instead.

To solve this problem, you will need to go through the following:

First, make sure that there are no checksum errors and the pool is healthy, i.e., all hard drives are online and all error counts are zero.
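
A quick way to check this (the pool name is the one from the example above):

#A healthy system reports "all pools are healthy"
sudo zpool status -x

#Full status with per-device error counts for this pool
sudo zpool status -v mypool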

Next, try to scrub the pool again:

sudo zpool scrub mypool

Within a minute, try to stop the process:

sudo zpool scrub -s mypool

Check the status again. The error should be gone:

sudo zpool status -v
  pool: mypool
 state: ONLINE
  scan: scrub canceled on Sun Feb  3 12:18:06 2019
errors: No known data errors

If the error is still present, you may need to scrub the pool again.


CentOS 7 – dracut-initqueue timeout

I received a Christmas gift from the RHEL 7 / CentOS 7 / Linux kernel team today. After my system was updated to the new kernel (3.10.0-957.1.3.el7.x86_64), it gave me a few surprises. As a sysadmin, I don't want any surprises; what I really want is a working system. That's one of the reasons why I always suggest people use FreeBSD if possible. FreeBSD is a truly rock solid system.

Long story short: you have a working system, and it receives a new kernel (e.g., 3.10.0-957.1.3.el7.x86_64). You decide to boot into the new kernel, and the system takes forever to boot. You go down to the server room, turn on the monitor, and see the following messages:

[time stamp] dracut-initqueue[289]: Warning: dracut-initqueue timeout - starting timeout scripts
[time stamp] dracut-initqueue[289]: Warning: dracut-initqueue timeout - starting timeout scripts
...
[time stamp] dracut-initqueue[289]: Warning: Could not boot.
[time stamp] dracut-initqueue[289]: Warning: /dev/disk/by-uuid/XXX does not exist
   Starting Dracut Emergency Shell...
Warning: /dev/disk/by-uuid/XXX does not exist

Generating "/run/initramfs/rdsosreport.txt"

Entering emergency mode. Exit the shell to continue.
Type "journalctl" to view system logs.
You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot
after mounting them and attach it to a bug report.

dracut:/#

In my situation, the system had no problem booting into the older kernel; it just did not like the new one. I checked my /etc/fstab settings and disabled all of the non-standard devices.

#The following are standard.
UUID=12347890-1234-9512-9518-963852710258       /                       xfs     defaults        0 0
UUID=12347890-1234-4513-7532-963852710258       /boot                   xfs     defaults        0 0
UUID=12347890-1234-9587-8526-963852710258       swap                    swap    defaults        0 0


#The new kernel does not like it. I have to comment it out.
#/storage/data/Dropbox.img                      /Dropbox/               ext4    defaults        0 0

That's it. I simply mount the image after the system has booted, and the problem is solved. This is done by mounting the image from a script that runs via /etc/crontab (@reboot).
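
A minimal sketch of that crontab entry (the image path and mount point are the ones from the fstab example above; the sleep simply gives the system a moment to finish booting):

#/etc/crontab
@reboot         root    sleep 30; mount -o loop -t ext4 /storage/data/Dropbox.img /Dropbox/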

The second surprise was even worse. On one of my Linux machines, the OS was installed on a USB drive (because all of the SATA ports were used for RAID storage). For some odd reason, the new kernel could not boot because the system lives on a USB flash drive. I tried installing a fresh CentOS 7 on the same hardware (the installation disk contains the latest kernel) and it gave the same result. I ended up installing the OS on a SATA hard drive, which is not what I wanted because my computer case does not have space for another hard drive.

Sometimes, I think the Linux kernel / system engineers have way too much spare time.


“ZFS on Linux”: The ZFS modules are not loaded. Try running ‘/sbin/modprobe zfs’ as root to load them.

Last Updated: May 10, 2020

This article is based on my experience with CentOS 7. If you are running other Linux distributions, please adjust the commands and package names accordingly (e.g., yum –> apt-get).

As of Oct 3, 2019, I cannot get ZFS on Linux running on CentOS 8.

ZFS on Linux is not a robust way to get ZFS up and running in a Linux environment. Unlike FreeBSD, Linux does not support ZFS natively in its kernel. The developers of ZFS on Linux came up with a rather crappy solution: by injecting ZFS into the kernel via DKMS, the Linux kernel learns how to speak ZFS. It works very well, with a single assumption: the system is never updated or rebooted after installing ZFS on Linux. So what happens after you update the system (e.g., the kernel or the ZFS on Linux packages) and the system gets rebooted? There is a good chance that your ZFS module will not be loaded:

Event 1: You update the kernel first, then ZFS on Linux afterward.
  • After reboot: Before Dec 12, 2018: your system will load the ZFS modules. Dec 12, 2018 – Dec 2019: probably not. After Jan 2020: 50/50.
  • What to do: Remove the old kernels from the DKMS database, then rebuild the ZFS (and SPL, if running 0.7.x) modules against the new kernel in the DKMS database.

Event 2: You update ZFS on Linux first, then the kernel afterward.
  • After reboot: If your system boots into the new kernel (the default), it WILL NOT load the ZFS modules.
  • What to do: Remove and reinstall the ZFS and DKMS packages. Remove the old kernels from the DKMS database, then rebuild the SPL and ZFS modules against the new kernel.

Event 3: You update ZFS on Linux only; the kernel has not been updated.
  • After reboot: Your system will load the ZFS modules.
  • What to do: Remove the old kernels from the DKMS database, then rebuild the SPL and ZFS modules against the new kernel.

Event 4: You update the kernel only; ZFS on Linux has not been updated.
  • After reboot: Your system will load the ZFS modules.
  • What to do: Remove the old kernels from the DKMS database, then rebuild the SPL and ZFS modules against the new kernel.

There are two steps to get your data back. We will start by cleaning up and rebuilding the DKMS modules; if that does not work, we will reinstall the ZFS packages. I am also assuming that your system has booted into the new kernel. Please keep in mind that, as of Oct 3, 2019, I could not get ZFS on Linux to work with a v4 Linux kernel (either via kernel-ml or CentOS 8); it only works with v3 for me.

If you need to access your data, the easiest way is to boot to the old working kernel. Once you are ready to clean up the problem, boot to the new kernel and follow my instructions below.

Step 1: Clean up and Reinstall DKMS Modules

Most of the time, a ZFS on Linux update messes up the DKMS modules. I suggest cleaning up and reinstalling the DKMS modules. As of December 12, 2018, a ZFS on Linux update may remove all of the DKMS modules for no apparent reason.

First, check your DKMS status. You will need to clean up DKMS if it is empty (nothing is installed), orphaned (the module is installed but not attached to any kernel), or contains multiple kernels. If it is clean (a single kernel only), you may skip this step. If you are using ZFS on Linux 0.7.x, DKMS will contain two modules (zfs and spl); if you are using 0.8.x, it will contain only one (zfs).

#dkms status

In general, all you want is a single version of the DKMS module installed, attached to one kernel only. If you see multiple versions of the module, or multiple kernels, that's bad.

#An example of dirty DKMS status (This is bad):
spl, 0.7.12, 3.10.0-862.14.4.el7: installed (original_module exists) (WARNING! Diff between built and installed module!)
spl, 0.7.12, 3.10.0-957.1.3.el7: installed (original_module exists)
zfs, 0.7.12, 3.10.0-862.14.4.el7: installed (original_module exists) (WARNING! Diff between built and installed module!)
zfs, 0.7.12, 3.10.0-957.1.3.el7: installed (original_module exists)

#An example of empty DKMS status (This is bad):
(empty)

#An example of DKMS status without a kernel (This is bad):
zfs, 0.7.12: added
spl, 0.7.12: added

#An example of clean DKMS status (This is good):
spl, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed
zfs, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed 

or 

spl, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed (original_module exists)
zfs, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed (original_module exists)

or 

zfs, 0.8.3, 3.10.0-1127.el7.x86_64, x86_64: installed (original_module exists)

In the examples above, my ZFS on Linux version is 0.7.12, my old kernel is 3.10.0-862.14.4.el7, and my new kernel is 3.10.0-957.1.3.el7. Your versions may be different.

If your situation is something like the following:

Error! Could not locate dkms.conf file.
File: /var/lib/dkms/zfs/0.8.2/source/dkms.conf does not exist.

That means you have multiple versions of the ZFS DKMS module installed on your system. In my case, 0.8.3 was in use and the old 0.8.2 was still present. Check the folder (/var/lib/dkms/zfs/) to see whether any old module trees need to be removed.

#Currently running: dkms ZFS 0.8.3, kernel 3.10.0-1062.18.1.el7.x86_64

cd /var/lib/dkms/zfs/

#ls -al
total 12K
0.8.2 <---- Delete this
0.8.3
kernel-3.10.0-1062.1.2.el7.x86_64-x86_64 -> 0.8.2/3.10.0-1062.1.2.el7.x86_64/x86_64 <---- Delete this
kernel-3.10.0-1062.4.1.el7.x86_64-x86_64 -> 0.8.2/3.10.0-1062.4.1.el7.x86_64/x86_64 <---- Delete this
kernel-3.10.0-1062.4.3.el7.x86_64-x86_64 -> 0.8.2/3.10.0-1062.4.3.el7.x86_64/x86_64 <---- Delete this
kernel-3.10.0-1062.7.1.el7.x86_64-x86_64 -> 0.8.2/3.10.0-1062.7.1.el7.x86_64/x86_64 <---- Delete this
kernel-3.10.0-1062.9.1.el7.x86_64-x86_64 -> 0.8.3/3.10.0-1062.9.1.el7.x86_64/x86_64 <---- Delete this
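
If the dkms remove commands below leave these old directories and symlinks behind, you can clean them up manually; a minimal sketch using the names from the listing above (double-check the paths first):

cd /var/lib/dkms/zfs/
#Remove the old module tree and the stale kernel symlinks
sudo rm -rf 0.8.2
sudo rm -f kernel-3.10.0-1062.1.2.el7.x86_64-x86_64
sudo rm -f kernel-3.10.0-1062.4.1.el7.x86_64-x86_64
sudo rm -f kernel-3.10.0-1062.4.3.el7.x86_64-x86_64
sudo rm -f kernel-3.10.0-1062.7.1.el7.x86_64-x86_64
sudo rm -f kernel-3.10.0-1062.9.1.el7.x86_64-x86_64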

You may want to remove both ZFS and SPL DKMS modules first, then reinstall them:

#If your version is 0.7.x:
sudo dkms remove zfs/0.7.12 --all; 
sudo dkms remove spl/0.7.12 --all; 


#If your version is 0.8.x:
sudo dkms remove zfs/0.8.3 --all; 

Sometimes, you will need to remove the old kernel manually:

sudo dkms remove zfs/0.7.12 -k 3.10.0-862.14.4.el7.x86_64; 
sudo dkms remove spl/0.7.12 -k 3.10.0-862.14.4.el7.x86_64;

Time to reinstall them:

#Don't forget to use the version that matches your system. In my situation, it was 0.7.12 / 0.8.3

#0.7.x:
sudo dkms --force install spl/0.7.12; 
sudo dkms --force install zfs/0.7.12;

#0.8.x:
sudo dkms --force install zfs/0.8.3;

Run dkms status again. You should see the ZFS (and SPL, for 0.7.x) modules attached to the new kernel:

#If your version is 0.7.x:
spl, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed
zfs, 0.7.12, 3.10.0-957.1.3.el7.x86_64, x86_64: installed

#If your version is 0.8.x:
zfs, 0.8.3, 3.10.0-1127.el7.x86_64, x86_64: installed

Try to load the ZFS module and import your ZFS data:

sudo /sbin/modprobe zfs
sudo zpool import -a

If everything looks good, reboot your system and verify that ZFS loads automatically. Once everything is okay, remove the old kernels from the system.

sudo package-cleanup --oldkernels --count=1 -y

That's it, you are good to go.


Step 2: Reinstall ZFS packages

If you have tried the first step and it didn't work, you may want to reinstall the ZFS packages. Here is a typical error message:

You try to import the ZFS data and the system complains:

#zpool import -a
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.

So you try to load the ZFS module and the system complains again:

#/sbin/modprobe zfs
modprobe: FATAL: Module zfs not found.
or
modprobe: ERROR: could not insert 'zfs': Invalid argument

What you need to do is erase all of the ZFS and related packages:

yum erase zfs zfs-dkms libzfs2 spl spl-dkms libzpool2 -y

Please reboot the system. This step is very important.

reboot

After that, try to install ZFS again.

yum install zfs -y

If the system complains about mismatched dependencies, remove the affected packages first and run the installation again.

After the installation, try to start the ZFS module:

/sbin/modprobe zfs
zpool import -a

If ZFS is up and running, clean up your DKMS modules as described in step 1. If it complains again, follow the steps below:

  1. Reboot.
  2. Clear the yum repository cache (sudo yum clean all) and try to update the system again.
  3. Reboot into the latest kernel.
  4. Erase the ZFS and related packages and try again.

Keep in mind that ZFS on Linux is based on DKMS, a very buggy and unreliable platform. Sometimes uninstalling and reinstalling the packages does not behave the same way as a fresh install. Before you send your server to the landfill, try this:

Check the dkms status:

#dkms status
#version 0.7.x
zfs, 0.7.2: added
spl, 0.7.2: added

#version 0.8.x
zfs, 0.8.3: added

If you see this, the ZFS packages have been installed, but DKMS has not built the modules for your kernel. You will need to tell DKMS to do it:

#version 0.7.x
dkms --force install zfs/0.7.2
dkms --force install spl/0.7.2

#version 0.8.x
dkms --force install zfs/0.8.3
#Try to start ZFS again.
/sbin/modprobe zfs
zpool import -a

If you have already tried this more than three times without any luck, don't waste your time. You may want to move the ZFS disks to a different server; the new server should be able to recognize them. The original server can then access the ZFS data on the new server via NFS, using the original paths. That will minimize the impact of the change.
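
A minimal sketch of that workaround (the host name, pool name, and paths are placeholders; adjust them to your environment):

#On the new server: import the pool using the disk ids
sudo zpool import -d /dev/disk/by-id mypool
#Then export the dataset's mount point over NFS (e.g., via an entry in /etc/exports)

#On the original server: mount the share at the original path
sudo mount -t nfs newserver:/mypool/data /mypool/data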

Keep in mind that the ZFS version is very important. A server with a newer ZFS version can read disks created with older ZFS versions, but not the other way around. You can check the ZFS versions by running the following:

#Get the version of the host:
sudo zfs upgrade -v
sudo zpool upgrade -v


#Get the version of the ZFS disks:
sudo zfs get version
sudo zpool get version

This is pretty much what I have to do on my 60 servers every month. If you are in a similar situation, I guarantee you will become an expert at fixing this kind of mess after a few months. Good luck!


ZFS Cluster: A Network-Based ZFS Implementation

I have always liked experimenting with the idea of building a ZFS cluster, i.e., something that combines the robustness of ZFS with the capacity of a cluster. So I came up with a test environment for this prototype.

The idea is pretty simple. Typically, when we build a ZFS server, the members of the RAID are hard drives. In my experiment, I use files instead of hard drives, where the corresponding files live on network shares (mounted via NFS). Since the I/O bottleneck will be the network, I include network bonding to increase the overall bandwidth.

The yellow servers are simply regular servers running ZFS with an NFS service. I use the following command to generate a placeholder file for ZFS to use:

#This will create an empty 1TB file, you can think of it as a 1TB hard drive / place holder.
truncate -s 1000G file.img

Make sure that the corresponding NFS service is serving the file.img to the client (the blue server).
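
A minimal sketch of the NFS export on each yellow server (the path and subnet are examples):

#/etc/exports on a yellow server
/storage/share 192.168.1.0/24(rw,sync,no_root_squash)

#Apply the export and make sure the NFS server is running
sudo exportfs -ra
sudo systemctl enable nfs-server
sudo systemctl start nfs-server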


The blue server will be the NFS client of the yellow servers, where I will use it to serve the data to other computers. It has the following features:

It has a network bond based on three Ethernet adapters:

#cat /proc/net/bonding/bond0


Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: enp0s25
MII Status: up
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp0s25
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:19:d1:b2:1e:0d
Slave queue ID: 0

Slave Interface: enp6s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:18:4d:f0:12:7b
Slave queue ID: 0

Slave Interface: enp6s1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:22:3f:f6:98:03
Slave queue ID: 0
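
For reference, a bond like this can be created with NetworkManager; a minimal sketch (the connection names are examples, and balance-alb matches the adaptive load balancing mode shown above):

#Create the bond in adaptive load balancing mode
sudo nmcli con add type bond con-name bond0 ifname bond0 mode balance-alb

#Attach the three Ethernet adapters as slaves
sudo nmcli con add type bond-slave con-name bond0-port1 ifname enp0s25 master bond0
sudo nmcli con add type bond-slave con-name bond0-port2 ifname enp6s0 master bond0
sudo nmcli con add type bond-slave con-name bond0-port3 ifname enp6s1 master bond0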

It mounts the yellow servers via NFS

#df
192.168.1.101:/storage/share   25T  3.9T   21T  16%  /nfs/192-168-1-101
192.168.1.102:/storage/share   8.1T  205G  7.9T   3% /nfs/192-168-1-102
192.168.1.103:/storage/share   8.3T  4.4T  4.0T  52% /nfs/192-168-1-103
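
These mounts can be made permanent with /etc/fstab entries on the blue server, for example:

#/etc/fstab on the blue server (one line per yellow server)
192.168.1.101:/storage/share   /nfs/192-168-1-101   nfs   defaults   0 0
192.168.1.102:/storage/share   /nfs/192-168-1-102   nfs   defaults   0 0
192.168.1.103:/storage/share   /nfs/192-168-1-103   nfs   defaults   0 0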

The ZFS has the following structure:

#sudo zpool status
        NAME                                  STATE     READ WRITE CKSUM
        storage                               ONLINE       0     0     0
          raidz1-0                            ONLINE       0     0     0
            /nfs/192-168-1-101/file.img       ONLINE       0     0     0
            /nfs/192-168-1-102/file.img       ONLINE       0     0     0
            /nfs/192-168-1-103/file.img       ONLINE       0     0     0

The pool was created with the following command:

zpool create -f storage raidz /nfs/192-168-1-101/file.img \
                              /nfs/192-168-1-102/file.img \
                              /nfs/192-168-1-103/file.img

The speed of the network is the limiting factor, so I don't expect the I/O speed to go beyond 375MB/s (125MB/s x 3). Also, since this is a file-based ZFS pool (the ZFS on the blue server is backed by files, not disks), the overall performance will be somewhat discounted.

#Write speed
time dd if=/dev/zero of=/storage/data/file.out bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.6717 s, 285 MB/s
#Read speed
time dd if=/storage/data/file.out of=/dev/null bs=1M
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.9618 s, 265 MB/s

Both the read and write speeds are roughly 75% of the maximum bandwidth, which is not bad at all.

So I decided to take one of the yellow servers offline to see what would happen:

#sudo zpool status
        NAME                                  STATE     READ WRITE CKSUM
        storage                               ONLINE       0     0     0
          raidz1-0                            ONLINE       0     0     0
            /nfs/192-168-1-101/file.img       ONLINE       0     0     0
            /nfs/192-168-1-102/file.img       ONLINE       0     0     0
            /nfs/192-168-1-103/file.img       UNAVAIL      0     0     0 cannot open

And the pool is still functioning, that’s pretty cool!


Here are some factors that affect the overall performance:

  • The quality of the Ethernet card matters, including PCIe vs. PCI, 1 lane vs. 16 lanes, total throughput, etc.
  • The network traffic. Is the switch busy?
  • How are these servers connected together? One big switch, or multiple switches bridged together? If they are bridged, the limit will be the bridge link, which is 125MB/s on a gigabit network.

Again, this is for experimental purposes only. If you decide to put it in a production environment, do so at your own risk. Have fun!


How to install ZFS on RHEL / CentOS 7

A friend of mine wanted to try ZFS on CentOS 7, so I decided to write a guide for him. The following instructions have been well tested on CentOS 7.

Before you decide to put ZFS in a production use, you should be aware of the following:

  • ZFS was originally designed for Solaris and BSD systems. Because of legal and licensing issues, ZFS cannot be shipped with Linux.
  • Since ZFS is open source, developers have ported it to Linux and made it run at the kernel level via DKMS. This works great as long as you don't update the kernel; otherwise the ZFS module may not load with the new kernel.
  • In a ZFS-on-Linux environment, it is a bad idea to update the system automatically.
  • For some odd reason, ZFS on Linux only works well with server-grade or gaming-grade computers. Do not run ZFS on Linux on entry-level computers.

Instructions

By default, ZFS is not available in the standard CentOS repository. We will need to include some 3rd party repositories here.

sudo rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo rpm -Uvh https://forensics.cert.org/cert-forensics-tools-release-el7.rpm
sudo rpm -Uvh http://download.zfsonlinux.org/epel/zfs-release.el7_6.noarch.rpm

sudo yum update -y
sudo yum groupinstall -y "Development Tools" "Development Libraries" "Additional Development"
sudo yum install -y kernel-devel kernel-headers

It is very likely that the system will install a new kernel. You may want to reboot the computer before installing ZFS.

sudo reboot

Please make sure that the system does not update automatically. If you need to update the system, please exclude the kernel and related modules from the update.

sudo nano /etc/yum.conf 
exclude=kernel*

Now you are on the latest kernel. Let’s install the ZFS:

sudo yum install -y zfs
sudo /sbin/modprobe zfs

Now you can create a simple striped ZFS pool. A striped pool gives you the best performance and zero data protection. When referencing the disks, we don't want to use /dev/sd* names; instead, we want to use the device ids directly, e.g., /dev/disk/by-id/wwn-0x8000c8004e8ac11a:

ls /dev/disk/by-id/


lrwxrwxrwx 1 root root  9 Jan  3 21:49 wwn-0x8000c8004e8ac11a -> ../../sde
lrwxrwxrwx 1 root root  9 Jan  3 21:49 wwn-0x8000c8008ad0a22d -> ../../sdd
lrwxrwxrwx 1 root root  9 Jan  3 21:49 wwn-0x8000c8008b4f6338 -> ../../sda
lrwxrwxrwx 1 root root  9 Jan  3 21:49 wwn-0x8000c8008b52144c -> ../../sdc
lrwxrwxrwx 1 root root  9 Jan  3 21:49 wwn-0x8000c8008b59a553 -> ../../sdb

Once you have identified the hard disks, we can create a simple striped pool. This will create a ZFS pool mounted at /storage. You can replace storage with anything you like.

#We are going to create a striped ZFS pool with three disks. You can add more if you like. For a striped design, the more disks, the faster the I/O.
zpool create -f storage /dev/disk/by-id/device1 /dev/disk/by-id/device2 /dev/disk/by-id/device3 
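
If you want redundancy instead of raw speed, a raidz pool can be created the same way; a minimal sketch (the device ids are placeholders, as above):

#Single-parity raidz: the pool survives the loss of one disk
zpool create -f storage raidz /dev/disk/by-id/device1 /dev/disk/by-id/device2 /dev/disk/by-id/device3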

storage is like a big umbrella. Under this umbrella, we create one or more datasets ("partitions") for storing data.

zfs create storage/mydata

If you have a fast CPU like an i7, you may want to turn on compression. This reduces the amount of data written to disk and improves the overall performance.

sudo zfs set compression=lz4 storage

Finally, we want to change the ownership and permissions:

#Assuming that you are part of the wheel group
sudo chown -R root:wheel /storage
sudo chmod -R g+rw /storage

Now run df and you should see the ZFS filesystems on your system.

#df -h
Filesystem                   Size  Used Avail Use% Mounted on
storage                      6.8T  128K  6.8T   1% /storage
storage/mydata                13T  6.1T  6.8T  48% /storage/mydata

You can monitor the health of the ZFS system.

#sudo zpool status



  pool: storage
 state: ONLINE
config:

        NAME                        STATE     READ WRITE CKSUM
        storage                     ONLINE       0     0     0
            wwn-0x8000c8004e8ac11a  ONLINE       0     0     0
            wwn-0x8000c8008ad0a22d  ONLINE       0     0     0
            wwn-0x8000c8008b4f6338  ONLINE       0     0     0
            wwn-0x8000c8008b52144c  ONLINE       0     0     0


errors: No known data errors

For some very odd reason, the pool may not be imported automatically after a reboot. We want to make sure that ZFS is loaded after the system restarts.

#sudo nano /etc/crontab

#Add the following:
@reboot         root    sleep 10; zpool import -a;

Now you can test the pool by running dd or copying a big file to it. If you are not happy with the configuration, you can always destroy the pool and re-create it.

sudo zpool destroy storage


Have fun!
