ZFS is the next generation file system. Unfortunately, it will not be shipped with Linux because of legal/licensing issues. Fortunately, it is possible to install it (ZFS on Linux) in a few commands. Since 2013, I have set up a number of Linux (CentOS/RHEL) servers with ZFS for use in a high-traffic production environment. They include a high-end commercial-grade server (Xeon-based with ECC memory), a gaming-quality desktop (i7-based), and an entry-level consumer-grade computer (i3-based). In this article, I will discuss what I have learned from my experience.
Warning on ZFS on Linux
ZFS on Linux is not a robust solution because it comes with a very important (and practically impossible) requirement: the system must never be updated and rebooted. If you cannot meet this requirement (and obviously you cannot), be prepared to spend many hours fixing the problem and getting your data back. See how I fixed a problem created by ZFS on Linux here. If you prefer a rock-solid and reliable setup, go with *BSD or Solaris.
Summary
Life is short. If you don’t want to spend your time going through the entire article, here is my advice: use FreeBSD (or another *BSD) if possible. Using ZFS on Linux is like putting a giraffe in the wild of Alaska: it is not going to work. However, you may want to stick with one operating system for your servers for various reasons, so here is my advice if you really want to run ZFS on Linux:
- Use a commercial-grade server when possible. A bare-bones, entry-level Dell PowerEdge T110 II (starting from US$300) is sufficient to run ZFS as a low-traffic, light-load, nightly backup server. A consumer-grade computer is not recommended for ZFS on Linux. If you really need one, get a computer with gaming-grade components and always back up the data to a different server.
- The Linux kernel plays an important role in ZFS. Try to use version 3 (e.g., RHEL 7) when possible. Using ZFS with version 2.6 (e.g., RHEL 6) may cause unexpected problems on non-commercial-grade hardware. As of October 2019, I cannot make version 4 (e.g., installed via kernel-ml, or CentOS 8) work with ZFS on RHEL 7:
Loading new spl-0.6.5.9 DKMS files...
Building for 4.11.2-1.el7.elrepo.x86_64
Building initial module for 4.11.2-1.el7.elrepo.x86_64
configure: error: unknown
Error! Bad return status for module build on kernel: 4.11.2-1.el7.elrepo.x86_64 (x86_64)
Consult /var/lib/dkms/spl/0.6.5.9/build/make.log for more information.
- Set up your ZFS pool with the unique hard drive identifiers (e.g., /dev/disk/by-id/someid), not the generic device names (e.g., /dev/sda). See the example after this list for how to find them.
- You may lose some storage space (less than 1%) compared to the same setup on FreeBSD, but the amount is trivial.
- If you have already installed ZFS on Linux, exclude the kernel from system updates. Otherwise the system will not load ZFS after a reboot, and it will take some extra work to get ZFS running again.
- Some Linux distributions such as CentOS 7 will not load ZFS at boot time. You can work around this with a cron job. If you have other services (e.g., MySQL, NFS, Apache) that depend on ZFS, you will need to restart them as well.
- Bookmark this ZFS emergency recovery guide. Trust me, you never know when your ZFS on Linux will decide to stop working.
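To find the persistent identifiers mentioned above, you can list the contents of /dev/disk/by-id. This is a minimal sketch; the identifier names and device letters will differ on your system:

#List persistent disk identifiers and the /dev/sdX devices they point to
ls -l /dev/disk/by-id/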
Do not update the kernel automatically
I’ve written an article on how to rescue your ZFS file system after updating the kernel. Please click here for details.
ZFS is not native to Linux. The whole idea of ZFS on Linux is nothing more than a bunch of modules being injected into the kernel, so that the kernel will load ZFS at boot time. This is a fantastic idea because it avoids the performance problems of ZFS-FUSE (which runs in user space, i.e., very slowly). However, there is a potential problem here: this “injection” only happens when the ZFS module (zfs-kmod) needs to be installed or updated. During this process, the system downloads the latest copy of zfs-kmod and injects it into the currently running kernel. See the problem here?
Given that, running root (/) on ZFS in Linux is a very, very bad idea. You will not be able to access anything when ZFS is not available at the kernel level.
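It is worth checking which kernels actually have the ZFS module built before you reboot. A minimal sketch, assuming a DKMS-based install of ZFS on Linux:

#Show which kernels the SPL/ZFS modules have been built for
dkms status
#Confirm the module is available to the currently running kernel
uname -r
modinfo -n zfs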
So we have four different situations after running the update command:

- Neither the kernel nor zfs-kmod is updated: nothing changes, and ZFS keeps working.
- Only zfs-kmod is updated: the new module is built against the running kernel, and ZFS keeps working.
- Only the kernel is updated: no ZFS module is built for the new kernel, so ZFS will be missing after the reboot.
- Both are updated: the new zfs-kmod is built against the currently running (old) kernel, so ZFS will still be missing after you reboot into the new kernel.
In general, if you really need to update the kernel, you will need to update the kernel first, reboot into the new kernel (ZFS will be missing at this point), and re-run the installation process so that the ZFS module is injected into the new kernel. Some people recommend uninstalling zfs-kmod and reinstalling it. Unless you have a very strong reason to use the latest kernel (e.g., you’ve got plenty of spare time), I don’t recommend doing it, because the whole process is a pain.
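For reference, here is a sketch of that workflow on a DKMS-based setup. The spl/zfs version (0.6.5.8) is only an example; match it to whatever is installed on your system:

#Update and reboot into the new kernel first
sudo yum update kernel
sudo reboot
#After the reboot, rebuild the modules for the now-running kernel
sudo dkms install spl/0.6.5.8 -k "$(uname -r)"
sudo dkms install zfs/0.6.5.8 -k "$(uname -r)"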
Another thing you can do is disable automatic updates and only update the system when there is a new update for both the kernel and zfs-kmod. Then you can update the kernel first, reboot, and update zfs-kmod after the reboot. Keep in mind, however, that you will run into problems eventually: many packages depend on the newer kernel, and when you try to update the system it will complain that you need to update the kernel first before updating those packages. You can get around this by skipping the broken packages (yum update --skip-broken).
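If you just need a one-off update that leaves the kernel alone, something like the following should work on CentOS/RHEL (these are standard yum options, but test them on your own system first):

#Update everything except kernel packages, skipping anything held back by the old kernel
sudo yum update --exclude="kernel*" --skip-broken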
In my setup, I simply exclude the kernel from updates. That way I only need to work with one kernel, and I know that that particular kernel knows how to handle the ZFS module.
sudo nano /etc/yum.conf

#Add the following line:
exclude=kernel*
If you are running into trouble, i.e., ZFS is missing after booting into a new kernel, you can try the following:
Before running the following commands, make sure that you know what you are doing.
#Make sure that you reboot to the kernel you want to fix.
#Find out what is the current kernel
uname -a
Linux 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

#In my example, it is: 3.10.0-514.2.2.el7.x86_64

#Basically we want to remove the following files:
ls -al /lib/modules/your_new_kernel/extra
-rw-r--r-- 1 root root 344K Dec 12 15:58 splat.ko
-rw-r--r-- 1 root root 167K Dec 12 15:58 spl.ko
-rw-r--r-- 1 root root  14K Dec 12 16:02 zavl.ko
-rw-r--r-- 1 root root  75K Dec 12 16:02 zcommon.ko
-rw-r--r-- 1 root root 2.2M Dec 12 16:02 zfs.ko
-rw-r--r-- 1 root root 130K Dec 12 16:02 znvpair.ko
-rw-r--r-- 1 root root  34K Dec 12 16:02 zpios.ko
-rw-r--r-- 1 root root 324K Dec 12 16:02 zunicode.ko

#If you have no extra modules installed other than ZFS and SPL, you can run the following:
sudo rm -Rf /lib/modules/*/extra/*
#Otherwise just remove the files one by one.

#And we want to do the same thing to the weak-updates.
ls -al /lib/modules/your_new_kernel/weak-updates
drwxr-xr-x. 2 root root 4.0K Sep 16 10:58 .
drwxr-xr-x. 7 root root 4.0K Sep 16 10:58 ..
lrwxrwxrwx  1 root root   54 Sep 16 10:56 splat.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/splat.ko
lrwxrwxrwx  1 root root   52 Sep 16 10:56 spl.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/spl.ko
lrwxrwxrwx  1 root root   53 Feb 22  2016 zavl.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/zavl.ko
lrwxrwxrwx  1 root root   56 Feb 22  2016 zcommon.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/zcommon.ko
lrwxrwxrwx  1 root root   52 Sep 16 10:58 zfs.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/zfs.ko
lrwxrwxrwx  1 root root   56 Sep 16 10:58 znvpair.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/znvpair.ko
lrwxrwxrwx  1 root root   54 Feb 22  2016 zpios.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/zpios.ko
lrwxrwxrwx  1 root root   57 Feb 22  2016 zunicode.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/zunicode.ko

#If you have no extra modules installed other than ZFS and SPL, you can run the following:
sudo rm -Rf /lib/modules/*/weak-updates/*
#Otherwise just remove the files one by one.

#Now, let's get into the fun part. We will remove them and reinstall them.
#Don't forget to match your version.
sudo dkms remove zfs/0.6.5.8 --all
sudo dkms remove spl/0.6.5.8 --all
sudo dkms --force install spl/0.6.5.8
sudo dkms --force install zfs/0.6.5.8
And we will verify the result.
sudo dkms status

spl, 0.6.5.8, 3.10.0-514.2.2.el7.x86_64, x86_64: installed
zfs, 0.6.5.8, 3.10.0-514.2.2.el7.x86_64, x86_64: installed
zfs, 0.6.5.8, 3.10.0-327.28.3.el7.x86_64, x86_64: installed-weak from 3.10.0-514.2.2.el7.x86_64
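Once the modules show up as installed for the running kernel, you still need to load them and re-import your pools. A minimal sketch of the standard commands:

#Load the freshly built module and bring the pools back
sudo modprobe zfs
sudo zpool import -a
zfs list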
The Kernel Version Matters
The kernel version does matter, and I would avoid using version 2.6 or below unless you have professional-grade hardware, such as a Xeon CPU. Here are my observations:
(The original comparison table covered FreeBSD 9 & 10 and the following test machines:)

- Intel Xeon E3-1240 V2, 8GB memory, US$250
- Intel Xeon E5-2430, 64GB memory, US$2,000
- Intel i7-4770, 32GB memory, US$900
- Intel i3-540, 8GB memory, US$500
However, that doesn’t mean you should always use the latest kernel. Remember one thing: always keep a copy of the previous kernel before switching to the latest one, because you never know whether ZFS will work with the new one. For example, I had big trouble getting ZFS to work with 2.6.32-573.7.1.el6.x86_64, which was the latest kernel available on CentOS 6.7 (as of Oct 26, 2015). I ended up switching the system back to 2.6.32-573.3.1.el6.x86_64 (the -1 kernel). So always test the system before making the switch.
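On CentOS 7 you can check which kernels are still installed and pin the boot entry to a known-good one. This is only a sketch of the standard GRUB2 commands; the entry number (1) is an example and will differ on your machine:

#List the installed kernels / GRUB2 boot entries
sudo awk -F\' '/^menuentry /{print i++ " : " $2}' /etc/grub2.cfg
#Boot the chosen entry by default (here: entry 1), then verify
sudo grub2-set-default 1
sudo grub2-editenv list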
The Hard Drive Identifier
Set up your ZFS pool with the unique, persistent hard drive identifiers (e.g., /dev/disk/by-id/wwn-0x1234c567890d0aaa). Do not use the generic device names (e.g., /dev/sda). When you reboot the system, the generic device name (/dev/sda) may change, and that is a problem for ZFS.
For example, when RHEL 7 names the hard drives, it first names the drives attached directly to the motherboard, which includes USB flash drives, SD cards, etc. After that, it names the drives attached to the PCIe RAID card. If you boot the computer with a USB flash drive attached that was not present when you set up the ZFS pool, that small change is enough to mess up your ZFS.
Here is an example:
History for 'storage':
zpool create -f storage raidz /dev/disk/by-id/wwn-0x5000c500206e46d4 \
                              /dev/disk/by-id/wwn-0x5000c500205eba0d \
                              /dev/disk/by-id/wwn-0x50014ee25a9074e2 \
                              /dev/disk/by-id/wwn-0x50024e9001c19fb2
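If you already created a pool using the generic names (/dev/sdX), you usually do not need to rebuild it. A sketch of the standard export/re-import trick, using the 'storage' pool from the example above:

#Re-import the pool using the persistent identifiers instead of /dev/sdX
sudo zpool export storage
sudo zpool import -d /dev/disk/by-id storage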
So far I have only noticed this problem with low-end / consumer-grade motherboards. It is not a problem on FreeBSD, which is smart enough to re-map the old device names.
The Stability
For some odd reason, ZFS can become unstable or even unavailable when the I/O load is heavy:
  pool: storage
 state: DEGRADED
  scan: none requested
status: One or more devices could not be opened. Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
config:

        NAME                        STATE     READ WRITE CKSUM
        storage                     DEGRADED     0     0     0
          raidz1-0                  DEGRADED     0     0     0
            wwn-0x5000c500206e46d4  ONLINE       0     0     0
            wwn-0x5000c500205eba0d  ONLINE       0     0     0
            wwn-0x50014ee25a9074e2  ONLINE       0     0     0
            wwn-0x50024e9001c19fb2  UNAVAIL      0     0     0
This kind of problem happens mainly on low-end, consumer-grade computers with older kernels. Once I upgraded the kernel to a newer version, the problem was gone; no hardware change was needed. Again, I have never experienced this kind of problem since FreeBSD 9. The only explanation I can think of is that older Linux kernels do not support ZFS and low-end computers very well.
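When a healthy drive shows up as UNAVAIL like this, the following is roughly what I would try once the device is visible again (a sketch only; the device id and pool name come from the example above):

#Bring the device back, clear the error counters, and verify the data
sudo zpool online storage wwn-0x50024e9001c19fb2
sudo zpool clear storage
sudo zpool scrub storage
sudo zpool status storage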
Load ZFS at Boot
Some Linux variants such as CentOS 7 will not load ZFS at boot time (in my case, the kernel is 3.10.0-327.28.3.el7.x86_64). I chose to load ZFS via a cron job. What if the ZFS pool contains files that are required by some service, e.g., your database or web server files live on ZFS? You will need to restart those services after loading the ZFS pool. Here is an example:
sudo nano /etc/crontab

#Example 1: Load all available ZFS pools
@reboot root sleep 20; zpool import -a;

#Example 2: Load all ZFS pools first, then restart the Apache, MySQL and NFS services
@reboot root sleep 20; zpool import -a; sleep 15; systemctl restart httpd.service && systemctl restart mariadb.service && systemctl restart nfs-server;
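Depending on your ZFS on Linux version, the packaged systemd units may do the same job more cleanly. A hedged alternative, assuming your package ships these units:

#Enable the systemd units that import and mount pools at boot
sudo systemctl enable zfs-import-cache.service zfs-mount.service zfs.target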
Good luck!
–Derrick