ZFS on Linux is a poorly designed software solution for getting ZFS up and running in Linux environments. Unlike on FreeBSD, ZFS does not work with the Linux kernel natively. The developers of ZFS on Linux came up with a creative hack: by injecting the ZFS modules into the kernel via DKMS, the Linux kernel learns how to speak ZFS. It works very well, but only under a single assumption: the system will never be updated or rebooted after ZFS on Linux is installed. So what happens after you update the system (such as the kernel) or the system gets rebooted? There is a good chance that your ZFS module will not be loaded. Otherwise you wouldn’t be reading this article, right?

You just can’t assume that a MINI works like a semi-truck. The MINI was created for a totally different purpose. If you want to use a MINI like a semi-truck, you have to accept the corresponding risks.

Long story short, below is how you get your data back:


You reboot your computer after a system upgrade (either updating to a new kernel, updating the DKMS library, or both), and you notice that ZFS is not loaded, i.e.,

#zpool import -a
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.


#/sbin/modprobe zfs
modprobe: FATAL: Module zfs not found.



#dkms status
spl, 0.7.7, 3.10.0-693.17.1.el7.x86_64, x86_64: installed (original_module exists)
spl, 0.7.7, 3.10.0-693.21.1.el7.x86_64, x86_64: installed
zfs, 0.7.7, 3.10.0-693.17.1.el7.x86_64, x86_64: installed (original_module exists) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!)
zfs, 0.7.7, 3.10.0-693.21.1.el7.x86_64, x86_64: installed (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!)

What you need to do is erase ZFS and all related packages:

yum erase zfs zfs-dkms libzfs2 spl spl-dkms libzpool2 -y

Please reboot the system. This step is very important.

reboot

After that, try to install ZFS again.

yum install zfs -y

If the system complains about mismatched package dependencies, remove the affected packages first and run the installation again.
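If you are not sure which packages are involved, here is a quick way to list them. Treat this as a sketch; the exact package names vary by repository, and the angle-bracket part is just a placeholder:

#List any installed ZFS/SPL related packages
rpm -qa | grep -Ei 'zfs|spl'

#Remove the packages reported above, then retry the installation
sudo yum erase -y <package-names-from-the-list-above>
sudo yum install -y zfs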

After the installation, try to start the ZFS module:

/sbin/modprobe zfs

zpool import -a

If ZFS is up and running, you may move on to the next step. Otherwise, I suggest the following:

  • Reboot
  • Clear the yum repository cache (sudo yum clean all) and try to update the system again
  • Reboot into the latest kernel
  • Erase ZFS and the related packages and try again

ZFS is portable. If you are unable to get it working, try moving the ZFS disks to another server; the new server should be able to recognize the ZFS disks. The original server can then access the ZFS data on the new server via NFS using the original path, which minimizes the impact of the change.
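Here is a rough sketch of that fallback. The pool name (storage), hostname (new-server), mount point, and export options are just examples; adjust them to your environment:

#On the new server: import the pool from the moved disks
sudo zpool import -f storage

#On the new server: export the pool's mount point over NFS
echo '/storage 192.168.1.0/24(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -ra

#On the original server: mount it back under the original path
sudo mount -t nfs new-server:/storage /storage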

Assuming that your ZFS is up and running, we will need to check the DKMS status:

#dkms status

If you see no error message, you are good to go.

spl, 0.7.8, 3.10.0-693.21.1.el7.x86_64, x86_64: installed (original_module exists)
zfs, 0.7.8, 3.10.0-693.21.1.el7.x86_64, x86_64: installed (original_module exists)

If you see an error message like the following, you may need to rebuild the modules again:

spl, 0.7.8, 3.10.0-693.21.1.el7.x86_64, x86_64: installed (original_module exists)
zfs, 0.7.8, 3.10.0-693.21.1.el7.x86_64, x86_64: installed (original_module exists) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!)
sudo dkms remove zfs/0.7.8 --all
sudo dkms remove spl/0.7.8 --all
sudo dkms --force install spl/0.7.8
sudo dkms --force install zfs/0.7.8
sudo dkms status

Now you understand why no professional trucker uses a MINI for work. Good luck!


I have always liked the idea of building a ZFS cluster, i.e., something with the robustness of ZFS and the capacity of a cluster. So I came up with a test environment for this prototype.

The idea is pretty simple. Typically when we build a ZFS server, the members of the RAID are the hard drives. In my experiment, I use files instead of hard drives, where the corresponding files live on a network share (mounted via NFS). Since the I/O bottleneck will be the network, I include network bonding to increase the overall bandwidth.

The yellow servers are simply regular servers running ZFS with the NFS service. I use the following command to generate a simple file / placeholder for ZFS to use:

#This will create an empty (sparse) 1TB file; you can think of it as a 1TB hard drive / placeholder.
truncate -s 1000G file.img

Make sure that the corresponding NFS service is serving the file.img to the client (the blue server).
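For reference, a minimal export on each yellow server might look like this; the path and the network range are only examples, and you should match the options to your own NFS setup:

#/etc/exports on a yellow server
/storage/share 192.168.1.0/24(rw,sync,no_root_squash)

#Apply the export and make sure the NFS server is running
sudo exportfs -ra
sudo systemctl enable nfs-server
sudo systemctl start nfs-server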


The blue server will be the NFS client of the yellow servers, where I will use it to serve the data to other computers. It has the following features:

It has a network bonding based on three Ethernet adapters:

#cat /proc/net/bonding/bond0


Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: enp0s25
MII Status: up
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: enp0s25
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:19:d1:b2:1e:0d
Slave queue ID: 0

Slave Interface: enp6s0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:18:4d:f0:12:7b
Slave queue ID: 0

Slave Interface: enp6s1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 00:22:3f:f6:98:03
Slave queue ID: 0
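For reference, a similar bond can be created on CentOS 7 with NetworkManager. This is only a sketch; the interface names are the ones from my setup, and you may need to tune the bonding options for your hardware:

#Create a bond in adaptive load balancing (balance-alb) mode
sudo nmcli con add type bond con-name bond0 ifname bond0 mode balance-alb

#Attach the three Ethernet adapters as slaves
sudo nmcli con add type bond-slave ifname enp0s25 master bond0
sudo nmcli con add type bond-slave ifname enp6s0 master bond0
sudo nmcli con add type bond-slave ifname enp6s1 master bond0

#Bring the bond up and verify
sudo nmcli con up bond0
cat /proc/net/bonding/bond0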

It mounts the yellow servers via NFS

#df
192.168.1.101:/storage/share   25T  3.9T   21T  16%  /nfs/192-168-1-101
192.168.1.102:/storage/share   8.1T  205G  7.9T   3% /nfs/192-168-1-102
192.168.1.103:/storage/share   8.3T  4.4T  4.0T  52% /nfs/192-168-1-103

The ZFS has the following structure:

#sudo zpool status
        NAME                                  STATE     READ WRITE CKSUM
        storage                               ONLINE       0     0     0
          raidz1-0                            ONLINE       0     0     0
            /nfs/192-168-1-101/file.img       ONLINE       0     0     0
            /nfs/192-168-1-102/file.img       ONLINE       0     0     0
            /nfs/192-168-1-103/file.img       ONLINE       0     0     0

or:

zpool create -f storage raidz /nfs/192-168-1-101/file.img \
                              /nfs/192-168-1-102/file.img \
                              /nfs/192-168-1-103/file.img

The network speed will be the limiting factor of this system; I don’t expect the I/O speed to go beyond 375MB/s (125MB/s x 3). Also, since it is a file-based ZFS (the ZFS on the blue server is backed by files, not disks), the overall performance will take a further hit.

#Write speed
time dd if=/dev/zero of=/storage/data/file.out bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.6717 s, 285 MB/s
#Read speed
time dd if=/storage/data/file.out of=/dev/null
1000+0 records in
1000+0 records out
1048576000 bytes (1.0 GB) copied, 3.9618 s, 265 MB/s

Both the read and write speeds are roughly 75% of the maximum bandwidth, which is not bad at all.

So I decided to take one of the yellow servers offline. Let’s see what happens:

#sudo zpool status
        NAME                                  STATE     READ WRITE CKSUM
        storage                               ONLINE       0     0     0
          raidz1-0                            ONLINE       0     0     0
            /nfs/192-168-1-101/file.img       ONLINE       0     0     0
            /nfs/192-168-1-102/file.img       ONLINE       0     0     0
            /nfs/192-168-1-103/file.img       UNAVAIL      0     0     0 cannot open

And the pool is still functioning, that’s pretty cool!


Here are some notes that will affect the overall performance:

  • The quality of the Ethernet card matters, including PCIe vs. PCI, 1 lane vs. 16 lanes, total throughput, etc.
  • The network traffic. Is the switch busy?
  • How are these servers connected together? One big switch, or multiple switches bridged together? If they are bridged, the limitation will be the uplink cable between the switches, which is 125MB/s on a gigabit network.

Again, this is for experimental purposes only. If you decide to put this in a production environment, do so at your own risk. Have fun!


A friend of mine wanted to try ZFS on CentOS 7, so I decided to write a guide for him. The following instructions have been well tested on CentOS 7.

Before you decide to put ZFS in a production use, you should be aware of the following:

  • ZFS was originally designed to work with Solaris and BSD systems. Because of legal and licensing issues, ZFS cannot be shipped with Linux.
  • Since ZFS is open source, some developers ported ZFS to Linux and made it run at the kernel level via DKMS. This works great as long as you don’t update the kernel; otherwise ZFS may not be loaded with the new kernel.
  • In a ZFS/Linux environment, it is a bad idea to update the system automatically.
  • For some odd reason, ZFS/Linux only behaves well on server-grade or gaming-grade computers. Do not run ZFS/Linux on entry-level computers.

Instructions

By default, ZFS is not available in the standard CentOS repository. We will need to include some 3rd party repositories here.

sudo rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm
sudo rpm -Uvh http://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo rpm -Uvh https://forensics.cert.org/cert-forensics-tools-release-el7.rpm
sudo rpm -Uvh http://download.zfsonlinux.org/epel/zfs-release.el7_3.noarch.rpm

sudo yum update -y
sudo yum groupinstall -y "Development Tools" "Development Libraries" "Additional Development"
sudo yum install -y kernel-devel kernel-headers

It is very likely that the system will install a new kernel. You may want to reboot the computer before installing ZFS.

sudo reboot

Please make sure that the system does not update automatically. If you need to update the system, please exclude the kernel and related modules from the update.

sudo nano /etc/yum.conf 
exclude=kernel*
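Alternatively, if you prefer not to edit yum.conf, the versionlock plugin can pin the kernel packages. A sketch, assuming the plugin is available in your repositories:

#Install the versionlock plugin and pin the currently installed kernel packages
sudo yum install -y yum-plugin-versionlock
sudo yum versionlock kernel kernel-devel kernel-headers

#Review or remove the locks later
sudo yum versionlock list
#sudo yum versionlock clear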

Now you are on the latest kernel. Let’s install the ZFS:

sudo yum install -y zfs
sudo /sbin/modprobe zfs

Now you can create a simple striped ZFS pool. A striped pool gives you the best performance and zero data protection. When referencing the disks, we don’t want to use /dev/sd*; instead, we want to use the device id directly, e.g., /dev/disk/by-id/wwn-0x8000c8004e8ac11a

ls /dev/disk/by-id/


lrwxrwxrwx 1 root root  9 Jan  3 21:49 wwn-0x8000c8004e8ac11a -> ../../sde
lrwxrwxrwx 1 root root  9 Jan  3 21:49 wwn-0x8000c8008ad0a22d -> ../../sdd
lrwxrwxrwx 1 root root  9 Jan  3 21:49 wwn-0x8000c8008b4f6338 -> ../../sda
lrwxrwxrwx 1 root root  9 Jan  3 21:49 wwn-0x8000c8008b52144c -> ../../sdc
lrwxrwxrwx 1 root root  9 Jan  3 21:49 wwn-0x8000c8008b59a553 -> ../../sdb

Once you have identified the hard disks, we can create a simple striped pool. This will create a ZFS file system under /storage. You can replace storage with any name you like.

#We are going to create a ZFS pool with three disks. You can add more if you like. For a striped design, the more disks, the faster the I/O speed.
zpool create -f storage /dev/disk/by-id/device1 /dev/disk/by-id/device2 /dev/disk/by-id/device3 

storage is like a big umbrella. Under this umbrella, we will need to create multiple “partitions” for storing data.

zfs create storage/mydata

If you have a fast CPU like an i7, you may want to turn on compression. This reduces the amount of data written to disk, and it will improve the overall performance.

sudo zfs set compression=lz4 storage
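Later on, you can check how much space the compression actually saves; this is just an optional sanity check using the dataset names from the example above:

#Show the compression setting and the achieved compression ratio
zfs get compression,compressratio storage storage/mydata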

Finally we want to change the ownership and the permissions

#Assuming that you are part of the wheel group
sudo chown -R root:wheel /storage
sudo chmod -R g+rw /storage

Now, run df and you should be able to see the ZFS in your system.

#df -h
Filesystem                   Size  Used Avail Use% Mounted on
storage                      6.8T  128K  6.8T   1% /storage
storage/mydata                13T  6.1T  6.8T  48% /storage/mydata

You can monitor the health of the ZFS system.

#sudo zpool status



  pool: storage
 state: ONLINE
config:

        NAME                        STATE     READ WRITE CKSUM
        storage                     ONLINE       0     0     0
            wwn-0x8000c8004e8ac11a  ONLINE       0     0     0
            wwn-0x8000c8008ad0a22d  ONLINE       0     0     0
            wwn-0x8000c8008b4f6338  ONLINE       0     0     0
            wwn-0x8000c8008b52144c  ONLINE       0     0     0


errors: No known data errors

For some very odd reason, ZFS will not be loaded automatically. We want to make sure that ZFS is loaded after a reboot.

#sudo nano /etc/crontab

#Add the following:
@reboot         root    sleep 10; zpool import -a;
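Depending on your ZFS on Linux version, the package may also ship systemd units that handle this for you. Here is a sketch you can try instead of (or in addition to) the cron job; verify that the unit names exist on your system first:

#Check which ZFS units are available
systemctl list-unit-files | grep zfs

#Enable the import/mount units so the pool comes back at boot
sudo systemctl enable zfs-import-cache.service zfs-mount.service zfs.target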

Now you can test the ZFS by running dd or copying a big file to it. If you are not happy with the configuration, you can always destroy the pool and re-create it:

sudo zpool destroy storage


Have fun!


Let’s all agree on this fact: ZFS is foreign to Linux. It is not native. You can’t expect that ZFS on Linux will run as smoothly as on FreeBSD or Solaris. Having used ZFS on Linux since 2013 (and ZFS on FreeBSD since 2009), I’ve noticed that ZFS does not like Linux (well, at least RHEL 7). Here are a few examples:

  • ZFS is not loaded at boot time. You will need to start it manually or load it via cron. Good luck if you have other services (like Apache, MySQL, NFS, or even users’ home directories) that depend on ZFS.
  • Every single time you update the kernel, ZFS will not work after the reboot without some manual work. What if the system runs updates automatically, and one day a power failure makes your server reboot into a new kernel? Your system will not be able to mount your ZFS volume. If you integrate ZFS with other applications such as a web server, database or network drive, oh well, good luck, and I hope you catch the problem before receiving thousands of emails and calls from your end users.
  • If you exclude the kernel from updates (/etc/yum.conf), you will eventually run into trouble, because there are tons of other packages that require the latest kernel. In other words, running yum update -y will fail. You will need to run yum update --skip-broken, which means you will miss many of the latest packages. Here is an example:
    --> Finished Dependency Resolution
    Error: Package: hypervvssd-0-0.29.20160216git.el7.x86_64 (base)
               Requires: kernel >= 3.10.0-384.el7
               Installed: kernel-3.10.0-327.el7.x86_64 (@anaconda)
                   kernel = 3.10.0-327.el7
               Installed: kernel-3.10.0-327.22.2.el7.x86_64 (@updates)
                   kernel = 3.10.0-327.22.2.el7
    Error: Package: hypervfcopyd-0-0.29.20160216git.el7.x86_64 (base)
               Requires: kernel >= 3.10.0-384.el7
               Installed: kernel-3.10.0-327.el7.x86_64 (@anaconda)
                   kernel = 3.10.0-327.el7
               Installed: kernel-3.10.0-327.22.2.el7.x86_64 (@updates)
                   kernel = 3.10.0-327.22.2.el7
    Error: Package: hypervkvpd-0-0.29.20160216git.el7.x86_64 (base)
               Requires: kernel >= 3.10.0-384.el7
               Installed: kernel-3.10.0-327.el7.x86_64 (@anaconda)
                   kernel = 3.10.0-327.el7
               Installed: kernel-3.10.0-327.22.2.el7.x86_64 (@updates)
                   kernel = 3.10.0-327.22.2.el7
     You could try using --skip-broken to work around the problem
     You could try running: rpm -Va --nofiles --nodigest
    
  • If you are running a stable Linux distribution like RHEL 7, you can load a more recent kernel like 4.x by installing the kernel-ml package. However, don’t expect ZFS to work with version 4:
    Loading new spl-0.6.5.9 DKMS files...
    Building for 4.11.2-1.el7.elrepo.x86_64
    Building initial module for 4.11.2-1.el7.elrepo.x86_64
    configure: error: unknown
    Error! Bad return status for module build on kernel: 4.11.2-1.el7.elrepo.x86_64 (x86_64)
    Consult /var/lib/dkms/spl/0.6.5.9/build/make.log for more information.
    
    

Running ZFS on Linux is like putting a giraffe in the wild in Alaska. It is just not the right thing to do. Unfortunately, there are so many things that are only available on Linux, so we have to live with it. It is just like FUSE (Filesystem in Userspace): many people hesitate to run their file systems in userspace instead of at the kernel level, but hey, look how many people are happy with GlusterFS, a distributed file system that lives on FUSE! Personally I just think it is not the right thing to do, especially in an enterprise environment. Running a production file system in userspace, seriously?

Anyway, if you run into trouble after upgrading your Linux kernel (and you almost had a heart attack when you thought your data might be lost), you have two choices:

  1. Simply boot into the previous working kernel if you need to get your data back quickly (see the sketch after this list). However, keep in mind that this creates two problems:
    • Since you have already updated the system with the new kernel and new packages, the new packages may not work with the old kernel, which can give you an extra headache.
    • Unless you manually override the kernel boot order (boot loader config), you may run into the same trouble on the next boot.
  2. If you want a more “permanent” fix, you will need to rebuild the DKMS ZFS and SPL modules. See below for the instructions. Keep in mind that you will have the same problem again when the kernel receives a new update.
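For option 1, here is a rough sketch of pinning the previous kernel as the default on CentOS 7. It assumes GRUB_DEFAULT=saved (the CentOS 7 default) and a BIOS install (on UEFI the config is /etc/grub2-efi.cfg); the entry index is just an example, so list your menu entries first and pick the one that is known to work:

#List the installed kernel menu entries (index starts at 0)
sudo awk -F\' '/^menuentry / {print i++ " : " $2}' /etc/grub2.cfg

#Make the previous (working) kernel the default, e.g., entry 1, then reboot
sudo grub2-set-default 1
sudo reboot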

You’ve tried to load ZFS and realized that it is no longer available:

#sudo zpool import
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.

#sudo /sbin/modprobe zfs
modprobe: FATAL: Module zfs not found.

You may want to check the dkms status. Write down the version number. In my case, it is 0.6.5.9

#sudo dkms status
spl, 0.6.5.9, 3.10.0-327.28.3.el7.x86_64, x86_64: installed (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!)
spl, 0.6.5.9, 3.10.0-514.2.2.el7.x86_64, x86_64: installed (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!)
zfs, 0.6.5.9, 3.10.0-327.28.3.el7.x86_64, x86_64: installed (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!) (WARNING! Diff between built and installed module!)

Before running the following commands, make sure that you know what you are doing.


#Make sure that you reboot to the kernel you want to fix.
#Find out what is the current kernel
uname -a
Linux 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

#In my example, it is:
3.10.0-514.2.2.el7.x86_64

#Now, let's get into the fun part. We will remove them and reinstall them.
#Don't forget to match your version; in my case, my version is: 0.6.5.9
sudo dkms remove zfs/0.6.5.9 --all
sudo dkms remove spl/0.6.5.9 --all
sudo dkms --force install spl/0.6.5.9
sudo dkms --force install zfs/0.6.5.9

#or you can run these commands in one line, so that you don't need to wait:
sudo dkms remove zfs/0.6.5.9 --all; sudo dkms remove spl/0.6.5.9 --all; sudo dkms --force install spl/0.6.5.9; sudo dkms --force install zfs/0.6.5.9;

And we will verify the result.

#sudo dkms status
spl, 0.6.5.9, 3.10.0-514.2.2.el7.x86_64, x86_64: installed
zfs, 0.6.5.9, 3.10.0-514.2.2.el7.x86_64, x86_64: installed

Finally we can start the ZFS again.

sudo /sbin/modprobe zfs

Your ZFS pool should be back. You can verify it by rebooting your machine. Notice that Linux may not automatically mount the ZFS volumes; you may want to mount them manually or via a cron job.

Here is how to mount the ZFS volumes manually.

sudo zpool import -a

You may want to remove all of the old kernels too.

sudo package-cleanup --oldkernels --count=1 -y


Today I was trying to install ZFS on a CentOS 7 box. Typically, after rebooting the computer, the ZFS module will be loaded. However, it didn’t load in my case.

Failed to load ZFS module stack.
Load the module manually by running 'insmod /zfs.ko' as root.

So I tried to load the module:

#sudo modprobe zfs
modprobe: ERROR: could not insert 'zfs': Required key not available.

It turns out this is a newer machine with UEFI, and the error has to do with Secure Boot: when Secure Boot is enabled, the kernel refuses to load unsigned modules such as zfs.ko. After I rebooted the machine, went into the UEFI/BIOS menu, and disabled the Secure Boot feature, everything was working again.
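By the way, you can confirm whether Secure Boot is the culprit from within Linux before touching the firmware menu; a quick check, assuming the mokutil package is installed:

#Check the current Secure Boot state
sudo mokutil --sb-state
#Example output: SecureBoot enabled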

Have fun with ZFS.


ZFS is the next-generation file system. Unfortunately, it won’t be shipped with Linux because of legal/licensing issues. Fortunately, it is possible to install it (ZFS on Linux) with a few commands. Since 2013, I have set up a number of Linux (CentOS/RHEL) servers with ZFS for use in a high-traffic production environment. They include high-end commercial-grade servers (Xeon-based + ECC memory), gaming-quality desktops (i7-based) and entry-level consumer-grade computers (i3). In this article, I will discuss what I have learned from my experience.

Warning on ZFS on Linux

ZFS on Linux is not a rock-solid, reliable and stable way to implement ZFS, because it has a very important (and impossible) requirement: the system can never be updated and rebooted. If you cannot meet this requirement (and obviously you can’t), be prepared to spend tons of hours fixing the problem and getting your data back. See how I fixed the mess created by ZFS on Linux here. If you prefer a rock-solid and reliable way, you have to go with *BSD or Solaris.

Summary

Life is short. If you don’t want to waste your time going through the entire article, here is my advice: use FreeBSD (or *BSD) if possible. Using ZFS on Linux is like putting a giraffe in the wild in Alaska. It is not going to work well. However, we may want to stick with one operating system for our servers for various reasons. Therefore, I’ve come up with some advice for you if you really want to run ZFS on Linux:

  • Use a commercial-grade server when possible. A bare-bones entry-level Dell PowerEdge T110 II (starting from US$450) is sufficient to run ZFS as a low-traffic, light-load, nightly backup server. A consumer-grade computer is not recommended for ZFS/Linux. If you really need one, get a computer with gaming-quality components and always back up the data to a different server.
  • The Linux kernel plays an important role for ZFS. Try to use v3 (e.g., RHEL 7) when possible. Using ZFS with v2.6 (e.g., RHEL 6) may cause some unexpected problems on non-commercial-grade hardware. So far I cannot make version 4 (e.g., installed via kernel-ml) work with ZFS on RHEL 7:
    Loading new spl-0.6.5.9 DKMS files...
    Building for 4.11.2-1.el7.elrepo.x86_64
    Building initial module for 4.11.2-1.el7.elrepo.x86_64
    configure: error: unknown
    Error! Bad return status for module build on kernel: 4.11.2-1.el7.elrepo.x86_64 (x86_64)
    Consult /var/lib/dkms/spl/0.6.5.9/build/make.log for more information.
    
  • Set up your ZFS with the hard drive identifier (e.g., /dev/disk/by-id/someid), not the generic device name (e.g., /dev/sda).
  • You may lose some storage space (less than 1%) compared to the same setup on FreeBSD, but the amount is trivial.
  • If you have already installed ZFS on Linux, try to exclude the kernel from system updates. Otherwise the system will not load ZFS after the next kernel update and reboot, and it will take some extra work to get ZFS running again.
  • Some Linux distributions such as CentOS 7 will not load ZFS at boot time. You can solve this problem with a cron job. If you have other services (e.g., MySQL, NFS, Apache) that depend on ZFS, you will need to restart them.
  • Bookmark this ZFS emergency recovery guide. Trust me, you never know when your ZFS on Linux will decide to stop working.

Do not update the kernel automatically

I’ve written an article on how to rescue your ZFS file system after updating the kernel. Please click here for details.

ZFS is not native on Linux. The whole idea of ZFS on Linux is nothing more than a bunch of modules being injected into the kernel, so that the kernel will load ZFS at boot. This is a fantastic idea because it does not introduce the performance problems of ZFS-FUSE (running in userland, i.e., very slow). However, there is a potential problem here. This “injection” only happens when a ZFS module (zfs-kmod) needs to be installed or updated. During this process, the system downloads the latest copy of zfs-kmod and injects it into the currently running kernel. See the problem here?

That being said, running root (/) on ZFS in Linux is a very, very bad idea. You will not be able to access anything when ZFS is not available at the kernel level.

So we have four different situations here after hitting the update command (will ZFS be available after the reboot?):

                             Kernel has new update    Kernel has no update
zfs-kmod has new update      Yes                      Yes
zfs-kmod has no update       No                       Yes

In general, if you really need to update the kernel, you will need to update the kernel first, reboot into the new kernel (ZFS will be missing), and re-run the process so that the ZFS module gets injected into the new kernel. Some people may recommend uninstalling zfs-kmod and reinstalling it again. Unless you have a very strong reason to use the latest kernel (e.g., you’ve got plenty of spare time), I won’t recommend doing it, because the whole process is a pain.
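If you do go down that road, the sequence looks roughly like this. This is only a sketch; dkms autoinstall is the generic DKMS command that rebuilds every registered module (including SPL/ZFS) for the running kernel:

#1. Update the kernel and reboot into it (ZFS will be missing at this point)
sudo yum update -y kernel kernel-devel kernel-headers
sudo reboot

#2. After the reboot, rebuild the DKMS modules for the new kernel and reload ZFS
sudo dkms autoinstall
sudo /sbin/modprobe zfs
sudo zpool import -a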

Another thing you can do is disable auto-updates and only update the system when there is a new update for both the kernel and zfs-kmod. Then you can update the kernel first, reboot, and update zfs-kmod after the reboot. However, keep in mind that you will run into problems eventually. Many packages depend on a newer kernel; if you try to update the system, it will complain because you need to update the kernel first before updating those packages. You can get around this by skipping the broken packages (yum update --skip-broken).

In my settings, I simply exclude the kernel from the update. That way I only need to work with one kernel, and I know that that particular kernel knows how to handle ZFS module.

sudo nano /etc/yum.conf 

exclude=kernel*

In case you are running into trouble, i.e., ZFS is missing in the latest kernel, you can try doing the following:

Before running the following commands, make sure that you know what you are doing.


#Make sure that you reboot to the kernel you want to fix.
#Find out what is the current kernel
uname -a
Linux 3.10.0-514.2.2.el7.x86_64 #1 SMP Tue Dec 6 23:06:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

#In my example, it is:
3.10.0-514.2.2.el7.x86_64


#Basically we want to remove the following files:
ls -al /lib/modules/your_new_kernel/extra
-rw-r--r-- 1 root root 344K Dec 12 15:58 splat.ko
-rw-r--r-- 1 root root 167K Dec 12 15:58 spl.ko
-rw-r--r-- 1 root root  14K Dec 12 16:02 zavl.ko
-rw-r--r-- 1 root root  75K Dec 12 16:02 zcommon.ko
-rw-r--r-- 1 root root 2.2M Dec 12 16:02 zfs.ko
-rw-r--r-- 1 root root 130K Dec 12 16:02 znvpair.ko
-rw-r--r-- 1 root root  34K Dec 12 16:02 zpios.ko
-rw-r--r-- 1 root root 324K Dec 12 16:02 zunicode.ko

#If you have no extra modules installed other than ZFS and SPL, you can run the following:
sudo rm -Rf /lib/modules/*/extra/* 

#Otherwise just remove the files one by one.


#And we want to do the same thing to the weak-updates.
ls -al /lib/modules/your_new_kernel/weak-updates

drwxr-xr-x. 2 root root 4.0K Sep 16 10:58 .
drwxr-xr-x. 7 root root 4.0K Sep 16 10:58 ..
lrwxrwxrwx  1 root root   54 Sep 16 10:56 splat.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/splat.ko
lrwxrwxrwx  1 root root   52 Sep 16 10:56 spl.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/spl.ko
lrwxrwxrwx  1 root root   53 Feb 22  2016 zavl.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/zavl.ko
lrwxrwxrwx  1 root root   56 Feb 22  2016 zcommon.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/zcommon.ko
lrwxrwxrwx  1 root root   52 Sep 16 10:58 zfs.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/zfs.ko
lrwxrwxrwx  1 root root   56 Sep 16 10:58 znvpair.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/znvpair.ko
lrwxrwxrwx  1 root root   54 Feb 22  2016 zpios.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/zpios.ko
lrwxrwxrwx  1 root root   57 Feb 22  2016 zunicode.ko -> /lib/modules/2.6.32-573.18.1.el6.x86_64/extra/zunicode.ko



#If you have no extra modules installed other than ZFS and SPL, you can run the following:
sudo rm -Rf /lib/modules/*/weak-updates/*


#Otherwise just remove the files one by one.


#Now, let's get into the fun part. We will remove them and reinstall them.
#Don't forget to match your version.
sudo dkms remove zfs/0.6.5.8 --all
sudo dkms remove spl/0.6.5.8 --all
sudo dkms --force install spl/0.6.5.8
sudo dkms --force install zfs/0.6.5.8

And we will verify the result.

#sudo dkms status
spl, 0.6.5.8, 3.10.0-514.2.2.el7.x86_64, x86_64: installed
zfs, 0.6.5.8, 3.10.0-514.2.2.el7.x86_64, x86_64: installed
zfs, 0.6.5.8, 3.10.0-327.28.3.el7.x86_64, x86_64: installed-weak from 3.10.0-514.2.2.el7.x86_64

The Kernel Version Matters

The kernel version does matter, and I would avoid using version 2.6 or below if you don’t have professional-grade hardware, such as a Xeon CPU. Here is my summary:

Hardware                                                             Linux Kernel (v2.6)   Linux Kernel (v3)   FreeBSD 9 & 10

Dell PowerEdge T110 II (Intel Xeon E3-1240 V2, 8GB memory, US$450)   Stable                Stable              Stable
Dell PowerEdge T320 (Intel Xeon E5-2430, 64GB memory, US$2,000)      Stable                Stable              Stable
Gaming Quality Desktop (Intel i7-4770, 32GB memory, US$900)          Unstable              Stable              Stable
Consumer Grade Desktop (Intel i3-540, 8GB memory, US$500)            Unstable              Stable              Stable

However, it doesn’t mean that you should always use the latest kernel. Remember one thing: always keep a copy of the previous kernel before switching to the latest one. You never know whether ZFS will work with the latest one or not. For example, I had big trouble getting ZFS to work with 2.6.32-573.7.1.el6.x86_64, which was the latest kernel available on CentOS 6.7 (as of Oct 26, 2015). I ended up switching the system back to 2.6.32-573.3.1.el6.x86_64 (the -1 kernel). So always test the system before making the switch.

The Hard Drive Identifier

Set up your ZFS with the unique, unchangeable hard drive identifier (e.g., /dev/disk/by-id/wwn-0x1234c567890d0aaa). Do not use the generic device name (e.g., /dev/sda). When you reboot the system, the generic device name (/dev/sda) may change, and that is a problem for ZFS.

For example, when RHEL 7 names the hard drives, it names the drives that are attached directly to the motherboard first (these include USB flash drives, SD cards, etc.). After that, it names the hard drives that are attached to the PCIe RAID card. If you boot the computer with a USB flash drive attached that was not present when you set up the ZFS, this small change is enough to mess up your pool.

Here is an example:

History for 'storage':
zpool create -f storage raidz /dev/disk/by-id/wwn-0x5000c500206e46d4 \
                              /dev/disk/by-id/wwn-0x5000c500205eba0d \
                              /dev/disk/by-id/wwn-0x50014ee25a9074e2 \
                              /dev/disk/by-id/wwn-0x50024e9001c19fb2

So far I have only noticed this problem with low-end / consumer-grade motherboards. It is not a problem on FreeBSD, because FreeBSD is smart enough to re-map the old device names.
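If you have already created a pool with the generic device names, you can usually switch it to the by-id names without rebuilding the pool. A sketch, assuming a pool named storage and that nothing is using the pool at the time:

#Export the pool, then re-import it using the persistent device names
sudo zpool export storage
sudo zpool import -d /dev/disk/by-id storage
sudo zpool status storage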

The Stability

For some odd reason, ZFS can become unstable or even unavailable when the I/O is heavy:

  pool: storage
 state: DEGRADED
  scan: none requested
status: One or more devices could not be opened.  Sufficient replicas exist for
        the pool to continue functioning in a degraded state.
config:

        NAME                        STATE     READ WRITE CKSUM
        storage                     DEGRADED     0     0     0
          raidz1-0                  DEGRADED     0     0     0
            wwn-0x5000c500206e46d4  ONLINE       0     0     0
            wwn-0x5000c500205eba0d  ONLINE       0     0     0
            wwn-0x50014ee25a9074e2  ONLINE       0     0     0
            wwn-0x50024e9001c19fb2  UNAVAIL      0     0     0

This kind of problem happens mainly on low-end consumer-grade computers with older kernels. Once I upgraded the kernel to a newer version, the problem was gone; no hardware change was needed. Again, I’ve never experienced this kind of problem since FreeBSD 9. The only explanation I can think of is that the older Linux kernel does not support ZFS and low-end hardware very well.

Load ZFS at Boot

Some Linux variants such as CentOS 7 will not load ZFS at boot (in my case, my kernel is 3.10.0-327.28.3.el7.x86_64). You will need to load ZFS via a cron job. What if the ZFS pool contains files that are required by some service, e.g., your database or web server files live on ZFS? You will need to restart the service after loading the ZFS pool. Here is an example:

sudo nano /etc/crontab

#Example 1: Load all available ZFS pools
@reboot         root    sleep 20; zpool import -a;

#Example 2: Load all ZFS pools first, then restart the Apache, MySQL and NFS services
@reboot         root    sleep 20; zpool import -a; sleep 15; systemctl restart httpd.service && systemctl restart mariadb.service && systemctl restart nfs-server;
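If the ZFS systemd units do load correctly on your setup, another option is to make the dependent services wait for the ZFS mounts instead of restarting them from cron. A sketch of a drop-in override (httpd is just an example service):

#Create a drop-in override for the service
sudo systemctl edit httpd.service

#Add the following in the editor that opens:
[Unit]
Requires=zfs-mount.service
After=zfs-mount.service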

And yes, running ZFS on Linux is really a pain. Have fun!

–Derrick


I always wanted to find out the performance differences among different ZFS types, such as mirror, RAIDZ, RAIDZ2, RAIDZ3, striped, two RAIDZ vdevs vs. one RAIDZ2 vdev, etc. So I decided to create an experiment to test these ZFS types. Before we talk about the test results, let’s go over some background information, such as the details of each design and the hardware information.

Background

Here is the machine I used for the experiment. It is a consumer-grade desktop computer manufactured back in 2014 (which was 3 years ago):

CPU: Intel(R) Core(TM) i7-4770 CPU @ 3.40GHz / quad cores / 8 threads
OS: CentOS Linux release 7.3.1611 (Core)
Kernel: Linux 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Memory: 20 GB (2GB x 4)
Hard drives: 5 TB x 8 
(Every hard drive is 4k sectors, non-SSD, consumer grade, connected via a PCI-e x 16 raid card with SAS interface)
System Settings: Everything is system default. Nothing has been done to the kernel configuration.

Also, I tried to keep each test simple. Therefore I didn’t do anything special:

zpool create -f myzpool (different settings go here...)
zfs create myzpool/data

To optimize the I/O performance, the block size of the zpool is based on the physical sector size of the hard drives. In my case, all of the hard drives have 4k (4096-byte) sectors, which translates to 2^12; therefore, the ashift value of the zpool is 12.

zdb | grep ashift
ashift: 12

To measure the write performance, I generate a zero-filled file of 41GB and write it to the zpool directly. To measure the read performance, I read the file back and send the output to /dev/null. Notice that the file size is very large (41GB) so that it does not fit in the ARC cache (50% of the system memory, i.e., 10GB). Also notice that the dd block size matches the physical sector size of the hard drives.

#To test the write performance:
dd if=/dev/zero of=/myzpool/data/file.out bs=4096 count=10000000

#To test the read performance:
dd if=/myzpool/data/file.out of=/dev/null bs=4096

FYI, if the block size is not specified, the result can be very different:

#Using default block size:
dd if=/myzpool/data/file.out of=/dev/null
40960000000 bytes (41 GB) copied, 163.046 s, 251 MB/s

#Using native block size:
dd if=/myzpool/data/file.out of=/dev/null bs=4096
40960000000 bytes (41 GB) copied, 58.111 s, 705 MB/s

After each test, I destroyed the zpool and created a different one. This ensures that the environment factors (such as hardware and OS) stay the same. Here is the test result. If you want to learn more about each design, such as the exact command I used for each test, the corresponding material will be available in the later section.

Notice that I used eight 5TB hard drives (total: 40TB) in this test. Typically a 5TB hard drive can store about 4.5TB of data, around 86%-90% of the advertised number, depending on which OS you are using. For example, if we use the striped design, which gives the maximum possible storage capacity in ZFS, the usable space will be 8 x 5TB x 90% = 36TB. Therefore, the following percentages are based on 36TB rather than 40TB.

You may notice that I use 10 disks in each diagram, while I use only 8 disks in this article. That’s because the diagrams are from my first edit. At that time I used a relatively old machine, which may not reflect a modern ZFS design. The hardware and test methods used in the second edit are better, although both edits lead to the same conclusion.

Test Result

(Sorted by speed)

ZFS Type (Disk Arrangement)   Write Speed (MB/s)   Time to Write 41GB   Read Speed (MB/s)   Time to Read 41GB   Storage Capacity (Max: 36TB)   Disks Used for Parity
Striped (8)                   705                  58.111s              687                 59.6386s            36TB (100%)                    0
RAIDZ (4) x 2                 670                  61.1404s             680                 60.2457s            26TB (72%)                     2
RAIDZ2 (8)                    608                  67.3897s             673                 60.8205s            25TB (69%)                     2
RAIDZ (8)                     604                  67.8107s             631                 64.8782s            30TB (83%)                     1
RAIDZ3 (8)                    528                  77.549s              577                 70.9604s            21TB (58%)                     3
Mirror (2) x 4                473                  86.6451s             598                 68.4477s            18TB (50%)                     4
RAIDZ2 (4) x 2                414                  98.9698s             441                 92.963s             18TB (50%)                     4

Striped

In this design, we use all disks to store data (i.e., zero data protection), which maxes out our total usable space at 36TB.

#Command
zpool create -f myzpool hd1 hd2 \
                        hd3 hd4 \
                        hd5 hd6 \
                        hd7 hd8

#df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
myzpool         36T      0K     36T       0%  /myzpool 

#zpool status -v
        NAME        STATE     READ WRITE CKSUM
        myzpool     ONLINE       0     0     0
          hd1       ONLINE       0     0     0
          hd2       ONLINE       0     0     0
          hd3       ONLINE       0     0     0
          hd4       ONLINE       0     0     0
          hd5       ONLINE       0     0     0
          hd6       ONLINE       0     0     0
          hd7       ONLINE       0     0     0
          hd8       ONLINE       0     0     0

And here is the test result:

#Write Test
dd if=/dev/zero of=/myzpool/data/file.out bs=4096 count=10000000
40960000000 bytes (41 GB) copied, 58.111 s, 705 MB/s

#Read Test
dd if=/myzpool/data/file.out of=/dev/null bs=4096
40960000000 bytes (41 GB) copied, 59.6386 s, 687 MB/s

RAIDZ x 2

In this design, we split the disks into two groups. In each group, we store the data in a RAIDZ1 structure. This is similar to RAIDZ2 in terms of the number of parity disks, except that this design only tolerates one failed disk in each group (local scale), while RAIDZ2 tolerates ANY two failed disks overall (global scale). Since we use two disks for parity, the usable space drops from 36TB to 26TB.

#Command
zpool create -f myzpool raidz hd1 hd2 hd3 hd4 \
                        raidz hd5 hd6 hd7 hd8

#df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
myzpool         26T      0K     26T       0%  /myzpool 

#zpool status -v
        NAME        STATE     READ WRITE CKSUM
        myzpool     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            hd1     ONLINE       0     0     0
            hd2     ONLINE       0     0     0
            hd3     ONLINE       0     0     0
            hd4     ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            hd5     ONLINE       0     0     0
            hd6     ONLINE       0     0     0
            hd7     ONLINE       0     0     0
            hd8     ONLINE       0     0     0


And here is the test result:

#Write Test
dd if=/dev/zero of=/storage/data/file.out bs=4096 count=10000000
40960000000 bytes (41 GB) copied, 61.1401 s, 670 MB/s

#Read Test
dd if=/storage/data/file.out of=/dev/null bs=4096
40960000000 bytes (41 GB) copied, 60.2457 s, 680 MB/s


RAIDZ2

In this design, we use two disks for data protection. This allows up to two disks to fail without losing any data. The usable space drops from 36TB to 25TB.

#Command
zpool create -f myzpool raidz2 hd1 hd2 hd3 hd4 \
                               hd5 hd6 hd7 hd8

#df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
myzpool         25T     31K     25T       0%  /myzpool 

#zpool status -v
        NAME        STATE     READ WRITE CKSUM
        myzpool     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            hd1     ONLINE       0     0     0
            hd2     ONLINE       0     0     0
            hd3     ONLINE       0     0     0
            hd4     ONLINE       0     0     0
            hd5     ONLINE       0     0     0
            hd6     ONLINE       0     0     0
            hd7     ONLINE       0     0     0
            hd8     ONLINE       0     0     0

And here is the test result:

#Write Test
dd if=/dev/zero of=/storage/data/file.out bs=4096 count=10000000
40960000000 bytes (41 GB) copied, 67.3897 s, 608 MB/s

#Read Test
dd if=/storage/data/file.out of=/dev/null bs=4096
40960000000 bytes (41 GB) copied, 60.8205 s, 673 MB/s

RAIDZ1

In this design, we use one disk for data protection. This allows up to one disk to fail without losing any data. The usable space drops from 36TB to 30TB.

#Command
zpool create -f myzpool raidz hd1 hd2 hd3 hd4 \
                              hd5 hd6 hd7 hd8

#df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
myzpool         30T      0K     30T       0%  /myzpool 

#zpool status -v
        NAME        STATE     READ WRITE CKSUM
        myzpool     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            hd1     ONLINE       0     0     0
            hd2     ONLINE       0     0     0
            hd3     ONLINE       0     0     0
            hd4     ONLINE       0     0     0
            hd5     ONLINE       0     0     0
            hd6     ONLINE       0     0     0
            hd7     ONLINE       0     0     0
            hd8     ONLINE       0     0     0

And here is the test result:

#Write Test
dd if=/dev/zero of=/storage/data/file.out bs=4096 count=10000000
40960000000 bytes (41 GB) copied, 67.8107 s, 604 MB/s

#Read Test
dd if=/storage/data/file.out of=/dev/null bs=4096
40960000000 bytes (41 GB) copied, 64.8782 s, 631 MB/s


RAIDZ3

In this design, we use three disks for data protection. This allows up to three disks to fail without losing any data. The usable space drops from 36TB to 21TB.

#Command
zpool create -f myzpool raidz3 hd1 hd2 hd3 hd4 \
                               hd5 hd6 hd7 hd8

#df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
myzpool         21T     31K     21T     0%    /myzpool 

#zpool status -v
        NAME        STATE     READ WRITE CKSUM
        myzpool     ONLINE       0     0     0
          raidz3-0  ONLINE       0     0     0
            hd1     ONLINE       0     0     0
            hd2     ONLINE       0     0     0
            hd3     ONLINE       0     0     0
            hd4     ONLINE       0     0     0
            hd5     ONLINE       0     0     0
            hd6     ONLINE       0     0     0
            hd7     ONLINE       0     0     0
            hd8     ONLINE       0     0     0


And here is the test result:

#Write Test
dd if=/dev/zero of=/storage/data/file.out bs=4096 count=10000000
40960000000 bytes (41 GB) copied, 77.549 s, 528 MB/s

#Read Test
dd if=/storage/data/file.out of=/dev/null bs=4096
40960000000 bytes (41 GB) copied, 70.9604 s, 577 MB/s


Mirror

In this design, we use half of our disks for data protection, which makes our total usable spaces drop from 36 TB to 18 TB.

#Command
zpool create -f myzpool mirror hd1 hd2 \
                        mirror hd3 hd4 \
                        mirror hd5 hd6 \
                        mirror hd7 hd8

#df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
myzpool         18T     31K     18T       0%  /myzpool 

#zpool status -v
        NAME        STATE     READ WRITE CKSUM
        myzpool     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            hd1     ONLINE       0     0     0
            hd2     ONLINE       0     0     0
          mirror-1  ONLINE       0     0     0
            hd3     ONLINE       0     0     0
            hd4     ONLINE       0     0     0
          mirror-2  ONLINE       0     0     0
            hd5     ONLINE       0     0     0
            hd6     ONLINE       0     0     0
          mirror-3  ONLINE       0     0     0
            hd7     ONLINE       0     0     0
            hd8     ONLINE       0     0     0
          

And here is the test result:

#Write Test
dd if=/dev/zero of=/storage/data/file.out bs=4096 count=10000000
40960000000 bytes (41 GB) copied, 86.6451 s, 473 MB/s

#Read Test
dd if=/storage/data/file.out of=/dev/null bs=4096
40960000000 bytes (41 GB) copied, 68.4477 s, 598 MB/s


RAIDZ2 x 2

In this design, we split the disks into two groups. In each group, we store the data in a RAIDZ2 structure. Since we use two disks in each group for parity (four disks in total), the usable space drops from 36TB to 18TB.

#Command
zpool create -f myzpool raidz2 hd1 hd2 hd3 hd4 \
                        raidz2 hd5 hd6 hd7 hd8

#df -h
Filesystem     Size    Used   Avail Capacity  Mounted on
myzpool         18T      0K     18T       0%  /myzpool 

#zpool status -v
        NAME        STATE     READ WRITE CKSUM
        myzpool     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            hd1     ONLINE       0     0     0
            hd2     ONLINE       0     0     0
            hd3     ONLINE       0     0     0
            hd4     ONLINE       0     0     0
          raidz2-1  ONLINE       0     0     0
            hd5     ONLINE       0     0     0
            hd6     ONLINE       0     0     0
            hd7     ONLINE       0     0     0
            hd8     ONLINE       0     0     0


And here is the test result:

#Write Test
dd if=/dev/zero of=/storage/data/file.out bs=4096 count=10000000
40960000000 bytes (41 GB) copied, 98.9698 s, 414 MB/s

#Read Test
dd if=/storage/data/file.out of=/dev/null bs=4096
40960000000 bytes (41 GB) copied, 92.963 s, 441 MB/s


Summary

I am not surprised that the striped layout offers the fastest write speed and the maximum storage space. The only drawback is zero data protection. Unless you mirror the data at the server level (e.g., Hadoop), or the data is not important, I won’t recommend this design.

Personally I recommend going with striped RAIDZ, i.e., multiple RAIDZ vdevs, each with no more than 5 disks. The general ZFS recommendation is no more than 8 to 9 disks per vdev. Based on my experience, ZFS slows down once it has only about 30% free space left if there are too many disks in a single vdev.

So which design you should use? Here is my recommendation:

#Do you care your data?
No: Go with striped.
Yes: See below:

#How many disks do you have?
1:     ZFS is not for you.
2:     Mirror
3-5:   RAIDZ1
6-10:  RAIDZ1 x 2
11-15: RAIDZ1 x 3
16-20: RAIDZ1 x 4

And yes, you can pretty much forget about RAIDZ2, RAIDZ3 and mirror if you need speed and data protection together.

So, you may ask a question: what should I do if more than one hard drive fails? The answer is: you need to keep an eye on the health of your ZFS pool every day. The following will be your new friend:

sudo zpool status -v

or

sudo zpool status -v | grep 'state: ONLINE'

Simply write a program that checks the result of this command and sends you an email if anything goes wrong. You can include the program in your cron job and have it run daily or hourly. This is my version:

#!/bin/bash

result=`sudo zpool status -x`

if [[ $result != 'all pools are healthy' ]]; then
        echo "Something is wrong."
        #Do something here, such as sending yourself an alert email, e.g., via an HTTP request...
        /usr/bin/wget "http://example.com/send_email.php?subject=Alert&body=File%20System%20Has%20Problem" -O /dev/null > /dev/null
        exit 1;
fi
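And here is an example crontab entry to run the check hourly (the script path is just an example):

#/etc/crontab
0 *     * * *   root    /root/bin/check_zpool.sh > /dev/null 2>&1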

Enjoy ZFS.

–Derrick


First Edition: March 14, 2010
Last Revised: April 26, 2018

In this tutorial, I will show you how to improve the performance of your ZFS using affordable consumer-grade hardware (e.g., a Gigabit network card, standard SATA non-SSD hard drives, a consumer-grade motherboard, etc.).

Many people found a problem on their ZFS system. The speed is slow! It is slow to read or write files to the system. In this article, I am going to show you some tips on improving the speed of your ZFS file system.

Notice that this article was originally based on ZFS on FreeBSD. Although most concepts can be applied to Linux, you may want to check out these two articles: ZFS: Linux VS FreeBSD and How to fix the mess created by ZFS on Linux after update or reboot.

This article is about how to build a single node ZFS server. If you are interested in implementing multiple-nodes ZFS system / ZFS clusters, please check here for details.

Table of Contents
  1. A Good 64-bit CPU + Lots of Memory
  2. Tweaking the Boot Loader
  3. Use Disks with the Same Specifications
  4. Use a Powerful Power Supply
  5. Enable Compression
  6. Identify the Bottleneck
  7. Keep Your ZFS up to Date
  8. Understand How The ZFS Caching Works
  9. Two Drives Is Better than One Single Drive
  10. Use a Combination of Striped and RAIDZ If Speed Is Your First Concern.
  11. Distribute Your Free Space Evenly
  12. Make Your Zpool Expandable
  13. Backup the Data on a Different Machine, Not on the Same Zpool
  14. Rsync or ZFS Send?
  15. Did You Enable Dedup? Disable It!
  16. Reinstall Your Old System
  17. Connect Your Disks via High Speed Interface
  18. Do Not Use up All Spaces
  19. Use AHCI, Not IDE
  20. Refresh Your Zpool
  21. Great Performance Settings
  22. My Settings

Improve ZFS Performance: Step 1

A Good 64-bit CPU + Lots of Memory

Traditionally, we are told to use a less powerful computer for a file/data server. That’s not true for ZFS. ZFS is more than a file system. It uses a lot of resources to improve the input/output performance, such as compressing data on the fly. For example, suppose you need to write a 1GB file. Without compression, the system writes the entire 1GB to disk. With compression enabled, the CPU compresses the data first and then writes it to disk. Since the compressed data is smaller, it takes less time to write, which results in a higher write speed. The same applies to reading: ZFS can cache files in memory, which results in a higher read speed.

That’s why a 64-bit CPU and a large amount of memory are recommended. I recommend at least a quad-core CPU with 4GB of memory (personally I use Xeon and i7 machines with at least 20GB of memory).

Please make sure that the memory modules have the same frequency/speed. If you have to mix modules with different speeds, try to group the modules with the same speed together, e.g., Channel 1 and Channel 2: 1333 MHz, Channel 3 and Channel 4: 1600 MHz.

Let’s do a test. Suppose I am going to create a 10GB file of all zeros. Let’s see how long it takes to write it to disk:

#CPU: i7 920 (A 4 cores/8 threads CPU from 2009) + 24GB Memory + FreeBSD 9.3 64-bit
#dd if=/dev/zero of=./file.out bs=1M count=10k
10737418240 bytes transferred in 6.138918 secs (1749073364 bytes/sec)

That’s about 1.6GB/s! Why is it so fast? Because it is a zero-filled file: after compression, a 10GB file of zeros may shrink to only a few bytes. Since compression performance depends heavily on the CPU, a fast CPU matters.

Now, let’s do the same thing on a not-so-fast CPU:

#CPU: AMD 4600 (2 cores) + 5GB Memory + FreeBSD 9.3 64-bit
#dd if=/dev/zero of=./file.out bs=1M count=10k
10737418240 bytes transferred in 23.672373 secs (453584362 bytes/sec)

That’s only about 433MB/s. See the difference?

Improve ZFS Performance: Step 2

Tweaking the Boot Loader Parameters

Update: This section was written based on FreeBSD 8. As of today (February 12, 2017), the latest version is FreeBSD 11. I have noticed that my FreeBSD/ZFS works very stably even without any tweaking! In other words, you may skip this section if you are not using FreeBSD 8.2 or older.

Many people complain about ZFS stability issues, such as kernel panics, random reboots, and crashes when copying large files (> 2GB) at full speed. It may have something to do with the boot loader settings. By default, ZFS will not work smoothly without tweaking the system parameters. Even though FreeBSD (9.1 or earlier) claims that no tweaking is necessary on 64-bit systems, my FreeBSD server crashed very often when writing large files to the pool. After much trial and error, I figured out a few formulas. You can tweak your boot loader (/boot/loader.conf) using the following parameters. Notice that I have only tested the following on FreeBSD. Please let me know whether these tweaks work on other operating systems.

If you experience kernel panics, crashes or something similar, it could be a hardware problem, such as bad memory. I encourage you to test all memory modules with Memtest86+ first. I wish someone had told me about this a few years ago; it would have made my life a lot easier.

Warning: Make sure that you save a copy before doing anything to the boot loader. Also, if you experience anything unusual, please remove your changes and go back to the original settings.

#Assuming 8GB of memory

#If Ram = 4GB, set the value to 512M
#If Ram = 8GB, set the value to 1024M
vfs.zfs.arc_min="1024M"

#Ram x 0.5 - 512 MB
vfs.zfs.arc_max="3584M"

#Ram x 2
vm.kmem_size_max="16G"

#Ram x 1.5
vm.kmem_size="12G"

#The following were copied from FreeBSD ZFS Tunning Guide
#https://wiki.freebsd.org/ZFSTuningGuide

# Disable ZFS prefetching
# http://southbrain.com/south/2008/04/the-nightmare-comes-slowly-zfs.html
# Increases overall speed of ZFS, but when disk flushing/writes occur,
# system is less responsive (due to extreme disk I/O).
# NOTE: Systems with 4 GB of RAM or more have prefetch enabled by default.
vfs.zfs.prefetch_disable="1"

# Decrease ZFS txg timeout value from 30 (default) to 5 seconds.  This
# should increase throughput and decrease the "bursty" stalls that
# happen during immense I/O with ZFS.
# http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007343.html
# http://lists.freebsd.org/pipermail/freebsd-fs/2009-December/007355.html
# default in FreeBSD since ZFS v28
vfs.zfs.txg.timeout="5"

# Increase number of vnodes; we've seen vfs.numvnodes reach 115,000
# at times.  Default max is a little over 200,000.  Playing it safe...
# If numvnodes reaches maxvnode performance substantially decreases.
kern.maxvnodes=250000

# Set TXG write limit to a lower threshold.  This helps "level out"
# the throughput rate (see "zpool iostat").  A value of 256MB works well
# for systems with 4 GB of RAM, while 1 GB works well for us w/ 8 GB on
# disks which have 64 MB cache.

# NOTE: in v27 or below , this tunable is called 'vfs.zfs.txg.write_limit_override'.
vfs.zfs.write_limit_override=1073741824

Don’t forget to reboot your system after making any changes. After changing to the new settings, the writing speed improves from 60MB/s to 80MB/s, sometimes it even goes above 110MB/s! That’s a 33% improvement!
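After the reboot, you can verify that the loader picked up the new values; these are standard FreeBSD sysctls that are available once the ZFS module is loaded:

#Verify the current values
sysctl vfs.zfs.arc_min vfs.zfs.arc_max
sysctl vm.kmem_size vm.kmem_size_max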

By the way, if you find that the system still crashes often, the problem could be an unclean file system.

After a system crash, some file links may be broken (e.g., the system sees the file entry but is unable to locate the file). Usually FreeBSD will automatically run fsck after the crash. However, it will not fix the problem for you. In fact, there is no way to clean up the file system while the system is running (because the partition is mounted). The only way to clean up the file system is to enter Single User Mode (a reboot is required).

After you enter single user mode, make sure that each partition is checked and cleaned. For example, here is my df result:

Filesystem    Size    Used   Avail Capacity  Mounted on
/dev/ad8s1a   989M    418M    491M    46%    /          
devfs         1.0k    1.0k      0B   100%    /dev       
/dev/ad8s1e   989M     23M    887M     3%    /tmp       
/dev/ad8s1f   159G     11G    134G     8%    /usr       
/dev/ad8s1d    15G    1.9G     12G    13%    /var        

Try running the following commands:

#-y: All yes
#-f: Force
fsck -y -f /dev/ad8s1a
fsck -y -f /dev/ad8s1d
fsck -y -f /dev/ad8s1e
fsck -y -f /dev/ad8s1f

These commands will clean up the affected file systems.

After the cleanup is done, type reboot and let the system boot back into normal mode.

Improve ZFS Performance: Step 3

Use disks with the same specifications

A lot of people may not realize the importance of using exactly the same hardware. Mixing disks of different models/manufacturers can bring a performance penalty. For example, if you mix a slow disk (e.g., a green/power-saving disk) and a fast disk (e.g., a performance disk) in the same virtual device (vdev), the overall speed will be limited by the slowest disk. Also, different hard drives may have different sector sizes. For example, Western Digital has released hard drives with 4k sectors, while older models use 512-byte sectors. Mixing hard drives with different sector sizes can bring a performance penalty too. Here is a quick way to check the sector size of your hard drive:

sudo smartctl -a /dev/sda | grep 'Sector Size'

Here is an example output:

Sector Sizes:     512 bytes logical, 4096 bytes physical

If you don’t have enough budget to replace all disks with the same specifications, try to group the disks with similar specifications in the same vdev.

Here is an example. Suppose I have a group of hard drives with 4k sectors (i.e., 4k = 4096 bytes = 2^12 bytes), which translates to ashift=12:

sudo zpool create myzpool -o ashift=12 raidz1 /dev/disk1 /dev/disk2 ...

You can also verify the ashift value using this command:

sudo zdb | grep shift
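
For a pool created with ashift=12, the output should contain a line like the following (the exact formatting depends on your zdb version):

            ashift: 12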

Let’s say we have a group of hard drives with 4k sectors, and we create two ZFS pools, one with ashift=12 and another with the default value (ashift=9). Here is the difference:

#Using default block size:
dd if=/myzpool/data/file.out of=/dev/null
40960000000 bytes (41 GB) copied, 163.046 s, 251 MB/s

#Using native block size:
dd if=/myzpool/data/file.out of=/dev/null
40960000000 bytes (41 GB) copied, 58.111 s, 705 MB/s

Improve ZFS Performance: Step 4

Use a Powerful Power Supply

I recently built a Linux-based ZFS file system with 12 hard disks. For some reason, it was pretty unstable. When I tried filling the pool with 12TB of data, the ZFS system crashed randomly. When I rebooted the machine, the error was gone. However, when I resumed the copy process, the error happened on a different disk. In short, the disks failed randomly. When I checked dmesg, I found something like the following (since I didn't keep an exact copy, I grabbed something similar from the web):

[  412.575724] ata10.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
[  412.576452] ata10.00: BMDMA stat 0x64
[  412.577201] ata10.00: failed command: WRITE DMA EXT
[  412.577897] ata10.00: cmd 35/00:08:97:19:e4/00:00:18:00:00/e0 tag 0 dma 4096 out
[  412.577901]          res 51/84:01:9e:19:e4/84:00:18:00:00/e0 Emask 0x10 (ATA bus error)
[  412.579294] ata10.00: status: { DRDY ERR }
[  412.579996] ata10.00: error: { ICRC ABRT }
[  412.580724] ata10: soft resetting link
[  412.844876] ata10.00: configured for UDMA/133
[  412.844899] ata10: EH complete
[  980.304788] ata10.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
[  980.305311] ata10.00: BMDMA stat 0x64
[  980.305817] ata10.00: failed command: WRITE DMA EXT
[  980.306351] ata10.00: cmd 35/00:08:c7:00:ce/00:00:18:00:00/e0 tag 0 dma 4096 out
[  980.306354]          res 51/84:01:ce:00:ce/84:00:18:00:00/e0 Emask 0x10 (ATA bus error)
[  980.307425] ata10.00: status: { DRDY ERR }
[  980.307948] ata10.00: error: { ICRC ABRT }
[  980.308529] ata10: soft resetting link
[  980.572523] ata10.00: configured for UDMA/133

Basically, this message means the disk failed while writing data. Initially I thought it could be SMART/bad-sector related. However, since the problem happened randomly on random disks, I suspected it was something else. I tried replacing the SATA cables, power cables, etc. None of that worked. Finally I upgraded my power supply (450W to 600W), and the error was gone.

FYI, here are the specs of the affected system. Notice that I didn't use any components that require high power, such as a graphics card.

  • CPU: Intel Q6600
  • Standard motherboard
  • PCI RAID Controller Card with 4 SATA ports
  • WD Green Drive x 12
  • CPU fan x 1
  • 12″ case fan x 3

And yes, you will need a 600W power supply even for such a simple system. Also, another thing worth checking is the power cable. Sometimes, using a native 15-pin SATA power cable is better than a 4-pin to 15-pin converter (IDE to SATA power converter).

Improve ZFS Performance: Step 5

Compression

ZFS supports compressing data on the fly. This is a nice feature that improves I/O speed, but only if you have a fast CPU (such as a quad core or better). If your CPU is not fast enough, I don't recommend turning on the compression feature, because the benefit from reducing the file size is smaller than the time spent on the CPU computation. The compression algorithm also plays an important role here. ZFS supports several algorithms, such as LZJB, LZ4 and GZIP. I personally use lz4 (see LZ4 vs LZJB for more information) because it gives a better balance between compression ratio and performance. You can also use GZIP and specify your own compression level (i.e., GZIP-N). FYI, I tried GZIP-9 (the maximum compression level available) and found that the overall performance got worse, even on my i7 with 12GB of memory.

There is no solid answer here because it all depends on what kind of files you store. Different files, such as large files, small files, and already-compressed files (such as Xvid movies), need different compression settings.

If you cannot decide, just go with lz4. You can't go wrong with it:


#Try to use lz4 first.
sudo zfs set compression=lz4 mypool

#If your system does not support lz4, try lzjb instead
sudo zfs set compression=lzjb mypool
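
Once compression has been running for a while, you can check how much space it actually saves. This is a hedged example; compressratio is a standard per-dataset property, and mypool is just the pool name used above:

#Show the compression setting and the achieved compression ratio
sudo zfs get compression,compressratio mypool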

Improve ZFS Performance: Step 6

Identify the bottleneck

Sometimes, the limit of ZFS I/O speed is closely related to the hardware. For example, I set up a network file system (NFS) that is based on ZFS. Most of the time, I mainly transfer data between servers rather than within the same server. Therefore, the maximum I/O speed I can get is the capacity of my network card, which is 125MB/s. On average, I can reach 100-110 MB/s, which is pretty good for a consumer-grade network adapter.

One day, I decided to explore network bonding, which combines multiple network adapters on the same server. In theory, it multiplies the bandwidth, which will increase the ZFS I/O speed over the network.

I set up network bonding on a machine with two network cards. It was a CentOS 7 box with bonding mode: 6 (adaptive load balancing). I was able to double the I/O speed in both network and ZFS.

sudo cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: adaptive load balancing
Primary Slave: None
Currently Active Slave: em2
MII Status: up
MII Polling Interval (ms): 1
Up Delay (ms): 0
Down Delay (ms): 0

Slave Interface: em1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 20:27:47:91:a3:a8
Slave queue ID: 0

Slave Interface: em2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: 20:27:47:91:a3:aa
Slave queue ID: 0
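
For reference, here is a minimal sketch of what the bonding configuration might look like on CentOS 7. The interface names (em1/em2), the IP address, and the miimon value are placeholders; adjust them to match your own network:

#/etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=6 miimon=100"
BOOTPROTO=none
IPADDR=192.168.1.10
PREFIX=24
ONBOOT=yes

#/etc/sysconfig/network-scripts/ifcfg-em1 (repeat for em2)
DEVICE=em1
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes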

Improve ZFS Performance: Step 7

Keep your ZFS up to date

By default, ZFS will not upgrade the file system itself even if a newer version is available on the system. For example, I created a ZFS file system on FreeBSD 8.1 with ZFS version 14. After upgrading to FreeBSD 8.2 (which supports ZFS version 15), my ZFS file system was still on version 14. I needed to upgrade it manually using the following commands:

sudo zfs upgrade my_pool
sudo zpool upgrade my_pool
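
If you want to see which ZFS versions or features are available before committing to an upgrade, both tools have a -v flag:

#List the supported ZFS file system versions / features
zfs upgrade -v

#List the supported pool versions / features
zpool upgrade -v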

Improve ZFS Performance: Step 8

Understand How The ZFS Caching Works

ZFS has two types of cache: ARC and L2ARC. ARC is a RAM-based cache, and L2ARC is a disk-based cache. If you want a super-fast ZFS system, you will need A LOT OF memory. How much? I have a RHEL 7 based data center running NFS on top of a ZFS file system. Its storage capacity is 48TB (8 x 8TB, running RAIDZ1), and it has 96GB of memory. The I/O is about 1.5TB a day. Is that too much memory? I can tell you that it sometimes still causes kernel problems because of running out of memory.

In general, you will need lots of memory for the ARC cache (RAM-based), while the L2ARC (disk-based) is optional. Depending on which operating system you are using (in my case, RHEL 7 and FreeBSD), the settings can be very different. Let's talk about the ARC cache first.

By default, when you read a file from ZFS for the first time, ZFS will read it from the disk. If the file is frequently used, ZFS will keep it in the ARC cache, which lives in your memory. Now we have an interesting question: how much memory should you allow ZFS to use for ARC caching? You don't want to use too little, because that will increase disk access and slow down the system. On the other hand, you don't want to use too much, because you want to reserve memory for your operating system and other services. For FreeBSD, depending on which version you are using, it will either use 90% of the memory for ARC, or all but 1GB of memory. For ZFS on Linux with RHEL 7, it will use about 50% of the memory.

You can monitor the ARC usage as follows:

#FreeBSD
zfs-stats -A

------------------------------------------------------------------------
ZFS Subsystem Report                            Sun Feb 12 22:34:19 2017
------------------------------------------------------------------------

ARC Summary: (HEALTHY)
        Memory Throttle Count:                  0

ARC Misc:
        Deleted:                                247.52m
        Recycle Misses:                         0
        Mutex Misses:                           68.86k
        Evict Skips:                            778.88k

ARC Size:                               75.04%  16.76   GiB
        Target Size: (Adaptive)         74.94%  16.74   GiB
        Min Size (Hard Limit):          12.50%  2.79    GiB
        Max Size (High Water):          8:1     22.33   GiB

ARC Size Breakdown:
        Recently Used Cache Size:       73.41%  12.30   GiB
        Frequently Used Cache Size:     26.59%  4.46    GiB

ARC Hash Breakdown:
        Elements Max:                           1.40m
        Elements Current:               68.50%  956.10k
        Collisions:                             50.63m
        Chain Max:                              7
        Chains:                                 94.61k

------------------------------------------------------------------------



#ZFS on Linux
arcstat.py
    time  read  miss  miss%  dmis  dm%  pmis  pm%  mmis  mm%  arcsz     c
22:33:36     0     0      0     0    0     0    0     0    0    53G   53G
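
On ZFS on Linux, you can also read the raw ARC counters directly from the kernel. A hedged example (the values are reported in bytes):

#Current ARC size and its lower/upper limits
grep -E '^(size|c_min|c_max)' /proc/spl/kstat/zfs/arcstats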


If you are using FreeBSD (the most advanced operating system in the world), you don't need to tweak your system settings at all. ZFS is part of the FreeBSD kernel, and FreeBSD has excellent memory management, so you do not need to worry about ZFS using too much of your system's memory. In fact, I've never managed to crash a FreeBSD system by using too much memory. If you know how to do it, please show me how.

If you want to tweak the ARC size, you can do it via /boot/loader.conf:

#Min: 10GB
vfs.zfs.arc_min="10000M"

#Max: 16GB
vfs.zfs.arc_max="16000M"
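
On FreeBSD you can verify what the kernel is actually using at runtime with sysctl (a hedged example; these sysctls are available on a system with ZFS loaded):

#Check the current ARC limits
sysctl vfs.zfs.arc_min vfs.zfs.arc_max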

If you are using Linux, you may want to do some extra work to make your system stable. The default settings will only get you to the bottom of the mountain, and you will need to do some climbing to reach the peak.

First, you want to check the arc size:

arcstat.py

Now you will need to think about how much memory you want to reserve for ZFS. This is a tough question, and you can't really expect Linux to behave like FreeBSD. Remember, Linux does not have a memory management system as good as FreeBSD's, and it is very easy to crash the system if you do something wrong. That is one of the reasons why ZFS on Linux uses only 50% of the memory by default: the developers know that Linux is like a car that may lose control if it is driven slightly above the highway speed limit.

sudo nano /etc/modprobe.d/zfs.conf

#Min: 4GB
options zfs zfs_arc_min=4000000000

#Max: 8GB
options zfs zfs_arc_max=8000000000

Don’t forget to reboot the server afterwards.
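
If you cannot reboot right away, the ARC limit can usually also be adjusted at runtime through the module parameters. This is a hedged example; the value is in bytes, and the zfs.conf entry above is still what makes the change permanent:

#Apply the new maximum ARC size immediately (8GB in this example)
echo 8000000000 | sudo tee /sys/module/zfs/parameters/zfs_arc_max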

FYI, tweaking the memory usage of the ZFS ARC is more than just entering some numbers. You need to think of the whole picture. Memory is a very valuable resource, and you will need to decide how to use it effectively and efficiently. You can't expect a single server to run a web server, database, NFS, and virtual machine host all together on top of a ZFS system and still give you very high performance. You need to divide the load across multiple servers. For example, here is what I use at work for production:

Server 1: A NFS host
– Running NFS only
– Have network bonding based on multiple network interfaces. This will multiply the bandwidth.
– Supports other servers such as web servers, databases, virtual machine hosts, etc.

Server 2: A virtual machine host
– Running Virtual Box
– Virtual machines with higher I/O requirements are hosted on a local SSD drive with standard partitions (non-ZFS).
– Virtual machines with lower I/O requirements are hosted on a different ZFS server, connected via NFS.
– All of the system memory will be used for running virtual machines.

Server 3: A web server
– Running Apache + PHP + MySQL
– The PHP code is hosted locally.
– The large static files are hosted on the NFS server. In my case, I have over 20TB of static content.
– All of the system memory will be used for running Apache / PHP / MySQL executions.

Of course, I am not saying that ZFS does not work well with Apache / PHP / MySQL. I am just saying that in some extreme environments, the services should be spread across multiple machines instead of a single one.

Now, if you have extra resources (budget and a spare SATA port), you may consider using an L2ARC cache. Basically, the L2ARC cache is a very fast drive (i.e., an SSD). Think of it like the fast part of a hybrid hard drive: a buffer for reading (cache device) and, with a separate log device, for writing.

So what is the role of the L2ARC relative to the ARC? In general, the most frequently used data is stored in the ARC cache (RAM). Less frequently used data, or data that is too large to fit in the ARC cache, is stored in the L2ARC cache.

To improve the reading performance:

sudo zpool add myzpool cache 'ssd device name'

To improve the writing performance:

sudo zpool add myzpool log /dev/ssd_drive

It was impossible to remove log devices without losing data until ZFS v19 (FreeBSD 8.3+/9.0+). I highly recommend adding the log drives as a mirror, i.e.,

sudo zpool add myzpool log mirror /dev/log_drive1 /dev/log_drive2
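
After adding cache or log devices, you can watch how they are being used; they show up as separate "cache" and "logs" sections in the per-device statistics:

#Per-device I/O statistics, including the cache and log devices
zpool iostat -v myzpool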

Now you may ask a question: how about using a RAM disk as a log / cache device? First, ZFS already uses your system memory for I/O, so you don't need to set up a dedicated RAM disk yourself. Also, using a RAM disk as a log (write) device is a bad idea: when something goes wrong, such as a power failure, you will lose the data that was being written.

Improve ZFS Performance: Step 9

Two drives are better than one single drive

Did you know that ZFS works faster on a pool with multiple devices than on a single-device pool, even if both have the same storage capacity?
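
For example, given the same total capacity, a two-device pool will usually outperform a single-device pool because reads and writes are spread across both disks. A minimal sketch (pool and device names are placeholders):

#One single 2TB disk
zpool create singlepool /dev/disk1

#Two 1TB disks striped together: same capacity, but the I/O is spread across two disks
zpool create stripedpool /dev/disk2 /dev/disk3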

Improve ZFS Performance: Step 10

Use a combination of Striped and RAIDZ if speed is your first concern.

A striped design gives the best performance, but since it offers no data protection at all, you may want to use RAIDZ (RAIDZ1, RAIDZ2, RAIDZ3) or mirror to handle data protection. There are many choices, and each of them offers a different balance of performance and protection. If you want a quick answer, use a combination of striping and RAIDZ. I posted a very detailed comparison of Mirror, RAIDZ, RAIDZ2, RAIDZ3 and Striped here.

Here is an example of striped with RAIDZ:

#Command
zpool create -f myzpool raidz hd1 hd2 hd3 hd4 hd5 \
                        raidz hd6 hd7 hd8 hd9 hd10

#zpool status -v
        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            hd1     ONLINE       0     0     0
            hd2     ONLINE       0     0     0
            hd3     ONLINE       0     0     0
            hd4     ONLINE       0     0     0
            hd5     ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            hd6     ONLINE       0     0     0
            hd7     ONLINE       0     0     0
            hd8     ONLINE       0     0     0
            hd9     ONLINE       0     0     0
            hd10    ONLINE       0     0     0

Improve ZFS Performance: Step 11

Distribute your free space evenly

One of the important tricks to improve ZFS performance is to keep the free space evenly distributed across all devices.

You can check it using the following command:

zpool iostat -v

The free space is shown in the second column (available capacity):

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     3.23T  1.41T      0      3  49.1K   439K
  ad4        647G   281G      0      0  5.79K  49.2K
  ad8        647G   281G      0      0  5.79K  49.6K
  ad10       647G   281G      0      0  5.82K  49.6K
  ad16       647G   281G      0      0  5.82K  49.6K
  ad18       647G   281G      0      0  5.77K  49.5K

When ZFS writes a new file to replace an old file, it first writes the file into free space, then moves the file pointer from the old copy to the new one. That way, even if there is a power failure during the write, no data is lost, because the file pointer still points to the old file. That's why ZFS does not need fsck (file system check).

In order to keep the performance at a good level, we need to make sure that free space is available on every device in the pool. Otherwise, ZFS can only write the data to some of the devices (instead of all of them). In other words, the more devices ZFS can write to, the better the performance.

Technically, if the structure of a zpool has not been modified or altered, you should not need to worry about the free space distribution, because ZFS will take care of that for you automatically. However, when you add a new device to an existing zpool, that is a different story, e.g.,

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     3.88T  2.33T      0      3  49.1K   439K
  ad4        647G   281G      0      0  5.79K  49.2K
  ad8        647G   281G      0      0  5.79K  49.6K
  ad10       647G   281G      0      0  5.82K  49.6K
  ad16       647G   281G      0      0  5.82K  49.6K
  ad18       647G   281G      0      0  5.77K  49.5K
  ad20          0   928G      0      0  5.77K  49.5K

In this example, I added a 1TB hard drive (ad20) to my existing pool, which gives about 928GB of free space. Let's say I add a 6GB file; the free space will look something like this:

               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     4.48T  1.73T      0      3  49.1K   439K
  ad4        648G   280G      0      0  5.79K  49.2K
  ad8        648G   280G      0      0  5.79K  49.6K
  ad10       648G   280G      0      0  5.82K  49.6K
  ad16       648G   280G      0      0  5.82K  49.6K
  ad18       648G   280G      0      0  5.77K  49.5K
  ad20         1G   927G      0      0  5.77K  49.5K

In other words, ZFS will still divide my 6GB file into six roughly equal pieces and write one piece to each device. Eventually, ZFS will use up the free space on the older devices, and it will only be able to write data to the new device (ad20), which will decrease the performance. Unfortunately, there is no way to redistribute the data / free space evenly without destroying the pool, i.e.,

1. Back up your data
2. Destroy the pool
3. Rebuild the pool
4. Put your data back

Depending on how much data you have, it can take 2 to 3 days to copy 10TB of data from one server to another over a gigabit network. You don't want to use scp to do it, because you will need to redo everything if the process is dropped. In my case, I use rsync:

(One single line)

#Run this command on the production server:
rsync -avzr --delete-before backup_server:/path_to_zpool_in_backup_server /path_to_zpool_in_production_server

Of course, netcat is faster if you don't care about security (scp / rsync encrypt the data during transfer).
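
If you do go the netcat route on a trusted network, a minimal sketch looks like this (the port number, host name and paths are placeholders, and some netcat versions want "nc -l -p 7000" instead):

#On the receiving (production) server: listen on a port and unpack the stream
nc -l 7000 | tar -xf - -C /path_to_zpool_in_production_server

#On the sending (backup) server: pack the data and push it over the network
tar -cf - /path_to_zpool_in_backup_server | nc production_server 7000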

See here for further information

Improve ZFS Performance: Step 12

Make your pool expandable

Setting up a ZFS system is more than a one-time job. Unless you watch your storage the way supermodels watch their body weight, you will end up using all of the available space one day. Therefore it is a good idea to come up with a design that can grow in the future.

Suppose we want to build a server with maximum storage capacity. How will we start? Typically we try to put as many hard drives in a single machine as possible, i.e., around 12 to 14 hard drives, which is what a typical consumer-grade full tower computer case can hold. Let's say we have 12 disks; here are a couple of setups which maximize storage capacity with a decent level of data safety:

In the first design, we create one giant RAIDZ2 vdev with all 12 disks and let ZFS take care of the rest. This pool offers n-2 storage capacity and allows up to 2 hard drives to fail without losing any data.

The second design (two RAIDZ1 vdevs of 6 disks each) offers the same amount of storage capacity and a similar level of data protection: it allows up to one failed disk in each vdev. Keep in mind that the first design offers stronger data protection, but the second design offers better performance and greater flexibility in terms of future upgrades. Check out this article if you want to learn more about the differences in ZFS design.

First, let’s talk about the good and bad of the first design. It offers great data security because it allows ANY two disks in the zpool to fail. However, it has a couple of disadvantages. ZFS works great when the number of disks per vdev is small; ideally, it should be smaller than 8 (personally, I stick with 5). In the first design, we put 12 disks in one single vdev, which becomes problematic when the storage is getting full (>90%). Also, when it comes to upgrading the entire zpool, we need to replace each disk one by one, and we won't be able to use the extra space until we replace all 12 disks. This may be an issue for those who do not have the budget to buy 12 new disks at a time.

The second design does not have the problems mentioned above. The number of disks in each vdev is small (6 disks per vdev), and for those who don't have a big budget, it is okay to buy six disks at a time to expand the pool.

Here is how to create the second design:

sudo zpool create myzpoolname raidz /dev/ada1 /dev/ada2 ... /dev/ada6 raidz /dev/ada7 /dev/ada8 /dev/ada9 ... /dev/ada12

Here is how to expand the pool by replacing the hard drive one by one without losing any data:

1. Shut down the computer, replace the hard drive and turn the computer back on.

2. Tell ZFS to replace the hard drive. This will force it to rebuild the new hard drive with the existing data based on the checksums.
zpool replace mypool /dev/ada1

3. Wait for the resilvering to finish. You can also scrub the pool to verify the data:
zpool scrub mypool

4. Shut down the server and replace the next hard drive. Repeat these steps until all disks are replaced.

5. Turn on automatic expansion:
zpool set autoexpand=on mypool

6. Scrub the pool again if needed:
zpool scrub mypool

Improve ZFS Performance: Step 13

Backup your data on a different machine, not on the same pool

ZFS comes with a very cool feature: it allows you to save multiple copies of the same data in the same pool. This adds an additional layer of data security. However, I don't recommend using this feature for backup purposes, because it adds more work when writing data to the disks, and I don't think it is a good way to secure the data. I prefer to set up a mirror on a different server (master-slave). Since the chance of two machines failing at the same time is much smaller than the chance of one machine failing, the data is safer with this setup.
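
For reference, the multiple-copies feature mentioned above is the copies property. A hedged example of what turning it on would look like (again, I don't recommend this as a backup strategy):

#Keep two copies of every block within the same pool
sudo zfs set copies=2 mypool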

Here is how I synchronize two machines together:

(Check out this guide on how to use rsyncd)

Create a script in the slave machine: getContentFromMaster.sh
(One single line)

rsync -avzr -e ssh --delete-before master:/path/to/zpool/in/master/machine /path/to/zpool/in/slave/machine

And put this file in a cronjob, i.e.,
/etc/crontab

@daily root /path/to/getContentFromMaster.sh

Now, you may ask a question: should I go with stripe-only ZFS (i.e., striping only; no mirror, RAIDZ, or RAIDZ2) when I set up my pool? Yes and no. ZFS allows you to mix hard drives of any size in one single pool. Unlike RAID[0,1,5,10] and concatenation, the drives can be any size and there is no loss of disk space, i.e., you can connect 1TB, 2TB and 3TB drives into one single pool while enjoying the data striping (total usable space = 6TB). It is fast (because there is no overhead such as parity) and simple. The only downside is that the entire pool will stop working if even one device fails.

Let’s come back to the question: should we employ simple striping in a production environment? I prefer not to. Stripe-only ZFS divides all data across all vdevs. If each vdev is simply a single hard drive and one of them fails, there is NO WAY to get the original data back. If something screws up on the master machine, the only way out is to destroy and rebuild the pool, and restore the data from the backup. (This process can take hours to days if you have a large amount of data, say 6TB.) Therefore, I strongly recommend using at least RAIDZ in a production environment. If one device fails, the pool keeps working and no data is lost. Simply replace the bad hard drive with a good one and everything is good to go.

To minimize the downtime when something goes wrong, go with at least RAIDZ in a production environment (ideally RAIDZ or striped mirrors).

For the backup machine, I think simple striping is completely fine.

Here is how to build a pool with simple striping, i.e., no parity, mirror or anything:

zpool create mypool /dev/dev1 /dev/dev2 /dev/dev3

And here is how to monitor the health

zpool status

Some websites suggest to use the following command instead:

zpool status -x

Don’t believe it! This command will return "all pools are healthy" even if one device has failed in a RAIDZ pool. In other words, "your data is healthy" does not mean all devices in your pool are healthy. So go with "zpool status" at all times.

FYI, it can easily take a few days to copy 10TB of data from one machine to another through a gigabit network. In case you need to restore a large amount of data through the network, use rsync, not scp. I found that scp sometimes fails in the middle of a transfer, while rsync allows me to resume it at any time.

Improve ZFS Performance: Step 14

rsync or ZFS send?

So what’s the main difference between rsync and ZFS send? What’s the advantage of one over the other?

Rsync is a file-level synchronization tool. It simply goes through the source, finds out which files have been changed, and copies the corresponding files to the destination. Rsync is also portable/cross-platform: unlike ZFS, rsync is available on most Unix platforms. If your backup platform does not support ZFS, you may want to go with rsync.

ZFS send does something similar. First, it takes a snapshot of the ZFS pool:

zfs snapshot mypool/vdev@20120417

After that, you can generate a file that contains the pool and data information, copy it to the new server, and restore it there:

#Method 1: Generate a file first
zfs send mypool/vdev@20120417 > myZFSfile
scp myZFSfile backupServer:~/
zfs receive mypool/vdev@20120417 < ~/myZFSfile

Or you can do everything in one single command line:

#Method 2: Do everything over the pipe (One command)
zfs send pool/vdev@20120417 | ssh backupServer zfs receive pool/vdev@20120417
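
If you do use ZFS send regularly, incremental snapshots can reduce how much has to be re-sent on each run. This is a hedged sketch; the snapshot names are placeholders, and the destination must already hold the earlier snapshot:

#Send only the changes between two snapshots
zfs snapshot mypool/vdev@20120418
zfs send -i mypool/vdev@20120417 mypool/vdev@20120418 | ssh backupServer zfs receive mypool/vdev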

In general, the preparation time of ZFS send is much shorter than rsync's, because ZFS already knows which files have been modified. Unlike rsync, a file-level tool, ZFS send does not need to go through the entire pool to find out such information. In terms of transfer speed, both of them are similar.

So why do I prefer rsync over ZFS send (both methods)? Because the latter is not practical! In method #1, the obvious issue is the storage space, since it requires generating a file that contains your entire pool. For example, suppose your pool is 10TB and you have 8TB of data (i.e., 2TB of free space); if you go with method #1, you will need another 8TB of free space to store the file. In other words, you will need to make sure that at least 50% of the pool is free all the time. This is a quite expensive way to run ZFS.

What about method #2? It does not have the storage problem, because it copies everything over the pipe. However, what if the process is interrupted? That is a common thing due to high traffic on the network, high I/O on the disks, etc. In the worst case, you will need to redo everything, say, copying 8TB over the network, again.

rsync does not have these two problems. It uses relatively little space for temporary storage, and in case the rsync process is interrupted, you can easily resume it without copying everything again.

Improve ZFS Performance: Step 15

Disable dedup if you don't have enough memory (5GB of memory per 1TB of storage)

Deduplication (dedup) is a space-saving technology. It works at the block level (a file can consist of many blocks). To explain it in simple English: if you have multiple copies of the same file in different places, ZFS will store only one copy instead of several. Notice that dedup is not the same as compression. Check out this article: ZFS: Compression VS Deduplication (Dedup) in Simple English if you want to learn more.

The idea of dedup is very simple. ZFS maintains an index of your data. Before writing an incoming file to the pool, it checks whether the storage already has a copy of it; if it does, ZFS skips writing it again. With dedup enabled, instead of storing 10 identical files, it stores only one copy. Unfortunately, the drawback is that every incoming write has to be checked against the index before any decision can be made.

After upgrading my ZFS pool to version 28, I enabled dedup for testing, and I found that it caused a huge performance hit. The write speed over the network dropped from 80MB/s to 5MB/s!!! After disabling this feature, the speed went back up.

sudo zfs set dedup=off your-zpool

In general, dedup is an expensive feature that requires a lot of hardware resources. You will need about 5GB of memory per 1TB of storage (source). For example, if the zpool is 10TB, I would need 50GB of memory (while I only have 12GB). Therefore, think twice before enabling dedup!
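
If you are not sure whether dedup would pay off for your data, you can check the current ratio and even simulate dedup before enabling it. A hedged example (zdb -S can take a long time on a large pool):

#The DEDUP column shows the current deduplication ratio
zpool list your-zpool

#Simulate deduplication on the existing data and estimate the table size
sudo zdb -S your-zpool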

Notice that disabling dedup won't solve all the performance problems. For example, if you enabled dedup at some point and disabled it later, all data stored during that period is still dedup-dependent, even after dedup is disabled. When you need to update these files (e.g., delete them), the system still needs to check the dedup index before processing them. Therefore, the performance issue still exists when working with these affected files; new files are fine. Unfortunately, there is no easy way to find out which files are affected. The only way to clear the dedup table is to destroy and rebuild the ZFS pool.

Improve ZFS Performance: Step 16

Reinstall Your Old System

Sometimes, reinstalling your old system from scratch may help to improve performance. Recently, I decided to reinstall my FreeBSD box. It was an old box that started its life on FreeBSD 6 (released in 2005, about 8 years ago as of this writing). Although I upgraded the system at every release, it had accumulated a lot of junk and unused files, so I decided to reinstall the system from scratch. After the installation, I can tell that the system is more responsive and stable.

Before you wipe out the system, you can export the ZFS pool using the following command:

sudo zpool export mypool

After the reinstallation is done, you can import the pool back:

sudo zpool import mypool

Improve ZFS Performance: Step 17

Connect your disks via high speed interface

Recently, I found that my overall ZFS system was slow no matter what I did. After some investigation, I noticed that the bottleneck was my RAID card. Here are my suggestions:

1. Connect your disks to the ports with the highest speed. For example, my PCI-e RAID card delivers higher speed than my PCI RAID card. One way to verify the speed is by using dmesg, e.g.,

dmesg | grep MB

#Connected via PCI card. Speed is 1.5Gb/s
ad4: 953869MB  at ata2-master UDMA100 SATA 1.5Gb/s

#Connected via PCI-e card. Speed is 3.0 Gb/s
ad12: 953869MB  at ata6-master UDMA100 SATA 3Gb/s

In this case, the overall speed limit is set by the slowest link (1.5Gb/s), even though the rest of my disks are connected at 3Gb/s (see the smartctl example after this list).

2. Some RAID cards come with advanced features such as RAID, linear RAID, compression, etc. Make sure that you disable these features first: you want to minimize the workload of the card and maximize the I/O speed, and enabling these extras will only slow down the overall process. You can disable the settings in the BIOS of the card. FYI, most of the RAID cards in the $100 range are "software RAID", i.e., they use the system CPU to do the work. Personally, I think these fancy features are designed for Windows users; you really don't need any of them in the Unix world.

3. Personally, I recommend any brand except Highpoint RocketRAID because of driver issues. Some of the Highpoint RocketRAID products are not supported by FreeBSD natively, so you need to download the driver from their website first. Their drivers are version-specific, e.g., they have two different sets of drivers for FreeBSD 7 and 8, and they are not compatible with each other. One day, if they decide to stop supporting the device, you will either need to stick with the old FreeBSD or buy a new card. My conclusion: stay away from Highpoint RocketRAID.
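
To double-check the negotiated link speed of each disk mentioned in point 1, smartctl can help. A hedged example; the device name is a placeholder and the exact wording of the output varies between drives:

#Shows both the maximum supported and the currently negotiated SATA speed
sudo smartctl -a /dev/sda | grep 'SATA Version'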

Improve ZFS Performance: Step 18

Do not use up all of the space

Depending on the settings / history of your zpool, you may want to maintain the free space at a certain level to avoid speed-drop issues.

Recently, I found that my ZFS system was very slow in terms of reading and writing: the speed dropped from 60MB/s to 5MB/s over the network. After some investigation, I found that the available space was around 300GB (out of 10TB), which is about 3% left. Some people suggest that the safe threshold is about 10%, i.e., the performance won't be impacted as long as you keep at least 10% of the space free. I would say 5% is the bottom line, because I didn't notice any performance issues until it hit 3%.
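
A quick way to keep an eye on how full each pool is getting is the CAP column of zpool list:

#The CAP column shows the percentage of the pool that is in use
zpool list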

After I freed up some space, the speed came back.

It doesn't make any sense to me not to be able to use all of my space, so I decided to find out what caused this problem. The answer is the zpool structure.

In my old setup, I had a single RAIDZ vdev with 8 disks. This gives me basic data security (up to one disk can fail) and maximum disk space (the usable space is 7 disks). However, I noticed that the speed dropped a lot when the available free space was around 5%.

In my experimental setup, I decided to do the same thing with RAIDZ2, i.e., it allows up to two disks to fail, and the usable space goes down to 6 disks. After filling up the pool, I found that it does not have the speed-drop problem. The I/O speed is still fast even when the free space is down to 10GB (that's 0.09%).

My conclusion: RAIDZ is okay for up to 6 devices. If you want to add more devices, either use RAIDZ2 or split them into multiple vdevs:

#Suppose I have 8 disks (/dev/hd1 ... /dev/hd8).

#One vdev
zpool create myzpool raidz2 /dev/hd1 /dev/hd2 ... /dev/hd8

#Two vdevs
zpool create myzpool raidz /dev/hd1 ... /dev/hd4 raidz /dev/hd5 ... /dev/hd8

Improve ZFS Performance: Step 19

Use AHCI, Not IDE

Typically there is a BIOS setting that controls how the motherboard interacts with the hard drives: IDE or AHCI. If your motherboard has IDE ports (or was manufactured before 2009), it is likely that the default value is set to IDE. Try changing it to AHCI. Believe me, this little tweak can save you countless hours of debugging.

FYI, here is my whole story.

Improve ZFS Performance: Step 20

Refresh your pool

I had my zpool set up for five years. Over those five years, I performed lots of upgrades and changed a lot of settings. For example, during the initial setup, I didn't enable compression. Later, I set the compression to lzjb and then changed it to lz4. I also enabled and later disabled dedup. So you can imagine that some of the data is compressed using lzjb and some of it has dedup enabled. In short, the data in my zpool has all kinds of different settings. That's dirty.

The only way to clean it up is to destroy the entire zpool and rebuild the whole thing. Depending on the size of your data, it can take 2-3 days to transfer 10TB of data from one server to another, i.e., a 4-6 day round trip. However, you will see the performance gain in the long run.

I recommend cleaning the pool every 3 years.

Improve ZFS Performance: Step 21

Great performance settings

The following settings will greatly improve the performance of your ZFS pool. However, each of them comes with a price tag. It's like removing the air bag from your car: yes, it saves a few pounds here and there, but when something goes wrong, it could be a nightmare. Do it at your own risk.

Disable the sync option

sudo zfs set sync=disabled mypool

From the man page:

"File system transactions are only committed to stable storage periodically. This option will give the highest performance. However, it is very dangerous as ZFS would be ignoring the synchronous transaction demands of applications such as databases or NFS. Administrators should only use this option when the risks are understood."

Disable the checksum

sudo zfs set checksum=off mypool

Disabling the checksum will save some CPU time. It won't be a lot for each operation, but if you write a lot of data to the pool, saving a little bit on every write operation adds up.

Disable the access time

sudo zfs set atime=off mypool

From the man page:

"Turning this property off avoids producing write traffic when reading files and can result in significant performance gains, though it might confuse mailers and other similar utilities."

Customize the record size

Instead of creating one single filesystem to store everything, I recommend creating multiple filesystems for different purposes. For example, I usually create a separate filesystem for MySQL databases with the MyISAM engine, i.e.,

sudo zfs create -o recordsize=8k mypool/mysql_myisam

Of course, the drawback is that it uses more space compared to the default settings.

Redundant Metadata

Setting redundant_metadata to most will improve the performance of random writes.

sudo zfs set redundant_metadata=most mypool

Enable the system attribute based xattrs

Storing xattrs as system attributes significantly decreases the amount of disk I/O.

sudo zfs set xattr=sa mypool
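
After applying any of the settings above, you can review the current values in one shot. A hedged example; these are standard dataset properties, though xattr only applies on Linux:

#Review the properties discussed in this step
sudo zfs get sync,checksum,atime,recordsize,redundant_metadata,xattr mypool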

Improve ZFS Performance: Step 22

My Settings - Simple and Clean

I have set up over 50 servers based on ZFS. They all serve different purposes. In general, they all share the same ZFS settings, and they can reach their hardware limits, e.g., the ARC cache uses about 90% of the system memory, the network transfer speed maxes out the network card, etc.

sudo zpool history

#I like to use raidz, and each raidz vdev contains no more than 5 disks.
zpool create -f storage raidz /dev/hd1 /dev/hd2 ... raidz /dev/hd6 /dev/hd7 ... raidz /dev/hd11 /dev/hd12

#A partition for general purposes
zfs create storage/data

#A partition for general web
zfs create storage/web

#A partition for MySQL / MYISAM tables
zfs create -o recordsize=8k storage/mysql

#Some common settings
zfs set compression=lz4 storage
zfs set atime=off storage
zfs set redundant_metadata=most storage

#For Linux server
zfs set xattr=sa storage

A simple nload when running rsync over a gigabit LAN:

(Bonding mode: 6, based on two gigabit network cards)
#nload -u M

Outgoing:
Curr: 209.48 MByte/s
Avg: 207.50 MByte/s
Min: 198.66 MByte/s
Max: 209.53 MByte/s
Ttl: 3755.46 GByte

I only use two operating systems:

#Personal
FreeBSD 10.3-RELEASE-p11 FreeBSD 10.3-RELEASE-p11 #0: Mon Oct 24 18:49:24 UTC 2016     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64


#Work
CentOS Linux release 7.3.1611 (Core)
Linux 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

I use only consumer-grade hard drives. Typically I buy external drives and remove the disks from the enclosures, because that is cheaper. Sometimes, when I need to use all of the SATA ports for the pool, I will put the operating system on a PATA drive or a USB flash drive. As long as all of the frequently used files are in the ZFS pool, performance is not a problem. Another thing you can do is install a card-sized SSD on the mSATA port and connect a regular hard drive to the eSATA port on the back of the motherboard using a long eSATA-SATA cable. That helps to maximize the number of hard drives you can have. Most computer cases will fit 10-12 3.5" hard drives, and some of them can go up to 15 for under USD 100. The most capacious ones will fit 18 hard drives with a price tag of USD 200.

The amount of memory is trickier, as it depends on what applications you want to run on your server, how much memory those services consume, and how large your ZFS pool is. Here are some of the servers I have set up:

#A light weight web (Apache+MySQL+PHP), and file server (Samba)
CPU: Intel(R) Xeon(R) CPU E5-2440 v2 @ 1.90GHz (A mid-level server grade CPU from 2014)
Memory: 4GB
ZFS Pool Capacity: 1.6TB
OS: CentOS 7
Kernel: 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux


#A server for nightly backup
CPU: Intel(R) Core(TM) i7 CPU K 875  @ 2.93GHz (A gaming-grade CPU from 2010)
Memory: 8GB
ZFS Pool Capacity: 43TB
OS: CentOS 7
Kernel: 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux


#A server for analyzing genomic data, with a very high CPU usage
CPU: Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz (A gaming-grade CPU from 2015)
Memory: 64GB
ZFS Pool Capacity: 8.5TB
OS: CentOS 7
Kernel: 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux


#A heavy weight network file server
CPU: Intel(R) Xeon(R) CPU E5-2430 0 @ 2.20GHz (A mid-level server grade CPU from 2012)
Memory: 96GB
ZFS Pool Capacity: 53TB
OS: CentOS 7
Kernel: 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux


#A heavy weight web server
CPU: Intel(R) Xeon(R) CPU E3-1225 v3 @ 3.20GHz (An entry-level server grade CPU from 2013)
Memory: 16GB
ZFS Pool Capacity: 1.8TB
OS: CentOS 7
Kernel: 3.10.0-514.6.1.el7.x86_64 #1 SMP Wed Jan 18 13:06:36 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux


#A light weight web (Apache+MySQL+PHP), and file server (Samba)
CPU: Intel(R) Core(TM) i7 CPU 920  @ 2.67GHz  (A gaming-grade CPU from 2008)
Memory: 24GB
ZFS Pool Capacity: 25TB
OS: FreeBSD 10.3
Kernel: FreeBSD 10.3-RELEASE-p11 FreeBSD 10.3-RELEASE-p11 #0: Mon Oct 24 18:49:24 UTC 2016     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64


#A low-end backup server
CPU: Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz (An entry-level CPU from 2007)
Memory: 8GB
ZFS Pool Capacity: 15TB
OS: FreeBSD 10.3
Kernel: FreeBSD 10.3-RELEASE-p11 FreeBSD 10.3-RELEASE-p11 #0: Mon Oct 24 18:49:24 UTC 2016     root@amd64-builder.daemonology.net:/usr/obj/usr/src/sys/GENERIC  amd64

If you are interested in implementing network-based ZFS, please check here for details.

Enjoy ZFS.

--Derrick


Today I am going to share my story of how I expanded my ZFS storage. I have a giant ZFS pool, which consists of 8x2TB disks (RAIDZ2) and 4×1.5TB disks (RAIDZ1), with a total of 15TB of usable space. Since I am running out of space, I decided to upgrade the pool by replacing the disks one at a time. Here is the ZFS structure:

#zpool status
        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada6    ONLINE       0     0     0
            ada7    ONLINE       0     0     0
            ada8    ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            da0     ONLINE       0     0     0
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0


#df
storage/data     14T     13T    1.3T    91%    /storage/

What I decided to do was to replace the 1.5TB disks with 3TB disks (i.e., da[0-3]), one at a time. Basically, here are the steps you typically find on the web:

  1. Power down the server
  2. Swap one 1.5TB disk for a 3TB disk
  3. Power on the server
  4. Tell ZFS to replace the disk (zpool replace)
  5. Resilver the entire pool
  6. Hope the resilvering process does not return any errors
  7. Repeat the above steps until all disks are replaced
  8. Turn on the autoexpand option and enjoy the extra space

Here are the corresponding commands. After physically swapping the first disk, you should see the following:

#zpool status
        NAME        STATE     READ WRITE CKSUM
        storage     DEGRADED     0     0     0
          raidz2-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada6    ONLINE       0     0     0
            ada7    ONLINE       0     0     0
            ada8    ONLINE       0     0     0
          raidz1-1  DEGRADED     0     0     0
            da0     UNAVAIL      0     0     0  cannot open
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0

However, the pool will continue to work because it is a RAIDZ pool. So I swapped the 1.5TB disk with the 3TB disk and told ZFS to replace it:

#sudo zpool replace mypool da0
#zpool status
        NAME        STATE     READ WRITE CKSUM
        storage     DEGRADED     0     0     0
          raidz2-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada6    ONLINE       0     0     0
            ada7    ONLINE       0     0     0
            ada8    ONLINE       0     0     0
          raidz1-1  DEGRADED     0     0     0
            replacing-2  DEGRADED     0     0     0
              da0/old    FAULTED      0     0     0  corrupted data
              da0        ONLINE       0     0     0  (resilvering)
            da1     ONLINE       0     0     0
            da2     ONLINE       0     0     0
            da3     ONLINE       0     0     0

Depending on how much data you have on the old disk and on the hardware (such as the motherboard, SATA configuration, etc.), resilvering a 1.5TB drive (with 1.3TB of data) took me about 15 hours. Personally, I recommend starting this process in the morning; then you can check the progress and start the next one in the evening. That way you can speed up the work.

After the resilvering process is done, make sure that you check the error status. If some files are missing, you need to delete those files first and restore them from your backup. Here is an example of the error:

#sudo zpool status -v

errors: Permanent errors have been detected in the following files: 

/storage/data/aaa
/storage/data/bbb
/storage/data/ccc

Remember, you need to delete the file first, then put it back from your backup. If you simply replace/overwrite the file, it will not clear the error. Then check the status again. If the error is still not gone, you may want to scrub the zpool to trigger the resilvering process.

sudo zpool scrub mypool

Or you can try to clear the error message. It will trigger the resilver process automatically.

sudo zpool clear -f mypool

I know what you are going to ask: how can ZFS lose data even though I have RAIDZ and checksums enabled? I have no idea. That's why we need a backup on a different machine. Anyway, the resilvering process will take another 15 hours.

After the error is cleared, repeat the steps to replace the disks one by one. Make sure that the error is cleared after every replacement. For me, replacing four hard drives took exactly 5 days, or 120 hours in total. Yes, it is not a fun job.

After everything is completed with no errors or anything bad, you check the pool status and expect some magic to happen. Unfortunately, you will see the same amount of space available. Here is what you will need to do:

#I still have 1.3TB space left.
storage/data     14T     13T    1.3T    91%    /storage/
sudo zpool set autoexpand=on mypool 

Locate the disks you have replaced; in my case, they are da0, da1, da2 and da3:

#zpool status
        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            ada0    ONLINE       0     0     0
            ada1    ONLINE       0     0     0
            ada3    ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada6    ONLINE       0     0     0
            ada7    ONLINE       0     0     0
            ada8    ONLINE       0     0     0
          raidz1-1  ONLINE       0     0     0
            da0     ONLINE       0     0     0 --This
            da1     ONLINE       0     0     0 --This
            da2     ONLINE       0     0     0 --This
            da3     ONLINE       0     0     0 --And this
sudo zpool online -e mypool da0

And check the space again…

#Now I have 5.3TB space available.
storage     18T     13T    5.3T    72%    /storage

FYI, here is the math behind the free space calculation. I had 4 x 1.5TB in a RAIDZ1 setup; after the upgrade, I have 4 x 3TB in a RAIDZ1 setup. The increase in space is roughly (3TB – 1.5TB) x (4 – 1) x 0.9 ≈ 4TB.

Enjoy the new space!

If you have spent too many hours on the resilvering process, consider the old-school way. My old-school method is nothing new, but it is rock solid, reliable, takes less time and, most importantly, loses no data. Yes, you are right: I back up the data to another server first, then I rebuild the ZFS pool on my main server and copy the data back. Typically, copying 10TB of data via the rsync daemon over a gigabit network takes about 3 days, so it isn't too bad. The only downside of this solution is the downtime, which was about 3 days in my case. If you go with the ZFS replacement route instead, the downtime is minimized, because the ZFS pool continues to work during the resilvering process.

Further read: How to Improve Rsync Performance

Hope my solutions help!

–Derrick


If you have set up a ZFS system, you may want to stress test it before putting it into a production environment. There are many different ways to stress test the system. The most common way is to fill the entire pool using dd. However, I think scrubbing the entire pool is the best way.

In case you are not familiar with scrubbing, it is basically a ZFS tool to test data integrity. The system will go through every single file and perform checksum calculations, parity checks, etc. While scrubbing the entire pool, the system will generate a lot of I/O traffic.

First, please make sure that your ZFS pool is filled with some data. Then we will scrub the system:

sudo zpool scrub mypool

Afterward, simply run the following command to check the status:

#sudo zpool status -v
  pool: storage
 state: ONLINE
  scan: scrub in progress since Sun Jan 26 19:51:03 2014
        36.6G scanned out of 14.4T at 128M/s, 32h38m to go
        0 repaired, 0.25% done

Depending on the size of your pool, it may take a few hours to a few days to finish the entire process.

So how is scrubbing related to stability? Let's take a look at the following example. Recently I set up a ZFS system based on 6 hard drives. During the initial setup, everything was fine; it gave no errors when loading the data. However, after I scrubbed the system, something bad happened. During the process, the system disconnected two hard drives, which made the entire pool unreadable (that's because RAIDZ1 can only tolerate one failed disk). I felt very lucky that it didn't happen in a production environment. Here is the result:

sudo zpool status
  pool: storage
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        storage                   UNAVAIL      0     0     0
          raidz1-0                UNAVAIL      0     0     0
            ada1                  ONLINE       0     0     0
            ada4                  ONLINE       0     0     0
            ada2                  ONLINE       0     0     0
            ada3                  ONLINE       0     0     0
            9977105323546742323   UNAVAIL      0     0     0  was /dev/ada1
            12612291712221009835  UNAVAIL      0     0     0  was /dev/ada0

After some investigation, I found that the error had nothing to do with the hard drives (such as bad sectors, bad cables, etc.). It turned out that it was related to bad memory. See? You never know which component in your ZFS system is bad until you stress test it.

Happy stress-testing your ZFS system.

–Derrick
