This article is part of my main ZFS tutorial: How to improve ZFS performance. That article covers everything you need to know. If you already have the basics down, or you just want to know how I push the I/O speed past 120 MB/s, you can skip that article and read this one.
Long story short, here is the result of my iostat:
sudo zpool iostat -v
              capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     7.85T  12.1T      1  1.08K  2.60K   126M   <--- 126 MB/s!
  raidz2    5.51T  8.99T      1    753  2.20K  86.9M
    ada0        -      -      0    142    102  14.5M
    ada1        -      -      0    143    102  14.5M
    ada3        -      -      0    146    307  14.5M
    ada4        -      -      0    145    409  14.5M
    ada5        -      -      0    144    204  14.5M
    ada6        -      -      0    143    204  14.5M
    ada7        -      -      0    172    409  14.5M
    ada8        -      -      0    171    511  14.5M
  raidz1     2.34T  3.10T      0    349    409  39.5M
    da0         -      -      0    169    204  13.2M
    da1         -      -      0    176      0  13.2M
    da2         -      -      0    168    102  13.2M
    da3         -      -      0    173    102  13.2M
----------  -----  -----  -----  -----  -----  -----
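If you want to watch these numbers update live while a transfer is running, give zpool iostat a refresh interval (the pool name matches the output above):

#Refresh the per-vdev statistics every 5 seconds
zpool iostat -v storage 5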
I measured the speed while transferring 10TB of data from one FreeBSD-based ZFS server to another FreeBSD-based ZFS server, over a consumer-level gigabit network. Basically, every component I used is consumer-level. The hard drives are standard PATA and SATA drives (i.e., not even a light-speed SSD in sight). The data I transferred is real data (rather than all zeros or ones generated through dd).
First of all, here are the settings I used. Both server and client have similar configurations.
sudo zpool history
History for 'storage':
2014-01-22.20:28:59 zpool create -f storage raidz2 /dev/ada0 /dev/ada1 /dev/ada3 /dev/ada4 /dev/ada5 /dev/ada6 /dev/ada7 /dev/ada8 raidz /dev/da0 /dev/da1 /dev/da2 /dev/da3
2014-01-22.20:29:07 zfs create storage/data
2014-01-22.20:29:15 zfs set compression=lz4 storage
2014-01-22.20:29:19 zfs set compression=lz4 storage/data
2014-01-22.20:30:19 zfs set atime=off storage
#where the ada* and da* devices are nothing more than standard SATA drives:
#ZFS Drives
#8 x 2TB standard SATA hard drives connected to the motherboard
#4 x 1.5TB standard SATA hard drives connected to a PCI-e RAID card
da0: 1430799MB (2930277168 512 byte sectors: 255H 63S/T 182401C)
da1: 1430799MB (2930277168 512 byte sectors: 255H 63S/T 182401C)
da2: 1430799MB (2930277168 512 byte sectors: 255H 63S/T 182401C)
da3: 1430799MB (2930277168 512 byte sectors: 255H 63S/T 182401C)
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 512bytes)
ada0: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada3: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada3: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada4: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada4: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada5: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada5: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada6: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada6: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada7: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada7: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada8: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada8: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
#System Drive
#A PATA 200GB drive from 2005
ada2: maxtor STM3200820A 3.AAE ATA-7 device
ada2: 100.000MB/s transfers (UDMA5, PIO 8192bytes)
ada2: 190782MB (390721968 512 byte sectors: 16H 63S/T 16383C)
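The drive details above come straight from the kernel boot messages. On FreeBSD you can pull them out with something like this (the grep pattern simply matches the ada/da device names used above):

#List the attached disks, or grep the boot messages for their probe lines
camcontrol devlist
dmesg | egrep '^(ada|da)[0-9]'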
Here are the corresponding hardware components:
#CPU
hw.machine: amd64
hw.model: Intel(R) Core(TM) i7 CPU 920 @ 2.67GHz
hw.ncpu: 8
hw.machine_arch: amd64
#Memory - 3 x 2GB
real memory = 6442450944 (6144 MB)
avail memory = 6144352256 (5859 MB)
#Network - Gigabit network on the motherboard / PCI-e
#If your gigabit network card sits on the old 32-bit PCI bus (~133 MB/s shared), that will be your bottleneck.
re0: realtek 8168/8111 B/C/CP/D/DP/E/F PCIe Gigabit Ethernet
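The CPU, memory, and NIC details above come from sysctl and the boot messages. It is also worth confirming that the NIC actually negotiated gigabit; re0 is the interface name from my dmesg output, so substitute your own:

#CPU and memory details
sysctl hw.machine hw.model hw.ncpu hw.machine_arch hw.physmem
#Confirm the link negotiated 1000baseT full-duplex
ifconfig re0 | grep media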
Other than that, I didn't really specify any special settings in my OS. Everything else is left at its default:
#uname -a
FreeBSD 9.2-RELEASE-p3 FreeBSD 9.2-RELEASE-p3 #0: Sat Jan 11 03:25:02 UTC 2014 [email protected]:/usr/obj/usr/src/sys/GENERIC amd64
The way I transferred the 10TB of data is through rsync and rsyncd. First of all, I set up rsync as a daemon on the server. Keep in mind that rsync running as a daemon (service) is not the same as plain rsync. If you want to know how I set up rsyncd, please visit How to Improve rsync Performance.
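For reference, a minimal rsyncd.conf on the server looks something like the sketch below. The module name, path, and uid here are placeholders for illustration; my actual setup is described in the rsync article.

# /usr/local/etc/rsyncd.conf -- minimal sketch, values are placeholders
uid = nobody
gid = nobody
use chroot = yes

[storage]
    path = /storage/data
    read only = yes

Then enable and start the daemon (assuming rsync was installed from the net/rsync port) by adding rsyncd_enable="YES" to /etc/rc.conf and running service rsyncd start.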
After you have set up rsyncd on the server, run the following on the client.
rsync -av server_ip::rsync_share_name /my_directory/
#Example
rsync -av 192.168.1.101::storage /mydata/
Notice that I didn't enable the compression option. That's because my files are already compressed (jpeg, zip, tar, 7z, etc.). If your file types are different, you may want to enable compression, e.g.,
rsync -avz server_name::rsync_share_name /my_directory/
Give it a try and see whether it improves the I/O speed.
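A quick way to decide is to time the same subset of the data with and without -z (the directory name below is just a placeholder):

#Compare transfer time with and without rsync compression
time rsync -av  192.168.1.101::storage/some_dir /tmp/test_plain/
time rsync -avz 192.168.1.101::storage/some_dir /tmp/test_z/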
Here are a few things I have learned that improve ZFS performance.
Keep the ZFS structure small and simple. For example, keep the number of disks in each vdev small. Previously, I set up one giant pool with a single 14-disk RAIDZ vdev. That is a bad idea: a RAIDZ vdev delivers roughly the random I/O performance of a single disk, so very wide vdevs do not scale. That's why I split my disks into two groups:
#Group 1 - RAIDZ2
#8 x 2TB standard SATA hard drives connected to the motherboard
#Group2 - RAIDZ
#4 x 1.5TB standard SATA hard drives connected to a PCI-e RAID card
zpool create -f storage raidz2 /dev/ada0 /dev/ada1 /dev/ada3 /dev/ada4 /dev/ada5 /dev/ada6 /dev/ada7 /dev/ada8 raidz /dev/da0 /dev/da1 /dev/da2 /dev/da3
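After creating the pool, it is easy to confirm that the disks really landed in two separate vdevs:

#The output should show one raidz2 group and one raidz1 group
zpool status storage
zpool list -v storage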
Keep the ZFS settings clean. Enable only the things you need, and disable the junk settings. In my case, I enable lz4 compression (the best speed-to-ratio trade-off) and disable access time updates:
2014-01-22.20:29:15 zfs set compression=lz4 storage
2014-01-22.20:30:19 zfs set atime=off storage
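You can verify that the settings took effect, and see how much compression is actually buying you, with zfs get (the dataset names match the pool above):

#Confirm the properties and check the achieved compression ratio
zfs get compression,atime,compressratio storage storage/data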
Keep the ZFS pool itself fresh. I see better performance on a brand-new pool than on one that is a few years old, even when both contain the same data; an aged copy-on-write pool tends to be more fragmented. Unfortunately, there is no easy way to defragment ZFS in place. The only way is to destroy the current pool, create a new one, and bring the data back. Typically, it takes about 2 days to transfer 10TB of data over a gigabit LAN, so spending about 5 days to rebuild your pool is really not too bad.
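The rebuild cycle itself is just a few commands (a sketch; other_server is a placeholder, and you must have a complete, verified copy of the data elsewhere before the destroy step, because it is irreversible):

#1. Make absolutely sure another machine holds a verified copy of the data!
#2. Destroy and recreate the pool with the same layout and settings
zpool destroy storage
zpool create -f storage raidz2 /dev/ada0 /dev/ada1 /dev/ada3 /dev/ada4 /dev/ada5 /dev/ada6 /dev/ada7 /dev/ada8 raidz /dev/da0 /dev/da1 /dev/da2 /dev/da3
zfs create storage/data
zfs set compression=lz4 storage
zfs set atime=off storage
#3. Pull the data back over rsyncd
rsync -av other_server::storage /storage/data/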
That's it. Again, if you want to learn more tricks, please visit How to improve ZFS performance for more information.
–Derrick