ZFS Compression: lz4 VS lzjb

ZFS offers a new compression method in the latest version: lz4. It claims to be better than lzjb. Since lzjb is already pretty good, I was curious to find out how well lz4 performs compared to lzjb. According to my tests, lz4 performs better than lzjb in terms of both space savings and I/O, but not by much.

lz4 VS lzjb: Space Saving

Long story short, here is what I did. I set up two servers with brand-new ZFS pools. One server is set to lzjb, and the other is set to lz4. Both servers have exactly the same hardware, software, operating system, etc. Both are loaded with the same set of data (10TB) via rsync and rsyncd. This is real data that I use every day, which includes office documents (PDF, Word, Excel), photos (RAW and JPEG), music (MP3), video (MPEG), zip archives, source code, binary applications, databases (MySQL, Redis), web server files (PHP, HTML, CSS), etc. Notice that some of these files are already compressed (such as JPEG, zip, etc.), and the file types are not evenly distributed (e.g., 40% of the files are JPEG, 25% are documents, 10% are zip, 5% are something else). I wanted the test to reflect a more realistic scenario.
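In outline, the setup looked something like this. The pool/dataset names and the rsync source are assumptions for illustration, not taken from the post; only the compression property differs between the two servers:

```shell
# Set the compression property BEFORE loading any data
# (pool/dataset names below are hypothetical).
zfs set compression=lzjb storage/data   # server A
zfs set compression=lz4  storage/data   # server B

# Load the identical 10TB data set onto both servers via rsyncd
rsync -a rsync://fileserver/data/ /storage/data/
```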

Also, all ZFS settings were set BEFORE loading the data. This ensures that all data on the same server share the same compression settings. Below is a summary of the used space. Keep in mind that the space-saving test is independent of the hardware, such as CPU type, memory, disk speed, etc. It is just like running the same command to compress the same set of files on two systems: we expect the results to be the same size, and the only difference will be the time spent compressing the data.

#ZFS with lzjb
Filesystem    512-blocks        Used     Avail Capacity  Mounted on
storage/data 23007987368 22996620466  11366902   100%    /storage/data

#Used space: 22996620466 blocks = 10965.6 GB
#ZFS with lz4
Filesystem    512-blocks        Used      Avail Capacity  Mounted on
storage/data 31527470077 22942284913 8585185164    73%    /storage/data

#Used space: 22942284913 blocks = 10939 GB

I found that for 10TB of data, the server with lzjb uses about 26GB more space than the one with lz4, which translates to roughly 0.24% (26GB out of 10,965GB). The difference is quite small and not very significant. However, when your ZFS pool is nearly full or you are too busy to upgrade your system, the extra saving may matter a lot.
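As a sanity check, the figures can be recomputed from the raw 512-byte block counts in the df output above:

```shell
# Difference in used space between the lzjb and lz4 pools,
# taken from the two "Used" columns above (512-byte blocks).
awk -v lzjb=22996620466 -v lz4=22942284913 'BEGIN {
  diff_gib = (lzjb - lz4) * 512 / (1024 ^ 3)   # blocks -> bytes -> GiB
  pct      = (lzjb - lz4) / lzjb * 100         # relative saving
  printf "lz4 saves %.1f GiB (%.2f%%)\n", diff_gib, pct
}'
```

This prints `lz4 saves 25.9 GiB (0.24%)`, matching the roughly 26GB / 0.24% quoted above.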

lz4 VS lzjb: Time Saving

I was also curious to know how much time I would save by switching from lzjb to lz4, so I did another simple test. First, I generated 1GB of random data and wrote it to the lzjb system. After that, I destroyed the zpool and rebuilt it with lz4. This eliminates other factors such as hardware, network traffic, etc., because both tests were done on the same system.
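For reference, the rebuild between the two runs would look roughly like this. The pool name and disk layout are assumptions, not taken from the post:

```shell
# Destroy the pool used for the lzjb run and recreate it for the lz4 run
# (device names are hypothetical; zpool destroy wipes the pool's data).
zpool destroy storage
zpool create storage raidz da0 da1 da2 da3
zfs create storage/data
zfs set compression=lz4 storage/data
```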

#ZFS with lzjb
time dd if=/dev/random of=/storage/data/file.out bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 23.725955 secs (44195313 bytes/sec)

real    0m24.144s
user    0m0.024s
sys     0m18.326s
#ZFS with lz4
time dd if=/dev/random of=/storage/data/file.out bs=1M count=1000
1000+0 records in
1000+0 records out
1048576000 bytes transferred in 22.589257 secs (46419234 bytes/sec)

real    0m22.802s
user    0m0.016s
sys     0m18.273s

P.S. The /dev/random in FreeBSD is nothing like the one in Linux. It is very fast.

By just upgrading from lzjb to lz4, the time is reduced from 24.144 to 22.802 seconds, which translates to a 5.6% improvement (1.342s out of 24.144s).
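The improvement figure comes straight from the two `real` times above:

```shell
# Relative speedup computed from the two wall-clock times above.
awk -v lzjb=24.144 -v lz4=22.802 'BEGIN {
  printf "saved %.3f s (%.1f%% of %.3f s)\n",
         lzjb - lz4, (lzjb - lz4) / lzjb * 100, lzjb
}'
```

This prints `saved 1.342 s (5.6% of 24.144 s)`.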

This result also agrees with my overall experience: the improvement is small and not noticeable. In fact, I barely noticed any performance improvement after switching from lzjb to lz4, in my case mainly transferring files between Windows and a FreeBSD-based ZFS system via Samba on a gigabit network. The I/O speeds are about the same. However, if you are talking about a busy server with lots of traffic, a 5% improvement will be something.

Anyway, here is my recommendation: if lz4 is available in your version of ZFS, use it. You can’t go wrong.

sudo zfs set compression=lz4 myzpool



7 Replies to “ZFS Compression: lz4 VS lzjb”

  1. Sinior

    > These data are real data that I use everyday, which includes documents, photos, music, video, zip, source codes, binary etc.

    Most of the documents mentioned in this test are already compressed, and they won’t be compressed any further by lzjb, lz4, or even gzip. Since none of the algorithms will compress them, the difference will be zero.
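    (A quick way to see this, with gzip standing in for the LZ family and purely made-up file names:)

```shell
# Compressible text shrinks a lot on the first pass; a second pass
# over the already-compressed output gains nothing.
yes "some compressible text" | head -c 1048576 > doc.txt
gzip -c doc.txt    > doc.txt.gz     # first pass: big gain
gzip -c doc.txt.gz > doc.txt.gz.gz  # second pass: no gain
wc -c doc.txt doc.txt.gz doc.txt.gz.gz
rm -f doc.txt doc.txt.gz doc.txt.gz.gz
```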

    Both lzjb & lz4 are also pretty fast at detecting this situation and skipping over it. Although lz4 has a significant advantage at this task, it is likely to be completely dwarfed by bandwidth issues.

    Another metric to look at is CPU usage: when bandwidth is the limiting factor, a faster algorithm will not improve the speed of an operation, but it will use less CPU, which can be important if that CPU is also needed for other tasks. This is an important difference for web servers and databases, for example.

    Bottom line: to see a storage difference, you need compressible data. The test set described in this post seems to have too little of it.

    Other comparison tests have been performed and published on the Internet. They mention something like a 10% storage gain for lz4 over lzjb on compressible data. A typical scenario is log files.

    Obviously, servers handling primarily log files are meant for different purposes than servers handling primarily video files. Since both cases exist, it’s important to state what kind of data a test uses, in order to avoid over-generalizing one result.

    • Derrick Post author

      Hello Sinior,

      Thanks for your feedback. In reality, it is rare that we put only one kind of file on a server. Usually the file types are mixed. For example, a production web server will contain source code (small, compressible), a database (medium to large, compressible), images (already compressed) and user attachments (it depends). Notice that they are not evenly distributed. Let’s take one of the production servers I run at work as an example.

      The server contains about 10GB of data. The source code and libraries (compressible) use roughly 20MB, and images take about 200MB. The database takes up about 1GB, and the application logs take about another 1GB. The rest are user attachments, which total over 8GB. As you can see from this scenario, it is difficult to create an ideal environment (i.e., one where all file types are evenly distributed) in reality.

      What do you think?


  2. Sinior

    You are perfectly right.
    It completely depends on the kind of application the server is providing.

    Whenever user data storage is part of the formula, there is a good chance that “high-bandwidth data” gets a pretty large share: typically video first, photos second, and music a distant third, with compressed packages sometimes taking second or third place. Combined, they may well represent over 90% of the total data, which means there is very little left to compress.

    But application servers can provide quite different services without storing user data. To list a few examples: packet analyzers, science experiment probes, purchase accounting, code repositories, etc., but the possibilities are limitless (and boring). Let’s just say that in all these cases, data tend to be much more compressible, and scenarios with compression ratios above 50% are commonplace.

    In the above example, source code is compressible (but very small), log files are very compressible, the database may be compressible (though that depends heavily on the type of data and the storage strategy), and the rest, the user attachments, are likely to be already compressed (images, photos, compressed archives, etc.).

    As soon as you have some “user data storage” in the formula, it tends to tilt heavily toward the “already compressed data” side.
    For all other cases, the conclusion can be radically different.

  3. Mattyj

    Sorry, I couldn’t find a contact link. There’s a typo in the sentence just after the two yellow text boxes, regarding speed/time:

    “By just upgrading from lzjb to lz4, the time is reduced from 24.144 to 12.802 seconds, which translate to 5.5% (1.342s out of 24.144s) improvement.”

    12.802 should be 22.802.

  4. Zikey

    LZJB is pretty good already? Maybe for performance and simplicity, but its compression ratios are easily beaten by alternatives like LZP, LZO and Snappy, and perhaps others. Even LZSS from 1982 offers higher compression (although maybe it’s not as fast).

    Not that I dislike LZJB, but I feel there is definitely room for improvement in that class. The code of LZ4 is larger and more complex, so it’s hard to compare the two. It may be faster and have a better compression ratio, but it is not a simple algorithm that fits in under 100 lines of code like LZJB. For a library many times larger, LZ4 is not that many times better.

  5. Shavkat


    You have interesting posts about ZFS, but this one is kind of flawed. In addition to the comments made above, I would draw your attention to the following points.

    If you compare the filesystem sizes (given in sectors), you have 10TB in one case and 15TB in the other. First, this contradicts the statement that settings and hardware are all the same in both cases; they are not. More importantly, the usage is almost 100% in one case (very bad for ZFS) and 73% in the other (fairly good according to the 80% rule of thumb). I don’t think these factors necessarily affect the compressibility result, but it is still not a good test.

    Next, you compare the timing of two different compression algorithms using random data, which is incompressible by definition. What ZFS does in this case is write the data uncompressed. That means you simply have two timings of ZFS writing uncompressed data. It says nothing about the timing of the ZFS compression algorithms: they were not really involved, apart from the short time ZFS needs to recognize that the data cannot be compressed and should be written uncompressed. If you compare the sys times (the actual processor time consumed by the OS/ZFS), you see practically the same value in both cases. The 5% difference appears only in the real total time, and there might be many different OS/hardware reasons for it. At the very least, you have two different pool layouts (see my point above).
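    (Sketching the point with gzip as a rough proxy; file names are invented:)

```shell
# Random bytes are incompressible; repetitive text is highly compressible.
head -c 1048576 /dev/urandom > rand.bin
yes "GET /index.html HTTP/1.1 200" | head -c 1048576 > access.log
gzip -c rand.bin   > rand.bin.gz     # stays around 1 MiB (slightly larger)
gzip -c access.log > access.log.gz   # shrinks to a few KiB
wc -c rand.bin.gz access.log.gz
rm -f rand.bin access.log rand.bin.gz access.log.gz
```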

