[FreeBSD] ZFS FAULTED / kernel: (ada0:ahcich4:0:0:0): lost device

I am so happy that I finally solved a problem today. It had been plaguing my server farm for a few months, and in all that time I had absolutely no idea what was causing it. Here is my story:

I built a file server using FreeBSD and ZFS on consumer-grade hardware. Basically, it is a desktop computer with multiple hard drives connected to the SATA ports on the motherboard. It is a very simple setup.

After I set up a ZFS pool, I started loading data and stress testing the system. Within about 15 minutes, the pool went into a faulted state:

sudo zpool status -v
  pool: storage
 state: FAULTED
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
config:

        NAME        STATE     READ WRITE CKSUM
        storage     ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            ada4    ONLINE       0     0     0
            ada0    REMOVED      0     0     0 
            ada2    ONLINE       0     0     0
            ada5    ONLINE       0     0     0
            ada3    ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

In short, a drive went missing. So I checked dmesg:

Oct 17 20:24:00 kernel: (ada0:ahcich4:0:0:0): lost device
Oct 17 20:24:00 kernel: (ada0:ahcich4:0:0:0): removing device entry
Oct 19 20:42:14 kernel: (ada0:ahcich3:0:0:0): lost device
Oct 19 23:19:18 kernel: (ada0:ahcich4:0:0:0): lost device
Oct 19 23:19:18 kernel: (ada0:ahcich4:0:0:0): removing device entry
Oct 19 23:19:18 kernel: (ada0:ahcich4:0:0:0): lost device
Oct 19 23:19:18 kernel: (ada0:ahcich4:0:0:0): removing device entry
Oct 19 23:19:18 kernel: (ada0:ahcich4:0:0:0): lost device
Oct 19 23:19:18 kernel: (ada0:ahcich4:0:0:0): removing device entry
Oct 19 23:19:18 kernel: (ada0:ahcich4:0:0:0): lost device
Oct 19 23:19:18 kernel: (ada0:ahcich4:0:0:0): removing device entry
Oct 19 23:19:18 kernel: (ada0:ahcich4:0:0:0): lost device
Oct 19 23:19:18 kernel: (ada0:ahcich4:0:0:0): removing device entry
Oct 19 23:33:02 kernel: (ada0:ahcich4:0:0:0): Synchronize cache failed
Oct 20 22:53:40 kernel: (ada0:ahcich4:0:0:0): WRITE_FPDMA_QUEUED. ACB: 61 40 30 af 4e 40 7a 00 00 00 00 00
Oct 20 22:53:40 kernel: (ada0:ahcich4:0:0:0): CAM status: ATA Status Error
Oct 20 22:53:40 kernel: (ada0:ahcich4:0:0:0): ATA status: 61 (DRDY DF ERR), error: 04 (ABRT )
Oct 20 22:53:40 kernel: (ada0:ahcich4:0:0:0): RES: 61 04 00 00 00 40 00 00 00 00 00
Oct 20 22:53:40 kernel: (ada0:ahcich4:0:0:0): Retrying command

I also verified that the device node still existed in /dev:

ls -l /dev/ada0

crw-r-----  1 root  operator   0x6f Oct 18 22:10 /dev/ada0

Interestingly, the device node was still available. This was very confusing: the kernel could see the hardware (with some errors), yet ZFS could not. So I worked through the following tests:

  • Rebooting
  • Shutting down completely, then powering back on
  • Replacing the SATA cables
  • Connecting the hard drive to a different SATA port
  • Replacing the hard drive with a new, manufacturer-certified one
  • Replacing the motherboard
  • Replacing the hard drive power cable
  • Replacing the power supply
  • Updating the motherboard firmware

Unfortunately, none of these fixed the problem. I also checked the S.M.A.R.T. status of the hard drives but didn't see any issues. One thing I noticed was that the error was not consistent: it happened randomly on different SATA ports and hard drives, across three different motherboards, which made no sense to me. Out of desperation, I connected the hard drives through a USB port instead. Guess what: my FreeBSD box stopped complaining.
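For reference, the S.M.A.R.T. check I mentioned can be done with smartmontools (sysutils/smartmontools in the FreeBSD ports tree); the device name below is just an example:

#Install smartmontools first
sudo pkg install smartmontools

#Print the full S.M.A.R.T. report for one drive
sudo smartctl -a /dev/ada0

#Run a short self-test, then read the result
sudo smartctl -t short /dev/ada0
sudo smartctl -l selftest /dev/ada0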

So I took a different approach. Using the same hardware setup (hard drives connected via SATA, not USB), I installed Windows. The system was perfectly stable; I didn't experience any issues at all. So I believed the problem had nothing to do with the hardware. It had to be a software configuration issue.

Finally, I decided to try one last thing, something I hadn't paid attention to before: the BIOS settings.

AHCI/IDE

Typically there is a BIOS setting that controls how the motherboard talks to the hard drives: IDE or AHCI. I usually stick with the defaults, and on my motherboard the default was IDE. After I changed the setting to AHCI, the problem was gone. Yes, it was gone, and my headache was gone too.
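If you want to confirm that FreeBSD is actually driving the controller in AHCI mode, the boot log and the CAM device list will show it (device names will vary):

#The AHCI driver and its channels (ahcichX) should appear in the boot log
dmesg | grep -i ahci

#List the attached drives and their bus assignments
camcontrol devlist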

That was easy!

–Derrick


How to Improve rsync Performance

I need to transfer 10TB of data from one machine to another. Those 10TB of files live on a large RAID that spans 7 disks, and the target machine has another large RAID that spans 12 disks. Copying the files locally is not practical, so I decided to copy them over the LAN.

Four options came to mind: scp, rsync, rsyncd (rsync as a daemon) and netcat.

scp

scp is handy and easy to use, but it comes with two disadvantages: it is slow and it is not fault-tolerant. Because scp encrypts everything, you pay for the security with protocol overhead and extra CPU time for the encryption. And if the transfer is interrupted, there is no easy way to resume the process other than transferring everything again. Here are some example commands:

#Source machine
#Typical speed is about 20 to 30MB/s
scp -r /data target_machine:/data

#Or you can enable the compression on the fly
#If your data is already compressed, you may see little or no improvement, or even a slowdown
scp -rC /data target_machine:/data

rsync

rsync is similar to scp. It encrypts the data in transit (via SSH), and it can transfer only the files that have changed, which reduces the amount of data sent. However, it has a few disadvantages: a long decision time, encryption overhead, and extra computational cost (data comparison, encryption and decryption, etc.). For example, when I use rsync to transfer 10TB of files to a machine whose target directory is blank, it can easily take 5 hours to determine which files need to be transferred before the actual data transfer even begins.

#Run on the target machine
rsync -avzr -e ssh --delete-after source_machine:/data/ /data/

#Use a less secure but faster cipher to speed up the process
#(Note: blowfish and arcfour have been removed from recent OpenSSH releases)
rsync -avzr --rsh="ssh -c blowfish" --delete-after source_machine:/data/ /data/

#Use an even weaker cipher to get the top speed
rsync -avzr --rsh="ssh -c arcfour" --delete-after source_machine:/data/ /data/

#By default, rsync updates changed files with its delta-transfer algorithm,
#which checksums blocks of each file. On a fast LAN it is usually cheaper
#to skip that and send files whole:
rsync -avzr --rsh="ssh -c arcfour" --delete-after --whole-file source_machine:/data/ /data/

Anyway, no matter what you do, the top speed of rsync on a consumer-grade gigabit network is around 45MB/s, and the average is around 25-35MB/s. Keep in mind that these numbers do not include the decision time, which can be a few hours.
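On the plus side, rsync is resumable. With the --partial flag it also keeps partially transferred files, so a rerun can pick up a large file where it left off instead of starting over. A minimal sketch:

#Keep partial files on interruption and show per-file progress;
#rerunning the same command resumes the transfer
rsync -avr --partial --progress -e ssh source_machine:/data/ /data/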

rsyncd (rsync as a daemon)

Thanks to a reader's comment, I got a chance to investigate running rsync as a daemon. The idea is similar to plain rsync, except that the server runs rsync as a service and "exports" the directories you choose (e.g., /usr/ports) to the clients. Because the clients speak the rsync protocol directly to the daemon, with no SSH layer in between, both the decision time and the transfer itself are faster. Here is how to set up an rsync server on FreeBSD:

sudo nano /usr/local/etc/rsyncd.conf

And this is my configuration file:

pid file = /var/run/rsyncd.pid

#Notice that I use derrick here instead of a system user such as nobody.
#That's because nobody does not have permission to access the path, i.e., /data/.
#Either make the source directory readable by "nobody", or change the daemon user.
uid = derrick
gid = derrick
use chroot = no
max connections = 4
syslog facility = local5

[mydata]
   path = /data/
   comment = data
Don't forget to include the following in /etc/rc.conf, so that the service will be started automatically.

rsyncd_enable="YES"
#Let's start the rsync service:

sudo /usr/local/etc/rc.d/rsyncd start
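To verify that the daemon actually came up, check that it is listening on the standard rsync port (873), or ask it to list its modules from any client:

#rsyncd listens on TCP port 873 by default
sockstat -4 -l | grep 873

#A bare double-colon asks the server to list its exported modules
rsync myserver::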

To pull the files from the server, run the following on the client:

rsync -av myserver::mydata /data/

#Or you can enable compression
rsync -avz myserver::mydata /data/
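Keep in mind that the module above is open to anyone who can reach port 873. If that is a concern, rsyncd supports simple username/password authentication; here is a minimal sketch (the user name, password and secrets path are just examples):

#Add to the [mydata] module in rsyncd.conf:
#   auth users = derrick
#   secrets file = /usr/local/etc/rsyncd.secrets

#The secrets file holds user:password pairs and must not be world-readable
echo "derrick:mypassword" | sudo tee /usr/local/etc/rsyncd.secrets
sudo chmod 600 /usr/local/etc/rsyncd.secrets

#The client then connects as that user
rsync -av derrick@myserver::mydata /data/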

To my surprise, it works much better than rsync over SSH. Here are some numbers I collected while transferring 10TB of files from ZFS to ZFS:

Bandwidth measured on the client machine: 70MB/s

zpool IO speed on the client side: 75MB/s

P.S. Initially, the speed was about 45-60MB/s; after I tweaked my zpool, the top speed reached 75-80MB/s. (The zpool tuning is covered in a separate post.)

I also noticed that the decision time is much shorter than with plain rsync, and the process is much more stable. With rsync over SSH, long transfers would occasionally die with errors like the following; with the daemon, I had zero interruptions:

rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at io.c(521) [receiver=3.1.0]
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(632) [generator=3.1.0]
rsync: [receiver] write error: Broken pipe (32)

netcat

netcat is similar to cat, except that it works at the network level. I decided to use netcat for the initial bulk transfer; if it gets interrupted, I can let rsync take over and finish the job. Netcat does not encrypt the data, so the overhead is very small. If you are transferring files within a local network and you don't care about encryption, netcat is a perfect choice.

Netcat has one disadvantage: it moves a single byte stream, not a directory tree. That doesn't mean you need to run netcat once per file; instead, we can tar the directory on the fly, feed the stream to netcat, and untar it at the receiving end. As long as we do not compress the stream, the CPU usage stays small.

#Open two terminals, one for the source and one for the target machine.

#On the target machine:
#Go to the destination directory, e.g.,
cd /data

#Pick a port that is not in use, e.g., 9999, and listen on it:
nc -l 9999 | tar xpvf -

#On the source machine:
#Go to the source directory, e.g.,
cd /data

#Send everything to the same port on the target machine:
tar -cf - . | nc target_machine 9999

Unlike rsync, the transfer starts right away, and the maximum speed is around 45 to 60MB/s on a gigabit network.
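If you want to watch the throughput while the transfer runs, you can splice pv (sysutils/pv) into the pipe; and if your data compresses well and you have CPU cycles to spare, add z to tar on both ends. Both tweaks are optional:

#On the target machine: monitor incoming bytes with pv
nc -l 9999 | pv | tar xpvf -

#Compressed variant, source machine (more CPU, less bandwidth):
tar -czf - . | nc target_machine 9999

#...and the matching command on the target machine:
nc -l 9999 | tar xzpvf -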

Conclusion

Candidate   Top speed (w/o compression)   Top speed (w/ compression)   Resume   Stability   Start
scp         40MB/s                        25MB/s                       No       Low         Instant
rsync       25MB/s                        50MB/s                       Yes      Medium      Long preparation
rsyncd      30MB/s                        70MB/s                       Yes      High        Short preparation
netcat      60MB/s (tar w/o -z)           40MB/s (tar w/ -z)           No       Very high   Instant

–Derrick


SFTP Without SSH Shell Access

I got an interesting question today. A client needs to access our server via SFTP, but they want to disable access to the SSH shell. Assuming that you are fully aware of the potential issues of doing this, such as the security risks, here is how to do it:

#Create a user account as usual, e.g.,
sudo adduser sftpuser

#Edit the User Password Profile
sudo vipw


#vipw opens the password file in vi; type i to enter insert mode

#Replace the user's shell with the sftp-server binary for your OS:
Linux: /usr/libexec/openssh/sftp-server
FreeBSD: /usr/libexec/sftp-server


#Your line will look something like this:
#Linux:
sftpuser:x:1001:1001::/home/sftpuser:/usr/libexec/openssh/sftp-server

#FreeBSD:
sftpuser:$XX3edc8989Ra.:1001:1001::0:0:SFTP User:/home/sftpuser:/usr/libexec/sftp-server

#Type :wq to save and quit.

You may also need to add the sftp-server binary to your list of valid login shells, i.e.,

sudo nano /etc/shells

#Include the following (Linux):
/usr/libexec/openssh/sftp-server

#Include the following (FreeBSD):
/usr/libexec/sftp-server
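As a quick sanity check from another machine, SFTP should work while an interactive SSH login should not (the user and host names are just examples):

#This should succeed and drop you at an sftp> prompt
sftp sftpuser@myserver

#This should not give you a usable shell, since the login "shell" is sftp-server
ssh sftpuser@myserver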

That’s it!

–Derrick
