[FreeBSD]PHP does not work after upgrading Apache to 2.2.27

I upgraded Apache to 2.2.27 on my FreeBSD box via portmaster. The upgrade itself went very smoothly. Afterward, however, Apache no longer rendered PHP pages correctly. In other words, it displayed the source code of the PHP files instead of executing them.

Before you start doing anything, please make sure that your website is not accessible to the public. Most web applications include sensitive information such as database passwords in the source code, and that source becomes publicly readable when the PHP engine fails. You may want to restrict public access during the fix. The easiest way is to set up a .htaccess file and restrict access by IP address.
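
For example, a minimal .htaccess for Apache 2.2 might look like this (the address below is a placeholder; allow your own IP instead):

#Deny everyone except my own IP (Apache 2.2 syntax)
Order deny,allow
Deny from all
Allow from 1.2.3.4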

Let’s come back to the fix. For some reason, the Apache/PHP port maintainers decided to have some fun with FreeBSD sysadmins, apparently assuming we have plenty of spare time. Here is what changed:

Originally, we installed Apache first and then PHP. During the PHP installation, we picked an option so that PHP would build the Apache module (the PHP engine used by Apache). In the recent release, that module was removed from the standard PHP port. So if you simply use portmaster to upgrade your PHP (in my case, PHP 5.4 and 5.5), the module goes missing. That’s why Apache fails to render the PHP files.

Here is what you will need to do. I suggest you finish reading the whole procedure before starting the fix. Trust me, it may save you a couple of hours.

First, make sure that Apache is working fine.

If you are using PHP 5.4, make sure that you install this port: /usr/ports/www/mod_php5. If you are using PHP 5.5, install this one instead: /usr/ports/www/mod_php55. Make sure that you select the ZTS option. You can do that by running:

sudo make config
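
For example, the full sequence for PHP 5.5 might look like this (a sketch; use /usr/ports/www/mod_php5 instead if you are on PHP 5.4):

cd /usr/ports/www/mod_php55
sudo make config          #select the ZTS option
sudo make install clean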

Notice that installing this port will reinstall the PHP package (/usr/ports/lang/php5 or /usr/ports/lang/php55). Make sure that the ZTS option is selected in the PHP package as well.

Now restart the Apache server. That should make Apache render the PHP files instead of dumping their source:

sudo /usr/local/etc/rc.d/apache stop
sudo /usr/local/etc/rc.d/apache start

Test the PHP files again. If you see rendered output instead of the PHP source code, congratulations! You are 20% done.

Now go back to the command line and run the following:

#List the PHP extensions installed by PHP 
php -m

#Check whether the PHP extensions load properly (loading errors show up as warnings)
php -v

For some reason, many extensions went missing during the reinstallation, such as session, json, etc. Let’s install them back:

#For PHP 5.4
cd /usr/ports/lang/php5-extensions

#For PHP 5.5
cd /usr/ports/lang/php55-extensions


sudo make reinstall clean

Don’t forget the PECL and related packages as well. After that, restart your Apache server and clean up the PHP extension configuration:

sudo nano /usr/local/etc/php/extensions.ini

You will see a mess. Clean up the duplicated extension entries and run php -m and php -v again.

During the fix, I noticed that everything worked fine except the MySQL package. Interestingly, when I ran php -m, mysql was listed. However, when I ran phpinfo(), it was missing. I found that this problem only happens with PHP 5.4, not PHP 5.5. Therefore, I decided to remove PHP 5.4 and load PHP 5.5 instead.

Simply repeat the installation and restart the server. If possible, try to reboot the machine.

So here is the summary:

  1. Remove PHP, the extensions, and the PECL packages (sudo make deinstall).
  2. Reinstall them with the ZTS option enabled, starting with the mod_php port.
  3. Restart Apache (sudo /usr/local/etc/rc.d/apache stop; sudo /usr/local/etc/rc.d/apache start).
  4. Verify the installed extensions with php -m, php -v, and phpinfo().
  5. Verify the result by loading the pages on the web.
  6. If your page fails, find out which extension is missing by checking the error log, typically found under /var/log/ (see the example below).
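
For example (a sketch; the exact log file name depends on your Apache configuration):

tail -f /var/log/httpd-error.log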

Hope my solutions help!

–Derrick


install: /usr/ports/…/doc/: Inappropriate file type or format

While upgrading both of my FreeBSD servers (9 and 10) today, I got the following error:

install: /usr/ports/…/doc/: Inappropriate file type or format

Actually, here is the complete error message:

===>  Staging for memcached-1.4.17_1
===>   memcached-1.4.17_1 depends on shared library: libevent-2.0.so - found
===>   Generating temporary packing list
/usr/bin/make  install-recursive
Making install in doc
/usr/bin/make  install-am
test -z "/usr/local/man/man1" || /bin/mkdir -p "/usr/ports/databases/memcached/work/stage/usr/local/man/man1"
 install  -o root -g wheel -m 444 memcached.1 '/usr/ports/databases/memcached/work/stage/usr/local/man/man1'
test -z "/usr/local/bin" || /bin/mkdir -p "/usr/ports/databases/memcached/work/stage/usr/local/bin"
  install  -s -o root -g wheel -m 555 memcached '/usr/ports/databases/memcached/work/stage/usr/local/bin'
test -z "/usr/local/include/memcached" || /bin/mkdir -p "/usr/ports/databases/memcached/work/stage/usr/local/include/memcached"
 install  -o root -g wheel -m 444 protocol_binary.h '/usr/ports/databases/memcached/work/stage/usr/local/include/memcached'
install  -o root -g wheel -m 555 /usr/ports/databases/memcached/work/memcached-1.4.17/scripts/memcached-tool /usr/ports/databases/memcached/work/stage/usr/local/bin
install  -o root -g wheel -m 444 /usr/ports/databases/memcached/work/memcached-1.4.17/doc/ /usr/ports/databases/memcached/work/stage/usr/local/man/man1
install: /usr/ports/databases/memcached/work/memcached-1.4.17/doc/: Inappropriate file type or format
*** [post-install] Error code 71

Stop in /usr/ports/databases/memcached.
*** [install] Error code 1

Stop in /usr/ports/databases/memcached.

Initially, I thought the problem was caused by a syntax error in the code, so I tried the following:

#Shut down the server
sudo /usr/local/etc/rc.d/memcached stop

#Remove Memcached from the system
sudo make deinstall

#Clean up the package
sudo make clean

#Compile the package
sudo make

#Install the package
sudo make install

Unfortunately, it still gave the same error message. Since the error seemed to have something to do with the documentation, I decided to disable the documentation option in the port configuration, i.e.,

#Enter the package setup
sudo make config

#Uncheck the "DOCS" option.

Unfortunately, it still gave the same response. To investigate the cause, I copied the failing command and re-ran it by hand:

install  -o root -g wheel -m 444 /usr/ports/databases/memcached/work/memcached-1.4.17/doc/ /usr/ports/databases/memcached/work/stage/usr/local/man/man1

which returned a similar error message:

install: /usr/ports/databases/memcached/work/memcached-1.4.17/doc/: No such file or directory

So I came up with an idea: since I don’t really need the documentation, why not skip installing it altogether?

cd /usr/ports/databases/memcached

sudo nano Makefile

Change the following from:

post-install:
        ${INSTALL_SCRIPT} ${WRKSRC}/scripts/memcached-tool ${STAGEDIR}${PREFIX}/bin
        ${INSTALL_MAN} ${WRKSRC}/doc/${MAN1} ${STAGEDIR}${MAN1PREFIX}/man/man1
        @${MKDIR} ${STAGEDIR}${DOCSDIR}

To:

post-install:
        ${INSTALL_SCRIPT} ${WRKSRC}/scripts/memcached-tool ${STAGEDIR}${PREFIX}/bin
#        ${INSTALL_MAN} ${WRKSRC}/doc/${MAN1} ${STAGEDIR}${MAN1PREFIX}/man/man1
        @${MKDIR} ${STAGEDIR}${DOCSDIR}

That’s right, I just want to comment out the man/documentation installation. If you prefer not to edit the file by hand, the same change can be made with a one-liner, as shown below.
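
This assumes your Makefile revision still matches the pattern below; review the result before building:

#Comment out every line that calls INSTALL_MAN
sudo sed -i '' '/INSTALL_MAN/s/^/#/' /usr/ports/databases/memcached/Makefile

Now, let’s try to reinstall the package again.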

sudo make install

Guess what? It worked! Have fun!

–Derrick


How to Stress Test ZFS System

If you have set up a ZFS system, you may want to stress test it before putting it in a production environment. There are many ways to do this. The most common is to fill the entire pool using dd. However, I think scrubbing the entire pool is the best approach.
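
If you want to try the dd approach first, something like this works (a sketch; the pool name and size are examples, and random data keeps compression from skewing the result):

#Write ~100GB of incompressible test data into the pool
sudo dd if=/dev/urandom of=/mypool/testfile bs=1m count=102400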

In case you are not familiar with scrubbing: it is a ZFS operation that verifies data integrity. The system goes through every single block and performs checksum calculations, parity checks, etc. Scrubbing the entire pool generates a lot of I/O traffic.

First, please make sure that your ZFS pool is filled with some data. Then we will scrub it:

sudo zpool scrub mypool

Afterward, simply run the following command to check the status:

#sudo zpool status -v
  pool: storage
 state: ONLINE
  scan: scrub in progress since Sun Jan 26 19:51:03 2014
        36.6G scanned out of 14.4T at 128M/s, 32h38m to go
        0 repaired, 0.25% done

Depending on the size of your pool, it may take a few hours to a few days to finish the entire process.
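
If you want to keep an eye on the progress, a simple loop will do (a sketch):

#Print the scrub progress every 10 minutes
while true; do sudo zpool status -v | grep scanned; sleep 600; done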

So how is scrubbing related to stability? Let’s look at the following example. Recently I set up a ZFS system based on six hard drives. During the initial setup, everything was fine; loading the data produced no errors at all. However, after I scrubbed the system, something bad happened. During the process, the system disconnected two hard drives, which made the entire pool unreadable (RAIDZ1 can only tolerate a single disk failure). I felt very lucky that this didn’t happen in a production environment. Here is the result:

sudo zpool status
  pool: storage
 state: UNAVAIL
status: One or more devices could not be opened.  There are insufficient
        replicas for the pool to continue functioning.
action: Attach the missing device and online it using 'zpool online'.
   see: http://illumos.org/msg/ZFS-8000-3C
  scan: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        storage                   UNAVAIL      0     0     0
          raidz1-0                UNAVAIL      0     0     0
            ada1                  ONLINE       0     0     0
            ada4                  ONLINE       0     0     0
            ada2                  ONLINE       0     0     0
            ada3                  ONLINE       0     0     0
            9977105323546742323   UNAVAIL      0     0     0  was /dev/ada1
            12612291712221009835  UNAVAIL      0     0     0  was /dev/ada0

After some investigation, I found that the error had nothing to do with the hard drives (bad sectors, bad cables, etc.). It turned out to be bad memory. See? You never know which component in your ZFS system is bad until you stress test it.

Happy stress-testing your ZFS system.

–Derrick


ZFS Performance Boost/Improvement: How I push the I/O speed to 126MBps

This article is part of my main ZFS tutorial: How to improve ZFS performance. That article covers everything you need. If you already have the basic knowledge, or you just want to know how I pushed the I/O speed to 120+ MBps, you can skip that article and read this one.

Long story short, here is the result of my iostat:

sudo zpool iostat -v

               capacity     operations    bandwidth
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
storage     7.85T  12.1T      1  1.08K  2.60K   126M  <--- 126 MBps!
  raidz2    5.51T  8.99T      1    753  2.20K  86.9M
    ada0        -      -      0    142    102  14.5M
    ada1        -      -      0    143    102  14.5M
    ada3        -      -      0    146    307  14.5M
    ada4        -      -      0    145    409  14.5M
    ada5        -      -      0    144    204  14.5M
    ada6        -      -      0    143    204  14.5M
    ada7        -      -      0    172    409  14.5M
    ada8        -      -      0    171    511  14.5M
  raidz1    2.34T  3.10T      0    349    409  39.5M
    da0         -      -      0    169    204  13.2M
    da1         -      -      0    176      0  13.2M
    da2         -      -      0    168    102  13.2M
    da3         -      -      0    173    102  13.2M
----------  -----  -----  -----  -----  -----  -----

I measured the speed while transferring 10TB of data from one FreeBSD-based ZFS server to another over a consumer-grade gigabit network. Basically, every component I used is consumer-grade. The hard drives are standard PATA and SATA drives (not even blazing-fast SSDs). The data I transferred was real data (rather than all zeros or ones generated with dd).

First of all, here are the settings I used. Both server and client have similar configurations.

sudo zpool history
History for 'storage':
2014-01-22.20:28:59 zpool create -f storage raidz2 /dev/ada0 /dev/ada1 /dev/ada3 /dev/ada4 /dev/ada5 /dev/ada6 /dev/ada7 /dev/ada8 raidz /dev/da0 /dev/da1 /dev/da2 /dev/da3
2014-01-22.20:29:07 zfs create storage/data
2014-01-22.20:29:15 zfs set compression=lz4 storage
2014-01-22.20:29:19 zfs set compression=lz4 storage/data
2014-01-22.20:30:19 zfs set atime=off storage
#where the ad* and da* are nothing more than standard SATA drives:
#ZFS Drives
#8 x 2TB standard SATA hard drives connected to the motherboard
#4 x 1.5TB standard SATA hard drives connected to a PCI-e RAID card

da0: 1430799MB (2930277168 512 byte sectors: 255H 63S/T 182401C)
da1: 1430799MB (2930277168 512 byte sectors: 255H 63S/T 182401C)
da2: 1430799MB (2930277168 512 byte sectors: 255H 63S/T 182401C)
da3: 1430799MB (2930277168 512 byte sectors: 255H 63S/T 182401C)
ada0: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 512bytes)
ada0: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada1: 300.000MB/s transfers (SATA 2.x, UDMA6, PIO 8192bytes)
ada1: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada3: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada3: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada4: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada4: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada5: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada5: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada6: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada6: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada7: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada7: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)
ada8: 300.000MB/s transfers (SATA 2.x, UDMA5, PIO 8192bytes)
ada8: 1907729MB (3907029168 512 byte sectors: 16H 63S/T 16383C)


#System Drive
#A PATA 200GB drive from 2005
ada2: maxtor STM3200820A 3.AAE ATA-7 device
ada2: 100.000MB/s transfers (UDMA5, PIO 8192bytes)
ada2: 190782MB (390721968 512 byte sectors: 16H 63S/T 16383C)

Here are the corresponding hardware components:

#CPU
hw.machine: amd64
hw.model: Intel(R) Core(TM) i7 CPU         920  @ 2.67GHz
hw.ncpu: 8
hw.machine_arch: amd64


#Memory - 3 x 2GB
real memory  = 6442450944 (6144 MB)
avail memory = 6144352256 (5859 MB)

#Network - Gigabit network on the motherboard / PCI-e
#If your gigabit network card is PCI, that will be your bottleneck.
re0: realtek 8168/8111 B/C/CP/D/DP/E/F PCIe Gigabit Ethernet

Other than that, I didn’t specify any special settings in my OS. Everything else is at the default:

#uname -a
FreeBSD 9.2-RELEASE-p3 FreeBSD 9.2-RELEASE-p3 #0: Sat Jan 11 03:25:02 UTC 2014     [email protected]:/usr/obj/usr/src/sys/GENERIC  amd64

I transferred the 10TB of data using rsync and rsyncd. First of all, I set up rsync as a daemon on the server. Keep in mind that rsync as a daemon (service) is not the same as plain rsync. If you want to know how I set up rsyncd, please visit How to Improve rsync Performance.

After you have set up rsyncd on the server, run the following on the client:

rsync -av server_ip::rsync_share_name /my_directory/

#Example
rsync -av 192.168.1.101::storage /mydata/

Notice that I didn’t enable the compression option. That’s because my files are already compressed (jpeg, zip, tar, 7z, etc.). If your file types are different, you may want to enable compression, i.e.,

rsync -avz server_name::rsync_share_name /my_directory/

Give it a try and see whether it improves the I/O speed.
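
For reference, a minimal rsyncd.conf on the server might look like this (a sketch; the module name, path, and options are examples, and my complete setup is in the rsync article mentioned above):

#/usr/local/etc/rsyncd.conf
uid = nobody
gid = nobody
use chroot = no

[storage]
    path = /storage/data
    read only = yes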

Here are a few things I have learned that improve ZFS performance.

Keep the ZFS structure small and simple. For example, keep the number of disks in each vdev small. Previously, I set up one giant pool with a 14-disk RAIDZ in a single vdev. That was a bad idea: there is a practical maximum number of disks per RAIDZ vdev, and performance degrades beyond it. That’s why I split my disks into two groups:

#Group 1 - RAIDZ2
#8 x 2TB standard SATA hard drives connected to the motherboard

#Group 2 - RAIDZ
#4 x 1.5TB standard SATA hard drives connected to a PCI-e RAID card

zpool create -f storage raidz2 /dev/ada0 /dev/ada1 /dev/ada3 /dev/ada4 /dev/ada5 /dev/ada6 /dev/ada7 /dev/ada8 raidz /dev/da0 /dev/da1 /dev/da2 /dev/da3

Keep the ZFS settings clean. Enable only the things you need, and disable everything else. In my case, I enabled lz4 compression (fast, with a good ratio) and disabled access-time updates:

2014-01-22.20:29:15 zfs set compression=lz4 storage
2014-01-22.20:30:19 zfs set atime=off storage

Keep the ZFS pool fresh. I see better performance on a brand-new pool than on one that is a few years old, even when both contain the same data. Unfortunately, there is no easy way to clean up an existing pool. The only way is to destroy it, create a new one, and bring the data back. Typically, it takes about two days to transfer 10TB of data over a gigabit LAN, so spending about five days rebuilding your pool is really not too bad.
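
The rebuild boils down to the following (a sketch based on my own pool layout above; make absolutely sure your data is safely copied elsewhere before destroying anything):

#Destroy and recreate the pool
sudo zpool destroy storage
sudo zpool create -f storage raidz2 ada0 ada1 ada3 ada4 ada5 ada6 ada7 ada8 raidz da0 da1 da2 da3

#Then pull the data back from the other machine
rsync -av 192.168.1.101::storage /storage/data/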

That’s it. Again, if you want more tricks, please visit How to improve ZFS performance for more information.

–Derrick


How to Remote Desktop To Linux From Windows Using XRDP

If you are looking for ways to remote desktop to Linux from Windows, and you are sick of VNC, you are in the right place.

I have been looking for a solution to do something very simple: remotely access the desktop of my Linux servers from a Windows machine, at a usable level of course.

I’ve tried different applications before, including XMing, XManager (X11 forwarding), VNC, 2X, and NoMachine, ranging from open-source to commercial. Unfortunately, none of them offer anything close to the Windows-to-Windows Remote Desktop experience.

Why is XMing bad?

XMing allows you to run a remote application (e.g., Firefox) on your desktop. It works great as long as you keep the session open. Once you close your session, there is no way to go back to it. For example, suppose I am browsing a website using Firefox on the remote server via XMing. If I close XMing (without closing the browser), there is no way for me to return to the page I left off later. Of course, in reality, it doesn’t make much sense to run Firefox on a remote server; I am really referring to software that takes days to run, such as R, MATLAB, etc.

Also, XMing works at the application level, not the desktop level.

Why is VNC bad?

VNC is nearly perfect. It offers something similar to Windows Remote Desktop, i.e., access at the desktop level. If I work on something, I can get my previous session back even after my connection is closed. There are only two disadvantages: it is slow, and it is not built for multiple users.

VNC uses a stone-age approach to deliver the remote desktop experience: screen capture. When something changes on the remote server, it takes a screenshot and sends it to the client. You can imagine how slow that is over the Internet. Newer versions of the protocol are smarter, working closer to the graphics layer and sending only the modified areas instead of the whole screen, but they are still far from pleasant to use. Microsoft Remote Desktop simply blows VNC away.

Another thing I don’t like about VNC is that it works at the machine/server level, not the user level. VNC uses its own authorization: when you set up a VNC service, you define a password, and that’s it. In other words, if I open a desktop on the server as derrick, everybody who enters the correct password can see my desktop content via VNC. Unless I physically go to the server and switch to a different user, there is no way to protect the content of my desktop.

The good thing about VNC is that it is cross-platform: it works on OS X, Windows, Linux, etc. Wait a minute, Java is cross-platform too. Do you want to use it?

Why is XManager bad?

XManager is better than VNC. It uses a more efficient protocol, so speed is not an issue. However, it suffers from the same session problem as XMing. If I remote desktop to the server using XManager and close my session later, there is no way to go back to my previous session. In other words, I lose all open applications and their contents.

And no, the old sessions are still running silently on the server; they are just orphans. XManager doesn’t let you reconnect to an old session when you establish a new connection.

Solution: xrdp

I am so happy that I finally found something useful and usable: xrdp. Here is a step-by-step tutorial on how to install it on Fedora/CentOS Linux. The instructions are similar for Debian/Ubuntu Linux. The setup should take no more than 5 minutes.

#Install xrdp
sudo yum install -y xrdp

If your Linux is too old, or your default repository does not have xrdp available, you may want to run the following:

#Don't forget to use the right (i686 or x86_64) architecture.

sudo rpm -Uvh  http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
sudo rpm -Uvh  http://rpms.famillecollet.com/enterprise/remi-release-6.rpm
sudo yum update -y

sudo yum install -y xrdp

Next, start the xrdp service:

sudo /etc/rc.d/init.d/xrdp start

You may want to enable this service at boot time:

sudo chkconfig  xrdp on

Don’t forget to open port 3389 in your firewall:

sudo nano /etc/sysconfig/iptables

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3389 -j ACCEPT

Don’t forget to restart the firewall:

sudo service iptables stop
sudo service iptables start

Now, run Remote Desktop on Windows and enter the IP address of the server to see what happens.
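
On the Windows side, you can also launch the client directly from Start → Run (the IP address below is an example; use your server’s address):

mstsc /v:192.168.1.101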

–Derrick


[FreeBSD]mount: /dev/da0p2: R/W mount of / denied. Filesystem is not clean – run fsck. Forced mount will invalidate journal contents: Operation not permitted

When I tried to mount a hard drive on FreeBSD today, I got the following error message:

sudo mount -t ufs /dev/da0p2 /mnt/
mount: /dev/da0p2: R/W mount of / denied. Filesystem is not clean - run fsck. Forced mount will invalidate journal contents: Operation not permitted

Here is a quick workaround (mount it read-only):

sudo mount -r -t ufs /dev/da0p2 /mnt/

where ufs refers to FreeBSD’s native file system, and -r mounts the partition read-only.
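
If you actually need read-write access, the proper fix is what the error message suggests: run fsck first and then mount normally (a sketch; make sure the file system is unmounted while fsck runs):

sudo fsck -t ufs -y /dev/da0p2
sudo mount -t ufs /dev/da0p2 /mnt/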

That’s it.

–Derrick


[FreeBSD]Problem to Update cURL-7.31.0

When I tried to update cURL to 7.31.0 on my FreeBSD box today, the build stopped with the following error messages:

configure: using CFLAGS: -O2 -pipe -DLDAP_DEPRECATED -fno-strict-aliasing
configure: CFLAGS error: CFLAGS may only be used to specify C compiler flags, not macro definitions. Use CPPFLAGS for: -DLDAP_DEPRECATED
configure: error: Can not continue. Fix errors mentioned immediately above this line.
===>  Script "configure" failed unexpectedly.
Please report the problem to [email protected] [maintainer] and attach the
"/usr/ports/ftp/curl/work/curl-7.31.0/config.log" including the output of the
failure of your make command. Also, it might be a good idea to provide an
overview of all packages installed on your system (e.g. a /usr/sbin/pkg_info
-Ea).
*** [do-configure] Error code 1

Stop in /usr/ports/ftp/curl.
*** [build] Error code 1

Stop in /usr/ports/ftp/curl.

===>>> make failed for ftp/curl
===>>> Aborting update

===>>> Update for ftp/curl failed
===>>> Aborting update

===>>> Killing background jobs
Terminated

===>>> You can restart from the point of failure with this command line:
       portmaster  ftp/curl

===>>> Exiting

Fixing this problem is very simple; the trade-off is that you give up LDAP support. If you are not sure whether your cURL needs LDAP, you probably don’t need it.

cd /usr/ports/ftp/curl
sudo make config

Remove the LDAP-related features, such as LDAP and LDAPS. Then try to rebuild cURL:

sudo make clean
sudo make

If everything looks good (i.e., no more complaints), you can resume the update process, e.g.,

sudo portmaster -Da

Now your FreeBSD should be happy.

–Derrick


[FreeBSD]Portsnap / gunzip: can’t stat: files/… .gz: No such file or directory

My FreeBSD caught an unknown fever today. When I updated my ports tree using portsnap, it gave the following error message:

#sudo portsnap fetch
Looking up portsnap.FreeBSD.org mirrors... 6 mirrors found.
Fetching snapshot tag from your-org.portsnap.freebsd.org... done.
Fetching snapshot metadata... done.
Updating from Thu May 23 09:08:53 CDT 2013 to Thu May 23 09:25:25 CDT 2013.
Fetching 0 metadata patches. done.
Applying metadata patches... done.
Fetching 0 metadata files... done.
gunzip: can't stat: files/992a1325cdc9a00a3543aa38fdf58903cdf70eaee02b8bb8aebea5505ac7b3f8.gz: No such file or directory
Fetching 0 patches. done.
Applying patches... done.
Fetching 0 new ports or files... done.
Building new INDEX files... gunzip: can't stat: /var/db/portsnap/files/09f65f8a730283fd31d068a5927ed46d95e37540f89090c257d7809b75116293.gz: No such file or directory
gunzip: can't stat: /var/db/portsnap/files/e3d3219617c1ea87cdfac7c8df0a52d611b191be8a80fd97f511277dff4cce77.gz: No such file or directory
gunzip: can't stat: /var/db/portsnap/files/8c2576279258f0d1b8762df8fc1e0cb4bcfcd23b6b09cdb4e7d68886af35ed7d.gz: No such file or directory
done.

Apparently, something in /var/db/portsnap/ is broken. Many people’s first instinct is to remove /var/db/portsnap/ entirely and run the command again. Do not do that; it will make portsnap fail. Instead, do the following:

sudo cp -r /var/db/portsnap /var/db/portsnap_backup
sudo rm -Rf /var/db/portsnap/tag /var/db/portsnap/files/*
sudo portsnap fetch extract
sudo portsnap update

Now your portsnap should be happy.

–Derrick


[Solved]FreeBSD: Problem to Update glib20

I decided to install Java on my FreeBSD box today. It wasn’t a very good experience, mainly because the process is not automated; it requires a lot of manual work.

Anyway, after waiting a couple of hours, I found that the process was stuck on glib. The system could not build /usr/ports/devel/glib20. Here is the error message:

===>  Building for glib-2.34.3
gmake  all-recursive
gmake[1]: Entering directory `/usr/ports/devel/glib20/work/glib-2.34.3'
Making all in .
gmake[2]: Entering directory `/usr/ports/devel/glib20/work/glib-2.34.3'
gmake[2]: Nothing to be done for `all-am'.
gmake[2]: Leaving directory `/usr/ports/devel/glib20/work/glib-2.34.3'
Making all in m4macros
gmake[2]: Entering directory `/usr/ports/devel/glib20/work/glib-2.34.3/m4macros'
gmake[2]: Nothing to be done for `all'.
gmake[2]: Leaving directory `/usr/ports/devel/glib20/work/glib-2.34.3/m4macros'
Making all in glib
gmake[2]: Entering directory `/usr/ports/devel/glib20/work/glib-2.34.3/glib'
gmake  all-recursive
gmake[3]: Entering directory `/usr/ports/devel/glib20/work/glib-2.34.3/glib'
Making all in libcharset
gmake[4]: Entering directory `/usr/ports/devel/glib20/work/glib-2.34.3/glib/libcharset'
gmake[4]: Nothing to be done for `all'.
gmake[4]: Leaving directory `/usr/ports/devel/glib20/work/glib-2.34.3/glib/libcharset'
Making all in update-pcre
gmake[4]: Entering directory `/usr/ports/devel/glib20/work/glib-2.34.3/glib/update-pcre'
gmake[4]: Nothing to be done for `all'.
gmake[4]: Leaving directory `/usr/ports/devel/glib20/work/glib-2.34.3/glib/update-pcre'
Making all in .
gmake[4]: Entering directory `/usr/ports/devel/glib20/work/glib-2.34.3/glib'
  CC       gstrfuncs.lo
  CC       gthreadpool.lo
gstrfuncs.c:330: error: expected '=', ',', ';', 'asm' or '__attribute__' before 'get_C_locale'
gstrfuncs.c: In function 'g_ascii_strtod':
gstrfuncs.c:700: warning: implicit declaration of function 'strtod_l'
gstrfuncs.c:700: warning: implicit declaration of function 'get_C_locale'
gstrfuncs.c: In function 'g_ascii_formatd':
gstrfuncs.c:902: error: 'locale_t' undeclared (first use in this function)
gstrfuncs.c:902: error: (Each undeclared identifier is reported only once
gstrfuncs.c:902: error: for each function it appears in.)
gstrfuncs.c:902: error: expected ';' before 'old_locale'
gstrfuncs.c:904: error: 'old_locale' undeclared (first use in this function)
gstrfuncs.c:904: warning: implicit declaration of function 'uselocale'
gstrfuncs.c: In function 'g_ascii_strtoull':
gstrfuncs.c:1148: warning: implicit declaration of function 'strtoull_l'
gstrfuncs.c: In function 'g_ascii_strtoll':
gstrfuncs.c:1195: warning: implicit declaration of function 'strtoll_l'
gmake[4]: *** [gstrfuncs.lo] Error 1
gmake[4]: *** Waiting for unfinished jobs....
gmake[4]: Leaving directory `/usr/ports/devel/glib20/work/glib-2.34.3/glib'
gmake[3]: *** [all-recursive] Error 1
gmake[3]: Leaving directory `/usr/ports/devel/glib20/work/glib-2.34.3/glib'
gmake[2]: *** [all] Error 2
gmake[2]: Leaving directory `/usr/ports/devel/glib20/work/glib-2.34.3/glib'
gmake[1]: *** [all-recursive] Error 1
gmake[1]: Leaving directory `/usr/ports/devel/glib20/work/glib-2.34.3'
gmake: *** [all] Error 2
*** Error code 1

Stop in /usr/ports/devel/glib20.
*** Error code 1

Stop in /usr/ports/devel/glib20.

This is not a common problem. After googling for a while, I found a very similar report here. Basically, the author suggests that the problem may be caused by an incomplete run of freebsd-update. In short, you need to run the program twice, i.e.,

sudo freebsd-update fetch install
sudo reboot
sudo freebsd-update install

Unfortunately, it didn’t work for me. So I decided to try my last resort: pkg_add.

sudo pkg_add -r glib20

and I tried to resume the installation:

cd /usr/ports/java/jdk16
sudo make install

It worked!

Hope this little trick is helpful to you.

–Derrick


[FreeBSD/Linux]How To Remove ZFS Meta Data

I have many hard drives circulating among my servers for testing purposes. For example, I take a hard drive from one server and put it in a different one. After doing this many times, I’ve noticed that ZFS leaves old header information / metadata on the drives. While it does not harm normal ZFS operation, I don’t think it is a good idea to have outdated pool information living on my hard drives.

Here is an example:

#sudo zpool import

  pool: storage
    id: 4394681882400895515
 state: UNAVAIL
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        storage                   UNAVAIL  insufficient replicas
          raidz1-0                UNAVAIL  insufficient replicas
            12688516256739208392  UNAVAIL  cannot open
            ada3                  ONLINE
            4218969245912188584   UNAVAIL  cannot open
            1537006695366032450   UNAVAIL  cannot open
            8194123525800888894   UNAVAIL  cannot open
            13778624724471040977  UNAVAIL  cannot open

  pool: storage
    id: 12159013771499288095
 state: FAULTED
status: One or more devices contains corrupted data.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-5E
config:

        storage                FAULTED  corrupted data
          6113585248511400089  UNAVAIL  corrupted data

Although ZFS provides a way (zpool labelclear) to remove this information, it only works if the hard drive is still attached to the server; if the hard drive is missing, there is nothing you can do. Even for a drive that was present, the following command failed for me:

sudo zpool labelclear -f ada3

I googled for solutions and found many ideas. Unfortunately, none of them worked.

Anyway, I came up with a solution that is quick, easy, and simple. Since ZFS stores its labels at the beginning and the end of the drive, all I need to do is wipe out those two areas. That’s it.

How To Remove ZFS Meta Data – Linux

#Replace /dev/sdXX with the actual ID of your hard drive
#Wipe the first 2MB (covers the front ZFS labels)
dd if=/dev/zero of=/dev/sdXX bs=512 count=4096
#Wipe the last 2MB (covers the back ZFS labels)
dd if=/dev/zero of=/dev/sdXX bs=512 seek=$(( $(blockdev --getsz /dev/sdXX) - 4096 )) count=4096

That’s it for Linux. Below is the FreeBSD version.

How To Remove ZFS Meta Data – FreeBSD

First, you need to identify which hard drive to clean up. The easiest way is to use dmesg:

#dmesg


or 

#dmesg | grep ada | grep MB | grep -v 'MB/s'


which will return something like this:


ada0: 381554MB (781422768 512 byte sectors: 16H 63S/T 16383C)
ada1: 1430799MB (2930277168 512 byte sectors: 16H 63S/T 16383C)
ada2: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada3: 953869MB (1953525168 512 byte sectors: 16H 63S/T 16383C)
ada4: 1430799MB (2930277168 512 byte sectors: 16H 63S/T 16383C)

Next, I need to know which one is the system hard drive, something I don’t want to touch.

#df

/dev/ada0p2    362G    3.6G    329G     1%    /
devfs          1.0k    1.0k      0B   100%    /dev

In this example, my goal is very clear: I need to wipe ada1, ada2, ada3, and ada4, and leave ada0 untouched.

Next, I clear the beginning of the disk (the first 512KB, which covers the front ZFS labels):

sudo dd if=/dev/zero of=/dev/ada1 count=1 bs=512k

Repeat this for the other hard drives.

Next, I need to clear the end of the disk. You can use the sector count from dmesg (if available), or use the following command to find it:

#sudo diskinfo -c /dev/ada1

which will return something like the following:

/dev/ada1
        512             # sectorsize
        1500301910016   # mediasize in bytes (1.4T)
        2930277168      # mediasize in sectors
        0               # stripesize
        0               # stripeoffset
        2907021         # Cylinders according to firmware.
        16              # Heads according to firmware.
        63              # Sectors according to firmware.
        S1Y6J1KS710613  # Disk ident.

I/O command overhead:
        time to read 10MB block      0.091942 sec       =    0.004 msec/sector
        time to read 20480 sectors   1.945038 sec       =    0.095 msec/sector
        calculated command overhead                     =    0.090 msec/sector

In this example, the total number of sectors is 2930277168 (mediasize in sectors).

To keep things simple, I am going to wipe the hard drive from sector 2930270000 to the end (i.e., the sector count with its last four digits replaced by zeros):

sudo dd if=/dev/zero of=/dev/ada1 oseek=2930270000

Now, repeat the same thing for each hard drive. Keep in mind that the sector count of each hard drive may not be the same, so it is better to run the command and get the sector information for each drive first.
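
If you would rather not do the arithmetic by hand, something like this should work (a sketch; it assumes the fourth field of diskinfo’s one-line output is the sector count, and it wipes the last 2MB):

#Wipe the last 4096 sectors of ada1
total=$(sudo diskinfo /dev/ada1 | awk '{print $4}')
sudo dd if=/dev/zero of=/dev/ada1 bs=512 oseek=$(( total - 4096 ))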

After running these commands, the ZFS metadata should be gone. You can verify your work like this:

sudo zpool import


which should output nothing.

That’s it. Enjoy building a new ZFS pool!

–Derrick
