Add-ons for New Version of Firefox 21

Every time Firefox releases a major upgrade, we love it because it comes with lots of cool features. At the same time, we hate it because many of our favorite add-ons stop working with the new Firefox. Sometimes the add-on authors are too busy, and it may take a while for a compatible version to be released. That’s why I put some of my favorite add-ons here. They are 100% compatible with the new Firefox; I only modified the source code to bypass the version compatibility check.

In other words, each add-on will work the way it did in the old version, and you will still receive an upgrade if the author releases a newer version.

So far I have made the following add-ons available in the latest version of Firefox:

Add-On | Download | Comment | Author’s URL
BandWidthTester 0.5.9 | Download | Not tested | N.A.
BlockSite 0.7.1.1 | Download | Tested | Add-on Homepage
Bookmark Duplicate Detector 0.7.5 | Download | Not tested | Add-on Homepage
CopyPlainText 0.3.4 | Download | Not tested | N.A.
del.icio.us 1.2.1 | Download | Not tested | N.A.
FireSheep 0.1.1 | Download (Linux, compiled from Git) | Not tested | Add-on Homepage
Google Calendar Tab 3.8 | Download | Tested | N.A.
Multiproxy Switch 1.33 | Download | Tested | Add-on Homepage
PermaTabs Mod 1.93 | Download | Not tested | Add-on Homepage
Snap Links Plus 1.08 | Download | Tested | N.A.

If you need any other add-on, please post in the comments below, and I will try to make it available here.

Enjoy your new Firefox!

–Derrick


ZFS: Compression VS Deduplication (Dedup) in Simple English

Last Edited: Jan 17, 2021

Many people confuse ZFS compression with ZFS deduplication because they are so similar: both are designed to reduce the size of the data stored in the pool. Let me explain the difference between them in simple English.

1. This is what your data looks like originally (assuming only one unique file):

2. This is what your data looks like after being stored in a ZFS pool with compression enabled:

3. This is what your data looks like after being stored in a ZFS pool with deduplication enabled:

4. Let’s say we are storing three identical files, i.e.,

5. ZFS: Compression Only

6. ZFS: Deduplication Only

7. ZFS: Compression + Deduplication

The biggest difference between deduplication and compression is the scope. Compression works at the file level. For example, if you have three identical files, ZFS will store the compressed file three times. Deduplication works at the block level. A block is simply the basic unit of ZFS storage (e.g., 512 bytes, 4KB, etc.). Imagine ZFS needs to store a big file: it divides the file into multiple chunks, and each chunk is stored in a block. What deduplication does is remember the content of each block (via a checksum) and avoid storing the same content again. In other words, deduplication works at a finer-grained level (think of a block as a molecule).
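To make the block-level idea concrete, here is a tiny sketch in ordinary shell (not ZFS itself, and assuming GNU coreutils’ sha256sum is available): identical content always produces an identical checksum, which is how deduplication recognizes that a block is already stored. The file contents below are made up for illustration.

```shell
# Two files with identical content, like two identical blocks on disk.
a=$(mktemp); b=$(mktemp)
printf 'same block content' > "$a"
printf 'same block content' > "$b"

# Dedup keeps a checksum per stored block; a matching checksum means
# the block does not need to be written again.
sum_a=$(sha256sum "$a" | cut -d' ' -f1)
sum_b=$(sha256sum "$b" | cut -d' ' -f1)
if [ "$sum_a" = "$sum_b" ]; then
  echo "duplicate detected: store the block once"
fi
rm -f "$a" "$b"
```

ZFS does the same thing per block with its own checksums, which is why three identical files collapse into a single set of stored blocks.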

One of the reasons drugs are usually tested on mice is that many mouse genes are about 99% identical to their human counterparts. Imagine we need to store the mouse genes in a database: we only need to store them once. Later, if we need to store the human genes in the same database, we can reference the mouse copy rather than storing nearly the same data again.

Of course, enabling both compression and deduplication will save a lot of space. However, it comes with a very high price tag. If you would like to enable deduplication, you need to make sure that you have at least 2GB of memory per 1TB of storage. For example, if your ZFS pool is 10TB, you need 20GB of memory installed in your system. Otherwise, you will experience a huge performance hit.
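As a sketch, each feature is a single pool property (the pool name "tank" below is hypothetical, and the zfs commands are shown commented out because they require a real pool); the memory rule of thumb above is just multiplication:

```shell
# Enabling the features on a hypothetical pool named "tank":
#   zfs set compression=on tank
#   zfs set dedup=on tank

# Rule of thumb from above: roughly 2GB of RAM per 1TB of pool for dedup.
pool_tb=10
ram_gb=$((pool_tb * 2))
echo "A ${pool_tb}TB pool needs roughly ${ram_gb}GB of RAM for dedup"
```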

Hope this article helps you understand the difference between compression and deduplication.

–Derrick


Top Reason Why You Should NOT Use Microsoft Exchange in Your Business

Today, I sent a message to a company through the contact page on their website. It was a standard contact form, i.e., you fill in your contact information and the details of your request, and they follow up with you later. An hour later, I received a reply from them saying that they would like more information from me. Therefore, I replied to their email. A few seconds later, I got the following:

Delivery has failed to these recipients or distribution lists:

[email protected]
Your message wasn't delivered because of security policies. Microsoft Exchange will not try to redeliver this message for you. Please provide the following diagnostic text to your system administrator.
Sent by Microsoft Exchange Server 2007


Diagnostic information for administrators:

....

(Another 100 lines of error messages)

Initially, I thought I had made a mistake when typing the email address, so I redid it and verified every single letter of the address. Unfortunately, I got the same message again. After trying five times, I gave up, and the company lost a sale.

There are a few things to learn here. First, never display very technical error messages to customers; they mean nothing to non-engineers. Second, don’t assume that every customer is patient. Not everyone is willing to re-send the same email five times. Third, it takes many years to build a reputation, but only a few hours to destroy it. With today’s technology, it is far too easy for bad word of mouth to spread.

I think this is not the right way to run a customer request management system (or ticket system). It should never yell at the customer. Instead, it should let the staff evaluate the customer’s reply rather than leaving that decision to the Microsoft Exchange server.

I don’t recommend using Microsoft Exchange for your business. It just hurts your business.

–Derrick


[Solved]Failed to enable the ‘dataready’ Accept Filter

After I updated Apache to 2.2.22 on my FreeBSD box today, I ran into a problem:

#apachectl stop
#apachectl start

[warn] (2)No such file or directory: Failed to enable the 'dataready' Accept Filter

If you search for "Failed to enable the 'dataready' Accept Filter" on Google, you will probably find a lot of posts suggesting solutions like the following:

#kldload accf_http

Or include the following in the boot loader (/boot/loader.conf):

accf_http_load="YES"

However, even if you have already done both of these things, the problem still exists:

#kldload accf_http
kldload: can't load accf_http: File exists

Why? Because the problem does not come from accf_http. Instead, the problem is the missing dataready filter, which is accf_data. To solve it, first update /boot/loader.conf and add the following line to the file:

accf_data_load="YES"

Of course, any change to the boot loader requires a reboot to take effect. If you don’t want to reboot the machine, simply load the module manually and restart Apache, i.e.,

#kldload accf_data
#apachectl stop
#apachectl start

That’s it! Apache will stop complaining about the 'dataready' filter and will work happily.
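As an aside, if you would rather not load the kernel module at all, Apache’s AcceptFilter directive can turn the accept filters off in httpd.conf. This silences the warning at the cost of the accept-filter optimization (the directive exists in Apache 2.2, but check your version’s documentation):

```apache
AcceptFilter http none
AcceptFilter https none
```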

–Derrick
