The Guru College

The Limits of Hardware

After some testing, and some reports on the zfs-discuss list, it appears that my backup fileserver simply doesn't have the horsepower, the memory, or the I/O bandwidth to handle dedup. The first problem: it's a single-core Sempron 3300+, and CPU utilization goes through the roof during any serious dedup work. Second: there is only a single GB of RAM installed, which has to be shared by the OS, the running applications, the ZFS ARC, and all of the dedup bookkeeping. The third and final issue is I/O bandwidth: the pool in the backup server is a pair of 750 GB IDE drives sharing a single controller in a master/slave relationship. To make matters worse, all of this runs over 100bT ethernet (not that I've ever seen the link saturated).
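To see why 1 GB of RAM is so painful here, a rough back-of-envelope helps. This sketch assumes the commonly cited rule of thumb of roughly 320 bytes of memory per dedup table (DDT) entry, the default 128 KiB ZFS recordsize, and about 750 GB of data (the capacity of one of the drives) — all assumptions, not measured figures from this box:

```python
# Back-of-envelope estimate of ZFS dedup table (DDT) memory needs.
# Assumptions: ~320 bytes of RAM per DDT entry (a commonly cited
# rule of thumb), 128 KiB average block size (the default
# recordsize), and ~750 GB of unique data on the pool.
BYTES_PER_DDT_ENTRY = 320
AVG_BLOCK_SIZE = 128 * 1024          # 128 KiB default recordsize
POOL_DATA_BYTES = 750 * 10**9        # ~750 GB of data

n_blocks = POOL_DATA_BYTES // AVG_BLOCK_SIZE
ddt_bytes = n_blocks * BYTES_PER_DDT_ENTRY

print(f"blocks: {n_blocks:,}")
print(f"estimated DDT size: {ddt_bytes / 2**30:.1f} GiB")
```

Even under these generous assumptions the DDT alone wants well over a gigabyte — more than this machine has in total, before the OS, applications, and ARC get a byte. With smaller average block sizes the estimate only gets worse.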

When I turn on dedup and compression, my average transfer speed drops from 6-7 Mbps to 2-3 Mbps. In contrast, vault, my “full size” file server, regularly pushes 25-30 Mbps through the same switches. Some commenters suggest adding an SSD-based L2ARC device to speed up dedup. If I’m going to spend money on these systems at this point, though, the money is going into upgrading the larger file server and moving the hand-me-downs to the test server.
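For reference, the knobs involved here are all one-liners. This is a sketch, not a transcript from my machines: the pool name `backup` and the cache device `c1t2d0` are placeholders, and `zdb -S` only *simulates* dedup ratios, so it's a cheap way to see whether dedup is worth it before paying the price:

```shell
# Turn dedup and compression on for a pool (placeholder name "backup")
zfs set dedup=on backup
zfs set compression=on backup

# Simulate dedup on existing data to preview the ratio and DDT size
# without actually enabling it (can be slow and memory-hungry itself)
zdb -S backup

# What the commenters suggested: add an SSD as an L2ARC cache device
# (c1t2d0 is a placeholder device name)
zpool add backup cache c1t2d0
zpool status backup
```

Note that setting `dedup=on` only affects newly written blocks; existing data stays un-deduplicated until it is rewritten, which is part of why testing this on a backup target means a lot of churn.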

All of this reinforces my comments from a few weeks ago: I really need to move vault off of Solaris Nevada and onto OpenSolaris. I’d also like to spend a small chunk of cash on upgrades, but that is on hold for now.
