ZFS on Linux with FUSE
License issues prevent the integration of ZFS with the Linux kernel, but Linux users can try the highly praised filesystem in userspace.
Sun's Zettabyte File System (ZFS) was officially introduced to (Open)Solaris in June 2006, replacing the legacy UFS (Unix File System). ZFS is a 128-bit filesystem with a number of interesting features, such as improved safeguards against defective disks and the ability to manage huge numbers of files. Because no 128-bit data types currently exist, ZFS uses the first 64 bits and pads the rest of the structure, ignoring the unused bits in normal operation. The 128-bit design will make it easier to migrate to 128-bit types at some point in the future.
An integrated logical volume manager lets ZFS pool physical media (drives or partitions). Native RAID functionality allows users with more than two hard disks to set up a RAID pool. (Compared with RAID 5, RAID-Z in ZFS offers faster write access and is safer if your hardware fails.)
ZFS's list of capabilities includes an automatic snapshot feature for saving filesystem states; a snapshot only stores the differences from the previous state. This design also lets the filesystem create "clones": in contrast to a snapshot, a clone supports both read and write access. ZFS also makes it easy to add new hard disks or replace defective disks on the fly. Online compression, which you might remember from NTFS, is another useful extra.
ZFS and Linux
Sun has released ZFS under the free but non-GPL-compatible CDDL license. This license incompatibility currently prevents ZFS integration with the Linux kernel, and no genuine replacement is in sight: right now, ZFS has a good head start on its competitors – except for Oracle's Btrfs, which has similar features.
Luckily, ZFS is also available as a FUSE (Filesystem in Userspace) module, which makes it possible to use ZFS on Linux. The current version 0.5 of ZFS FUSE is stable, and it performed well in our lab. Unlike conventional filesystems, which operate in kernel space, FUSE filesystems run in userspace, which means you can expect performance hits in certain circumstances. If performance is an issue, you might consider moving to Solaris or a BSD variant, in which ZFS is already part of the kernel thanks to the less restrictive BSD license.
To set up ZFS on Ubuntu, just add the following entry to your repository file /etc/apt/sources.list:
deb http://ppa.launchpad.net/brcha/ubuntu release_name main multiverse restricted universe
First, replace release_name with gutsy, hardy, intrepid, or jaunty as needed to match your release. Then type
apt-get update && apt-get install zfs-fuse
to install the software.
Once the installation is complete, the zfs and zpool commands are available at the command line.
As I mentioned earlier, ZFS manages individual disks or whole disk arrays as pools. The zpool tool is used to create a pool. When creating a pool, it does not matter whether you are working with complete disks, multiple partitions, or, in the simplest case, files. Here, I focus primarily on files, but it is not difficult to apply the concept to hard disks.
Listing 1: Creating a Test Pool

01 $ for i in $(seq 8); do dd if=/dev/zero of=/tmp/$i bs=1024 count=65536; done
02 $ zpool create testpool /tmp/1 /tmp/2 /tmp/3 /tmp/4
03 $ zpool add testpool /tmp/5
04 $ zpool replace testpool /tmp/1 /tmp/6
The pool created in line 2 of Listing 1 has a size of 256MB. You can add new disks to increase the pool size (Listing 1, line 3) – although, like XFS, ZFS does not let you reduce the size later. You can also replace individual parts of the pool: the command in line 4 of Listing 1 replaces virtual disk 1 with virtual disk 6, and in practice, the user will not notice the replacement. However, this approach only works while the original medium is still readable: if a disk fails before you complete the replacement, you will lose data.
The zfs list command gives you a useful overview, including the pool name, the disk space used, and the mount point. The zpool iostat -v command gives you details of read and write operations.
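The snapshot and clone features mentioned earlier are driven by the same zfs command. The following sketch assumes the testpool from Listing 1 exists; the snapshot name monday and the clone name clone1 are illustrative, and the commands are guarded so the script is a no-op on a system without zfs-fuse:

```shell
# Hedged sketch: snapshot and clone a pool created earlier.
# "monday" and "clone1" are made-up names for illustration.
if command -v zfs >/dev/null 2>&1; then
  zfs snapshot testpool@monday                # read-only snapshot of the current state
  zfs clone testpool@monday testpool/clone1   # writable clone based on that snapshot
  zfs list -t snapshot                        # snapshots show up in the listing
fi
```

Because a clone is writable, it is a convenient way to experiment with the data without touching the original filesystem.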
Disk mirroring (aka RAID 1) is a simple approach to adding a fail-safe system (see the box titled "Mirroring"). Another RAID type that protects your data against hard disk failure, RAID 5, requires at least three disks – more hardware expenditure, but with today's hard disk prices, buying three 500GB disks isn't going to cost a fortune. The effective storage capacity is calculated as follows: (number of disks – 1) x (size of the smallest disk). Three 500GB disks thus give you a total capacity of 1TB.
Mirroring

Mirroring two disks is the equivalent of RAID level 1. The system writes data to both disks, providing full redundancy, so the failure of one disk does not entail data loss. An optional hot spare disk can step in to replace the defective disk in case of a failure.
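A mirror pool can be built from files in the same way as the test pool in Listing 1. In this sketch, the pool name mirrorpool and the file names are illustrative, and the zpool calls are guarded so the script also runs on a system without zfs-fuse:

```shell
# Sketch: mirror two 64MB backing files (names are illustrative).
dd if=/dev/zero of=/tmp/m1 bs=1024 count=65536
dd if=/dev/zero of=/tmp/m2 bs=1024 count=65536
if command -v zpool >/dev/null 2>&1; then
  zpool create mirrorpool mirror /tmp/m1 /tmp/m2  # both files hold identical data
  zpool status mirrorpool                         # shows both halves of the mirror
fi
```

The usable capacity of the mirror is the size of the smaller half – here 64MB, not 128MB.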
RAID 5 (single parity) does not lose data if one disk in the array fails, and you can reconstruct the defective disk's data on a spare disk. But if another disk fails before you have finished the reconstruction, you lose all the data on the array. In other words, you have to be quick about providing a replacement for the defective disk.
RAID 6 improves redundancy and data protection with the use of double parity: A single disk failure will not faze the system; losing a second disk puts the array in an unsafe state. RAID 6 needs at least four disks and is thus fairly expensive because you lose two disks for parity data storage.
Like software-based RAID on Linux, ZFS's RAID-Z and RAID-Z2 work like RAID 5 and RAID 6, respectively. However, ZFS transfers data and checksums atomically on every write, ensuring consistent data in case of a power failure. One big advantage is that you do not need an expensive hardware RAID controller: a single- or dual-core CPU costs a fraction of what a controller costs and is fast enough to handle the RAID controller's tasks.
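Creating a RAID-Z pool follows the same pattern as the other pools. The pool name raidzpool below is illustrative, the three backing files stand in for three disks, and the zpool call is guarded so the script runs even without zfs-fuse:

```shell
# Sketch: a RAID-Z (single parity, RAID 5-like) pool from three 64MB files.
for i in 1 2 3; do dd if=/dev/zero of=/tmp/z$i bs=1024 count=65536; done
if command -v zpool >/dev/null 2>&1; then
  zpool create raidzpool raidz /tmp/z1 /tmp/z2 /tmp/z3
  # RAID-Z2 (double parity, RAID 6-like) would need at least four devices:
  # zpool create raidz2pool raidz2 /tmp/z1 /tmp/z2 /tmp/z3 /tmp/z4
fi
```

Following the capacity formula above, three 64MB devices yield roughly (3 – 1) x 64MB = 128MB of usable space.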