The state of the classic NFS filesystem

Whither NFS

Article from Issue 191/2016

The NFS network filesystem has served Unix and Linux networks for many years, but the demise of NFS inventor Sun Microsystems as an independent company has thrust NFS into a creative crisis. Will this veteran from the early days of Unix find the strength to rise again?

Ever since Oracle acquired Sun Microsystems, the development of the once-omnipresent Unix network filesystem NFS has slowed considerably. Competitors such as Samba, and a new class of distributed network storage solutions, are competing with NFS for mindshare and market share within the open source community. Has NFS gone away? Not really, but it could surely use a burst of energy to regain some of the momentum it has lost to competitors.

NFS development is now the responsibility of the Internet Engineering Task Force (IETF). The current NFS version is number 4.1, which is described in RFC 5661 [1]. (RFC 5661 dates back to 2010, which gives an indication of the current level of development activity.)

The current Linux implementations [2] consist of several parts. The NFS server, the NFS filesystem, and the Sun remote procedure call (RPC) are part of the Linux kernel. Today, admins will only want to deal with NFSv4. The NFSv3 architecture from the Unix heyday is not fit for today's security landscape (Figure 1); for instance, NFSv3 leaves authentication to the client, and the server blindly trusts the result.

Figure 1: Structural comparison between the obsolete NFSv3 and the current NFSv4.

To help you start the current, kernel-based NFS server, some distributions offer tool packages – on Ubuntu, for instance, you'll find the nfs-kernel-server package. Among other things, you'll find the exportfs command for exporting NFS shares and matching unit files for systemd. As an alternative to the NFS server that is built into the kernel, some distributions provide the rpc.nfsd daemon, which runs entirely in user space. rpc.nfsd is no longer used much in practice.
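In practice, exporting a share with the kernel-based server boils down to an /etc/exports entry plus a call to exportfs. A minimal sketch follows; the path and network are illustrative assumptions, not taken from the article:

```shell
# /etc/exports -- example entry (path and client network are assumptions):
#   /srv/nfs4  192.168.1.0/24(rw,sync,no_subtree_check)

sudo exportfs -ra   # re-read /etc/exports and apply any changes
sudo exportfs -v    # list the active exports with their options
```

The unit files mentioned above let you manage the same server with `systemctl start nfs-server` on systemd-based distributions.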

In any case, a separate package bundles some useful command-line tools. These NFS-utils (currently version 1.3.3) are found on Ubuntu, for example, in the nfs-common package. It is worth checking how closely your distribution's NFS package maintainer keeps pace with the utils; Debian, for example, currently ships version 1.2.8. The tools include the commands for mounting NFS shares, as well as some analysis utilities, including nfsiostat, mountstats, and showmount. You will even find patches for the NFS-utils [3] that retrofit support for systemd.
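The analysis utilities are typically used like this; the server name and mount point are placeholders:

```shell
# List the shares a server exports (server name is an assumption):
showmount -e nfs-server.example.com

# Report NFS client I/O statistics, refreshed every 5 seconds:
nfsiostat 5

# Show detailed per-mount statistics for an already mounted share:
mountstats /mnt/nfs
```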

These patches have already reached the Git repository, but some distributions still do not include them. For example, Debian Unstable only had version 1.2.8 of the NFS-utils when this issue went to press. However, the patches are only included as of version 1.3.

Big Deficit: Poor Documentation

If you try to stay current with new features in NFS (kernel) development, you will certainly feel a further impact of the disappearance of Sun Microsystems: The quality and quantity of documentation – or rather the lack of it – is striking. If you look around, you will stumble over some out-of-date Internet sites like the Linux NFS FAQ [2]. The Linux NFS wiki [4] is also a mix of outdated and current information.

Administrators are most likely to find good documentation offered by providers of commercial NFS-related services, such as NetApp [5]. Panasas is mainly involved with the parallel-storage version of NFS known as parallel NFS (pNFS). The company provides its own site [6] with information and even training videos on pNFS technology.

NFS developers and users discuss events such as BakeAThon and Connectathon on the mailing list for Linux NFS [7]. The Nfsv4bat.org website offers presentation slides and even some videos of the two events.

Development Status of the Software

Both the NFS wiki and many other NFS-related sites lack information as to which NFS functionality is available with which kernel version. It is thus a Sisyphean task to reveal the current state of development of the NFS server in the kernel, the client programs, and your own choice of Linux distribution.

A video by Panasas [8], which only covers kernel 3.2, gives some initial insights into the state of development. A changelog at the functional level and a feature matrix by kernel version, like the one in the Btrfs wiki [9], are missing entirely.

Linux originally served as a prototype platform for the implementation of NFSv4.1. As a result, all reasonably recent kernel versions offer the functionality of NFSv4.1. According to the kernel documentation, the implementation of the NFSv4.1 server focuses on the mandatory functions defined by the NFS standard [10].

Compared with NFSv4, NFSv4.1 offers, among other things, sessions, directory delegates, and in particular, parallel access to files stored on multiple servers through pNFS.

Data Collection for Storage: pNFS

If multiple clients try to request and edit data at the same time, the NFS server quickly becomes a bottleneck, especially if the files are distributed over multiple hard disks and storage systems. pNFS seeks to parallelize data access and thus eliminate typical NFS bottlenecks.

The first pNFS implementation appeared in 2006 in Linux kernel 2.6.14; it never made it beyond prototype status. Caution: Quite a few documents on the Internet confusingly refer to this early implementation.

With a current kernel, the NFS server only acts as a metadata server, in a similar way to distributed filesystems such as Ceph. In this capacity, it only tells the clients where they can find the requested data. The clients can then optionally request the files directly from the storage systems (Figure 2). pNFS is optional; the client can also fall back to regular NFSv4 I/O [8].

Figure 2: With pNFS, the clients directly access the storage systems.

When a client wants to access the data, it first knocks on the NFS server's door. The server tells the client where to find the data and which protocol it needs for access. This and other meta-information is summarized by the NFS standard in what is known as a layout [11]. Depending on the type of storage, the layout may contain other data. The NFS standard distinguishes four types of layout: File, Block, Object, and Flexi-Layouts.

Only the specification of the file layout is part of the NFSv4.1 standard; the other layouts are defined by separate standards. With a file layout, clients can work directly on individual files that are distributed across multiple servers. An implementation based on the Global File System 2 (GFS2) distributed filesystem [12] does exist under Linux.

The block layout allows access to distributed, block-based devices. According to the kernel documentation [13], the Linux NFS server currently only exports the XFS filesystem via the block layout; the filesystem also needs to reside on a shared storage system – typically an iSCSI array. The filesystem must additionally exist directly on the exported volume; Linux does not allow techniques such as striping or concatenation of volumes as of this writing. The server automatically selects the block layout as long as it supports the filesystem. To make access work, the client needs to build the kernel with the CONFIG_PNFS_BLOCK option enabled, run the blkmapd daemon from the NFS-utils, and mount the filesystem with the version 4.1 protocol (using mount -o vers=4.1).
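The three client-side requirements might translate into something like the following sketch; the server name, export path, and mount point are assumptions:

```shell
# 1. Verify that the running kernel was built with block-layout support:
grep CONFIG_PNFS_BLOCK /boot/config-$(uname -r)

# 2. Start the block-layout mapping daemon shipped with the NFS-utils:
sudo blkmapd

# 3. Mount the export using the NFSv4.1 protocol:
sudo mount -t nfs -o vers=4.1 nfs-server.example.com:/export /mnt/pnfs
```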

To avoid data loss, it is essential for the NFS server to fence off any non-responsive clients. For this, the server needs a fencing script, the content of which is not documented; at least the kernel documentation provides a small, uncommented sample script [13].

The new pNFS SCSI layout by Christoph Hellwig promises to improve the situation; it, too, is geared toward the XFS filesystem. With the SCSI layout, clients can access the SCSI LUNs directly. The file server currently requires XFS, and striping and concatenation are not allowed. The server automatically enables support for the SCSI layout if:

  • the kernel is built with the CONFIG_NFSD_SCSI option
  • you exported the filesystem with the pnfs parameter
  • the SCSI device can handle persistent reservations.

On the client side, the conditions are the same as for a block layout.
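On the server, the conditions listed above might be checked and applied as follows; the export path is an assumption:

```shell
# The kernel must be built with SCSI-layout support:
grep CONFIG_NFSD_SCSI /boot/config-$(uname -r)

# /etc/exports entry using the pnfs export option (path is an assumption):
#   /export  *(rw,sync,no_subtree_check,pnfs)

sudo exportfs -ra   # apply the export
```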

In the case of the object layout, access is usually via T10 Object-based Storage Device Commands (OSD) and thus relies on specific SCSI commands. On Linux, you will find an implementation based on the EXOFS object filesystem with RAID 0 striping, RAID 1 mirroring, and RAID 5.

The flexible file layout, which is typically abbreviated Flexfiles or Flexi-Layout, is fairly new. Flexible file is designed to reduce the communication with the metadata server [14]. A first implementation made its way into the kernel a year ago.

The reading material for pNFS is also sparse: Admins have to rummage through the texts in the kernel documentation [15].
