Serverless Network Filesystems (xFS)

This is a 1996 paper presenting a serverless network filesystem, xFS. xFS is not to be confused with the XFS journaling file system created by Silicon Graphics.

While traditional network filesystems rely on a central server machine, a serverless system utilizes computers cooperating as peers to provide all filesystem services. The major motivation for a serverless p2p filesystem is the opportunity provided by fast switched Ethernet LANs to use the LAN as an I/O backplane, harnessing physically distributed processors, memory, and disks into a single striped storage system.

Basically, xFS synthesizes previous innovations, namely scalable cache consistency (DASH), cooperative caching, and disk striping (RAID, Zebra), into a serverless filesystem package. xFS dynamically distributes control processing across the system at per-file granularity by utilizing a serverless management scheme. xFS distributes its data storage across storage server disks by implementing a software RAID using log-based network striping similar to Zebra's.

Prior technologies
With N disks, RAID partitions each stripe of data into N-1 data blocks and a parity block (the exclusive-OR of the corresponding bits of the data blocks). It can reconstruct the contents of a failed disk by taking the exclusive-OR of the remaining data blocks and the parity block.
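The parity scheme above can be sketched in a few lines. This is illustrative code, not anything from the paper; the block size and contents are made up.

```python
def parity(blocks):
    """Compute the parity block as the bytewise XOR of the data blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

def reconstruct(surviving_blocks, parity_block):
    """Recover a lost block: XOR the surviving data blocks with parity."""
    return parity(surviving_blocks + [parity_block])

data = [b"hello", b"world", b"xfs!!"]   # N-1 = 3 data blocks
p = parity(data)                        # parity block on the Nth disk
lost = data[1]                          # pretend the second disk failed
recovered = reconstruct([data[0], data[2]], p)
assert recovered == lost                # b"world" comes back
```

The same identity (XOR-ing everything that survived) works regardless of whether the failed disk held a data block or the parity block, which is why a single parity disk tolerates any single-disk failure.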


Zebra combines LFS and RAID so that both work well in a distributed environment. Zebra has a single file manager and xFS improves that to multiple p2p file/metadata managers. Improving on Zebra, xFS also dynamically clusters disks into stripe groups to allow the system to scale to large numbers of storage servers.
xFS architecture
All data, metadata, and control can be located anywhere in the system and can be dynamically migrated during operation. xFS splits management among several metadata managers. xFS also replaces the server cache with cooperative caching that forwards data among client caches under the control of the managers.

In xFS there are four types of entities: clients, storage servers, managers, and cleaners. The first three are straightforward, so let's just explain cleaners. In a log-structured filesystem, since new blocks are always appended, some of these new blocks invalidate blocks in old segments by providing a newer version. These invalidated blocks need to be garbage collected as they waste space. Cleaners take care of this process. xFS prescribes a distributed cleaner so that cleaning can keep up with the aggregate write rate of the system.
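The cleaning idea can be sketched as follows. This is a minimal illustration of log cleaning in general, not xFS's actual (distributed) cleaner; the log and index representations are assumptions.

```python
def clean(log, index):
    """log: list of (block_id, data) appends, oldest first.
    index: block_id -> position of the latest copy of that block.
    Returns a compacted log keeping only live (latest-version) blocks."""
    live = []
    for pos, (block_id, data) in enumerate(log):
        if index[block_id] == pos:      # latest version => still live
            live.append((block_id, data))
        # otherwise this block was superseded by a later append: garbage
    return live

log = [("a", "v1"), ("b", "v1"), ("a", "v2")]   # "a" was overwritten
index = {"a": 2, "b": 1}
print(clean(log, index))   # [('b', 'v1'), ('a', 'v2')]
```

In a real system the live blocks are rewritten at the head of the log and the old segments are then freed for reuse; xFS's contribution is spreading this work across machines instead of funneling it through one cleaner.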
The key challenge for xFS is locating data and metadata. Four key maps are used for this purpose: the manager map, the imap, file directories, and the stripe group map. The manager map allows clients to determine which manager to contact for a file. The manager map is small and is globally replicated to all of the managers and clients in the system to improve performance. The imap allows each manager to locate where its files are stored in the on-disk log. File directories provide a mapping from a human-readable name to a metadata locator called an index number. The stripe group map provides mappings from segment identifiers embedded in disk log addresses to the set of physical machines storing the segments.
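Chaining the four maps together gives the lookup path for a read. The sketch below is a toy model with made-up names, values, and a hypothetical hash-style manager map; it only shows how the maps compose, not the paper's actual data structures.

```python
directory = {"/home/paper.txt": 42}            # file name -> index number
manager_map = lambda index_no: index_no % 4    # index number -> manager id
imap = {42: ("segment-7", 128)}                # index number -> log address
stripe_group_map = {"segment-7": ["ss1", "ss2", "ss3"]}  # segment -> servers

def locate(path):
    index_no = directory[path]          # file directory lookup
    manager = manager_map(index_no)     # which manager controls this file
    segment, offset = imap[index_no]    # where the metadata lives in the log
    servers = stripe_group_map[segment] # which machines store that segment
    return manager, segment, offset, servers

print(locate("/home/paper.txt"))
# (2, 'segment-7', 128, ['ss1', 'ss2', 'ss3'])
```

The point of the indirection is that each map can change independently: files can move between managers, and segments can move between stripe groups, without touching directories.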

The authors have built a prototype of xFS and presented performance results on a 32-node cluster of SPARCStation 10's and 20's. The evaluations highlight the scalability of xFS to an increasing number of clients.
