File Systems

This section provides an overview of file systems on Linux and discusses the virtual file system, the ext2 file system, LVM and RAID, volume groups, device special files, and devfs.

Virtual File System (VFS)

One of the most important features of Linux is its support for many different file systems. This makes it very flexible and able to coexist with many other operating systems. The virtual file system (VFS) allows Linux to support many, often very different, file systems, each of which presents a common software interface to the VFS. The details of each file system are translated so that all file systems appear identical to the rest of the Linux kernel and to programs running on the system. As a result, the VFS layer allows many different file systems to be mounted transparently at the same time.
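
The mechanism behind this common interface can be pictured as a table of function pointers that every file system fills in. The following C sketch is purely illustrative and greatly simplified; the kernel's real interface is the much richer struct file_operations and its related structures:

    #include <stddef.h>
    #include <sys/types.h>

    /*
     * Illustrative sketch only: every file system supplies the same
     * table of operations, so the rest of the kernel can call open,
     * read, and write without knowing which file system it is using.
     */
    struct vfs_ops {
        int     (*open)(const char *path, int flags);
        ssize_t (*read)(int fd, void *buf, size_t count);
        ssize_t (*write)(int fd, const void *buf, size_t count);
        int     (*close)(int fd);
    };

    /* Generic code dispatches through the table; ext2, NFS, and so
     * on each provide their own implementation behind it. */
    ssize_t vfs_read(const struct vfs_ops *fs, int fd, void *buf, size_t count)
    {
        return fs->read(fd, buf, count);
    }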

The Linux virtual file system is implemented so that access to its files is as fast and efficient as possible. It must also make sure that the files and their data are maintained correctly.

ext2fs

ext2fs was one of the earliest native Linux file systems and remains among the most widely used and popular. It is highly robust compared to other file systems and supports all the features a typical file system is expected to provide, such as the capability to create, modify, and delete file system objects: files, directories, hard links, soft links, device special files, sockets, and pipes. However, a system crash can leave an ext2 file system in an inconsistent state; the entire file system must then be validated and corrected before it can be remounted. This long delay is sometimes unacceptable in production environments and can be irritating to the impatient user. The problem is solved by journaling, which is supported by a newer variant of ext2 called the ext3 file system. The basic idea behind journaling is that every file system operation is logged before it is executed. If the machine dies between operations, only the log needs to be replayed to bring the file system back to a consistent state.
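
The ordering requirement at the heart of journaling can be shown in a few lines. The following is a minimal C sketch of the idea only, not ext3's actual interfaces; journal_append(), journal_sync(), apply_update(), and journal_mark_done() are hypothetical stand-ins:

    #include <stdio.h>

    /* Hypothetical helpers -- stand-ins, not real ext3 interfaces. */
    static void journal_append(const char *desc) { printf("log:    %s\n", desc); }
    static void journal_sync(void)               { printf("log flushed to disk\n"); }
    static void apply_update(const char *desc)   { printf("apply:  %s\n", desc); }
    static void journal_mark_done(void)          { printf("log record retired\n"); }

    /* The invariant: the log record reaches disk before the update itself,
     * so after a crash the log alone is enough to restore consistency. */
    static void journaled_write(const char *desc)
    {
        journal_append(desc);   /* 1. describe the operation in the journal */
        journal_sync();         /* 2. force the record to stable storage    */
        apply_update(desc);     /* 3. only now touch the real metadata      */
        journal_mark_done();    /* 4. retire the record once the update lands */
    }

    int main(void)
    {
        journaled_write("allocate block 42 to inode 7");
        return 0;
    }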

LVM and RAID

Volume managers provide a logical abstraction of a computer's physical storage devices and can be implemented for several reasons. On systems with a large number of disks, volume managers can combine several disks into a single logical unit to provide increased total storage space as well as data redundancy. On systems with a single disk, volume managers can divide that space into multiple logical units, each for a different purpose. In general, a volume manager is used to hide the physical storage characteristics from the file systems and higher-level applications.

Redundant Array of Inexpensive Disks (RAID) is a type of volume management that is used to combine multiple physical disks for the purpose of providing increased I/O throughput or improved data redundancy. There are several RAID levels, each providing a different combination of the physical disks and a different set of performance and redundancy characteristics. Linux provides four different RAID levels:

  • RAID-Linear is a simple concatenation of the disks that comprise the volume. The size of this type of volume is the sum of the sizes of all the underlying disks. This RAID level provides no data redundancy. If one disk in the volume fails, the data stored on that disk is lost.

  • RAID-0 is simple striping. Striping means that as data is written to the volume, it is interleaved in equal-sized "chunks" across all disks in the volume. In other words, the first chunk of the volume is written to the first disk, the second chunk of the volume is written to the second disk, and so on. After the last disk in the volume is written to, it cycles back to the first disk and continues the pattern. This RAID level provides improved I/O throughput.

  • RAID-1 is mirroring. In a mirrored volume, all data is replicated on all disks in the volume. This means that a RAID-1 volume created from n disks can survive the failure of n-1 of those disks. In addition, because all disks in the volume contain the same data, reads to the volume can be distributed among the disks, increasing read throughput. On the other hand, a single write to the volume generates a write to each of the disks, causing a decrease in write throughput. Another downside to RAID-1 is the cost. A RAID-1 volume with n disks costs n times as much as a single disk but only provides the storage space of a single disk.

  • RAID-5 is striping with parity. This is similar to RAID-0, but one chunk in each stripe contains parity information instead of data. Using this parity information, a RAID-5 volume can survive the failure of any single disk in the volume. Like RAID-0, RAID-5 can provide increased read throughput by splitting large I/O requests across multiple disks. However, write throughput can be degraded, because each write request also needs to update the parity information for that stripe. The sketch following this list illustrates both the striping layout and the parity calculation.
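
The two mechanisms behind these RAID levels, chunk striping and XOR parity, are easy to demonstrate. The C sketch below is illustrative only; the real Linux md driver uses a more elaborate layout (rotating parity, configurable chunk sizes):

    #include <stdio.h>

    #define NDISKS 4   /* e.g. for RAID-5: 3 data chunks + 1 parity per stripe */

    /* RAID-0 striping: chunk i of the volume lands on disk i % NDISKS. */
    static int chunk_to_disk(long chunk) { return (int)(chunk % NDISKS); }

    /* RAID-5 parity: the XOR of the data chunks in a stripe. Any one
     * missing chunk equals the XOR of the survivors plus parity. */
    static unsigned char parity(const unsigned char d[], int n)
    {
        unsigned char p = 0;
        for (int i = 0; i < n; i++)
            p ^= d[i];
        return p;
    }

    int main(void)
    {
        unsigned char stripe[3] = { 0x5a, 0x3c, 0xf0 };  /* one byte per data chunk */
        unsigned char p = parity(stripe, 3);

        /* Simulate losing stripe[1]: rebuild it from the survivors. */
        unsigned char rebuilt = stripe[0] ^ stripe[2] ^ p;
        printf("lost 0x%02x, rebuilt 0x%02x\n", stripe[1], rebuilt);
        printf("chunk 5 lives on disk %d\n", chunk_to_disk(5));
        return 0;
    }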

Volume Groups

The concept of volume-groups (VGs) is used in many different volume managers.

A volume-group is a collection of disks, also called physical-volumes (PVs). The storage space provided by these disks is then used to create logical-volumes (LVs).

The main benefit of volume-groups is the abstraction between the logical- and physical-volumes. The VG takes the storage space from the PVs and divides it into fixed-size chunks called physical-extents (PEs). An LV is then created by assigning one or more PEs to the LV. This assignment can be done in any arbitrary order; there is no dependency on the underlying order of the PVs, or on the order of the PEs on a particular PV. This allows LVs to be easily resized. If an LV needs to be expanded, any unused PE in the group can be assigned to the end of that LV. If an LV needs to be shrunk, the PEs assigned to the end of that LV are simply freed.
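
A toy model makes this mechanism concrete. The following C sketch is not LVM's real metadata format; it simply treats an LV as an ordered list of PE numbers, so that growing the LV appends any free PE and shrinking it frees the tail:

    #include <stdio.h>

    #define MAX_PES 16

    static int pe_free[MAX_PES];      /* 1 = unassigned physical-extent */

    struct lv {
        int pe[MAX_PES];              /* extent map: LV extent -> PE number */
        int len;
    };

    static int lv_extend(struct lv *lv)
    {
        for (int pe = 0; pe < MAX_PES; pe++) {
            if (pe_free[pe]) {        /* any free PE will do, in any order */
                pe_free[pe] = 0;
                lv->pe[lv->len++] = pe;
                return pe;
            }
        }
        return -1;                    /* volume-group is full */
    }

    static void lv_shrink(struct lv *lv)
    {
        if (lv->len > 0)
            pe_free[lv->pe[--lv->len]] = 1;   /* free the tail extent */
    }

    int main(void)
    {
        struct lv lv = { .len = 0 };
        for (int i = 0; i < MAX_PES; i++)
            pe_free[i] = 1;
        lv_extend(&lv);
        lv_extend(&lv);
        printf("LV has %d extents; first maps to PE %d\n", lv.len, lv.pe[0]);
        lv_shrink(&lv);
        return 0;
    }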

The volume-group itself is also easily resizeable. A new physical-volume can be added to the VG, and the storage space on that PV becomes new, unassigned physical-extents. These new PEs can then be used to expand existing LVs or to create new LVs. Also, a PV can be removed from the VG if none of its PEs are assigned to any LVs.

In addition to expanding and shrinking the LVs, data on the LVs can be "moved" around within the volume-group. This is done by reassigning an extent in the LV to a different, unused PE somewhere else in the VG. When this reassignment takes place, the data from the old PE is copied to the new PE, and the old PE is freed.
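
In the same toy model, moving an extent is just a copy followed by a remap. Here pe_data is a hypothetical stand-in for the actual on-disk extents:

    #include <stdio.h>
    #include <string.h>

    #define NUM_PES 16
    #define PE_SIZE 4096

    static unsigned char pe_data[NUM_PES][PE_SIZE];  /* stand-in for disk extents */
    static int pe_free[NUM_PES];

    /* "Move" the data at the mapped PE to free PE dst, then remap. */
    static void move_extent(int *map_entry, int dst)
    {
        int src = *map_entry;
        memcpy(pe_data[dst], pe_data[src], PE_SIZE); /* copy old PE to new PE  */
        *map_entry = dst;                            /* LV now points at dst   */
        pe_free[dst] = 0;
        pe_free[src] = 1;                            /* old PE is free again   */
    }

    int main(void)
    {
        int lv_map[1] = { 3 };       /* a one-extent LV mapped to PE 3 */
        pe_free[7] = 1;              /* PE 7 is unused */
        move_extent(&lv_map[0], 7);
        printf("extent now lives on PE %d\n", lv_map[0]);
        return 0;
    }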

The PVs in a volume-group do not need to be individual disks. They can also be RAID volumes. This allows a user to get the benefit of both types of volume management. For instance, a user might create multiple RAID-5 volumes to provide data redundancy, and then use each of these RAID-5 volumes as a PV for a volume-group. Logical-volumes can then be created that span multiple RAID-5 volumes.

Device Special Files

A typical Linux system has at least one hard disk, a keyboard, and a console. These devices are handled by their corresponding device drivers. But how does a user-level application access a hardware device? Device special files are the interface the operating system provides to applications for accessing devices. Also called device nodes, these files reside in the /dev directory and contain a major and minor number pair that identifies the device they support. Otherwise, device special files are like normal files, with a name, ownership, and access permissions.

There are two kinds of device special files: block devices and character devices. Block devices allow block-level access to the data residing on the device, and character devices allow character-level access to the device. When you issue the ls -l command on a device, if the returned permission string starts with a b, it is a block device; if it starts with a c, it is a character device.
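
A program can make the same distinction with the standard stat(2) interface: the S_ISBLK() and S_ISCHR() macros test the device type, and the major() and minor() macros (from <sys/sysmacros.h> with glibc) extract the number pair from the st_rdev field. For example:

    #include <stdio.h>
    #include <sys/types.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>   /* major(), minor() */

    /* Print the device type and major/minor pair of a device node. */
    int main(int argc, char *argv[])
    {
        struct stat st;

        if (argc != 2) {
            fprintf(stderr, "usage: %s /dev/node\n", argv[0]);
            return 1;
        }
        if (stat(argv[1], &st) != 0) {
            perror("stat");
            return 1;
        }
        if (S_ISBLK(st.st_mode))
            printf("block device,     major %u minor %u\n",
                   major(st.st_rdev), minor(st.st_rdev));
        else if (S_ISCHR(st.st_mode))
            printf("character device, major %u minor %u\n",
                   major(st.st_rdev), minor(st.st_rdev));
        else
            printf("not a device special file\n");
        return 0;
    }

Running the program on /dev/tty reports a character device, for instance, while a disk such as /dev/sda reports a block device.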

devfs

The virtual file system devfs manages the names of all the devices. devfs is an alternative to the special block and character device nodes that reside on the root file system. devfs removes the administrative burden of creating a device node for each device in the system; this job is handled automatically by devfs. Device drivers can register devices with devfs by name instead of through the traditional major-minor number scheme. As a result, the device namespace is not limited by the number of major and minor numbers.

A system administrator can mount the devfs file system many times at different mount points, and a change to a device node is reflected at all of the mount points. Also, the devfs namespace exists in the kernel even before it is mounted, which essentially makes the availability of device nodes independent of the availability of the root file system.

With the traditional approach, a device node is created in the /dev directory for every conceivable device in the system, whether or not the device is actually present. With devfs, only the entries for devices that actually exist are maintained.
