8.1 Basic Filesystem Characteristics

The basic premise of most filesystems is to store our data in such a way that it:

  • Is relatively easy to retrieve.

  • Is accessible only to authorized users.

  • Can be retrieved with minimal impact on overall system performance.

This last point, relating to performance, has always been the Holy Grail of filesystem designers. Disks are invariably the slowest components in our systems. To minimize their impact, filesystem designers employ miraculous sleight of hand with the underlying filesystem structures, attempting to reduce the time the read/write heads spend traversing the disk, as well as the time spent waiting for the platters to spin the correct sectors under the read/write heads. As disk administrators, we aid this process by employing clever technologies such as striping, mirroring, and RAID in the underlying design of our logical devices.
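As a sketch of the kind of help we can provide at the logical-device level, the HP-UX LVM commands below create a striped and a mirrored logical volume. The volume group name, volume names, sizes, and stripe parameters are purely illustrative, not recommendations for any particular workload:

    # Create a 1024MB logical volume striped across 3 physical volumes,
    # using a 64KB stripe size:
    lvcreate -i 3 -I 64 -L 1024 -n lvdata /dev/vgdata

    # With MirrorDisk/UX installed, a mirrored volume uses the -m option:
    lvcreate -m 1 -L 1024 -n lvmirror /dev/vgdata

Striping spreads consecutive chunks of the volume across several disks so that sequential I/O keeps multiple spindles busy at once, which is exactly the head-movement and rotational-latency problem described above.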

We start our discussions by looking at HFS: the High Performance File System. As we all (probably) know, HFS is no longer the highest-performing filesystem. It serves as a basis for discussing the most prevalent filesystem in HP-UX: VxFS. With its many online capabilities, its use of ACLs, and its performance-related tuning and mount options, VxFS has become the filesystem of choice for today's administrators.

8.1.1 Large Files

I want to get the problem of largefiles out of the way immediately because it applies to all filesystem types. When we create a filesystem, the default behavior, regardless of whether it is HFS or VxFS, is to not support largefiles. To establish whether your filesystem supports largefiles, we use the fsadm -F <hfs|vxfs> <character device file> command. This feature can be turned ON for individual filesystems that require it with the fsadm -F <hfs|vxfs> -o largefiles <filesystem> command, or with a similar option to the newfs command when the filesystem is first created.

A largefile is a file greater than 2GB in size. This might seem like a ridiculously small value these days, and I agree with you. The problem is that the computing industry in general can't really decide how to treat largefiles. The issue harkens back to the days of 32-bit operating systems. In a 32-bit operating system, we have an address range of 2^32 = 4GB. When we seek around in a file, we supply an address offset from our current position. Because an offset can go backward as well as forward, it is a signed integer; consequently, we don't have the entire 32 bits, but only 31 bits, to specify our offset. A 31-bit address range is 2^31 bytes = 2GB.

Traditional UNIX commands like tar, cpio, dump, and restore cannot safely handle largefiles (or user IDs greater than 60,000); they are limited to files up to 2GB in size. If we are to use largefiles in our filesystems, we must understand this limitation, because some third-party backup/check-pointing routines are actually simple interfaces to a traditional UNIX command such as cpio. All HP-UX filesystems support largefiles, with the largest file (and filesystem) currently being 2TB in size (a 41-bit address range; at the moment, this seems adequate in most situations). Managing files of this size will require special, non-standard backup/check-pointing routines. We should check with our application suppliers to find out how they want us to deal with these issues.
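As a concrete illustration, the commands below show how this might look for a VxFS filesystem. The device file is illustrative, and the exact report format varies by OS release:

    # Report whether the filesystem on this volume supports largefiles
    # (prints "largefiles" or "nolargefiles"):
    fsadm -F vxfs /dev/vg00/rlvol4

    # Turn largefiles ON for the existing filesystem:
    fsadm -F vxfs -o largefiles /dev/vg00/rlvol4

    # Alternatively, enable largefiles when the filesystem is first created:
    newfs -F vxfs -o largefiles /dev/vg00/rlvol4

The same -o largefiles option applies to HFS via fsadm -F hfs and newfs -F hfs, with the caveat that HFS filesystems generally need to be unmounted before fsadm can convert them.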


