Although they are not directly related to the actual "security" aspects of data access, the safety and stability of a filesystem play an important role in data security. You can implement the tightest security known on the planet, but if you, as the data's legitimate owner, cannot access your data because of a filesystem failure, security is a moot issue.

Every file stored on a Linux/Unix filesystem must be consistent with the attributes, or metadata, associated with it. The metadata includes such information as the file type, permissions, owner, size, time stamps, and pointers to the file's data blocks in a partition; it is stored in a structure called an inode. The problem with maintaining information about a file separately from the file's actual contents is consistency: a file's metadata must correctly describe the file before the file can be accessed. The kernel always writes the data before the metadata because, until the data is written, it has no idea where on the disk the data will reside. If an unexpected system crash occurs after the data is written but before the metadata is recorded, that data is lost, because without the metadata there is no record of where on the disk it was stored.

The problem gets even worse if the kernel was writing to metadata areas, such as the directory itself. Then, instead of one corrupted file, you have a corrupted filesystem; in other words, you can lose an entire directory or all the data on an entire disk partition. On a large system, this can mean hundreds of thousands of files. Even if you have a backup, restoring such a large amount of data can take a long time.

NOTE

Whenever a Linux server is shut down ungracefully, the system runs the fsck routine on restart to check the disks for errors and attempts to correct any inconsistencies. This process can be very time consuming (much like VREPAIR on a traditional NetWare volume), especially given today's large-capacity disks.
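The metadata stored in an inode can be inspected programmatically. The following is a small Python sketch (the filename "example.txt" is purely illustrative) that creates a file and reads back the same attributes described above via `os.stat()`:

```python
import os
import stat
import time

# Create a small file so we have an inode to inspect
# (the name "example.txt" is just an illustration).
path = "example.txt"
with open(path, "w") as f:
    f.write("hello\n")

st = os.stat(path)  # reads the inode's metadata, not the file contents

print("inode number:     ", st.st_ino)
print("type/permissions: ", stat.filemode(st.st_mode))  # e.g. -rw-r--r--
print("owner (uid/gid):  ", st.st_uid, st.st_gid)
print("size in bytes:    ", st.st_size)
print("last modified:    ", time.ctime(st.st_mtime))
```

Note that none of this information comes from the file's data blocks; it all lives in the inode, which is why the two can fall out of sync after a crash.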
The fsck check is also forced once every so many bootups to make sure everything is working properly.

A journaled filesystem addresses this problem by maintaining a log of metadata changes. In a nutshell, the journal is a redo log of changes made to the filesystem. If the system crashes before a sequence of changes is complete, the log can simply be "replayed" to fix the resulting inconsistencies. Therefore, if system uptime and performance are important, you should use a journaled filesystem. A number of such filesystems are available for Linux, including ext3 (basically ext2 with journaling), ReiserFS, XFS, and JFS. The default filesystem used by SLES 9 is ReiserFS.

NOTE

To learn more about the various filesystems, visit www.tldp.org/HOWTO/Filesystems-HOWTO.html.

There are, however, two things to keep in mind when using a journaled filesystem, such as ReiserFS:
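The redo-log idea can be illustrated with a toy sketch. The Python fragment below is not how a real filesystem journal works internally; it merely demonstrates the principle: every metadata change is appended to a journal (and flushed to stable storage) before it is applied, so after a crash the journal can simply be replayed. The file name `journal.log` and the dictionary standing in for on-disk metadata are illustrative assumptions:

```python
import json
import os

JOURNAL = "journal.log"  # hypothetical journal file
metadata = {}            # stands in for the on-disk metadata

def journaled_update(path, mode):
    # 1. Record the intended change in the redo log first,
    #    forcing it out to stable storage...
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"path": path, "mode": mode}) + "\n")
        j.flush()
        os.fsync(j.fileno())
    # 2. ...and only then apply it to the "filesystem".
    metadata[path] = mode

def replay_journal():
    # After a crash, re-apply every logged change. The updates are
    # idempotent, so replaying already-applied entries is harmless.
    with open(JOURNAL) as j:
        for line in j:
            entry = json.loads(line)
            metadata[entry["path"]] = entry["mode"]

journaled_update("/home/alice", "drwxr-xr-x")
journaled_update("/home/bob", "drwx------")

# Simulate a crash that lost the applied changes, then recover:
metadata.clear()
replay_journal()
print(metadata)  # both entries restored from the journal
```

Because the log entry reaches the disk before the change itself, a crash at any point leaves either a replayable record or no change at all; there is no window in which data exists with no record of where it lives.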
SLES 9 supports a number of different filesystems, and you can mix and match them to suit your needs. For instance, your users' home directories may be on a ReiserFS filesystem (the default for SLES 9), but their confidential data can be on an encrypted ext2 filesystem.
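Such a mix is typically declared in `/etc/fstab`, one line per filesystem. The fragment below is only a sketch: the device names, mount points, and the loopback-encryption options are illustrative assumptions and would need to match your actual partitioning and encryption setup:

```
# <device>   <mount point>  <type>    <options>                       <dump> <pass>
/dev/sda2    /              reiserfs  defaults                        0      1
/dev/sda3    /home          reiserfs  defaults                        0      2
/dev/sda5    /home/secure   ext2      noauto,loop,encryption=twofish  0      0
```

The `noauto` option keeps the encrypted partition from being mounted at boot, so it can be mounted on demand after the passphrase is supplied.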