Tuning File Systems



Now that you're familiar with the layout of the various types of file systems supported in Linux, let's examine how to tune the Ext3, ReiserFS, JFS, and XFS file systems. We'll show how to tune each file system by using an external log and by using some of the mount options available for each file system.

Tuning Options for Ext3: Using a Separate Journal Device

External logs improve file system performance because the log updates are saved to a different partition than the corresponding file system, thereby reducing the number of hard disk seeks.

To use an external journal for the Ext3 file system, first run mkfs on the journal device. The block size of the external journal must be the same block size as the Ext3 file system. The example in this section uses the /dev/hdb1 device as the external log for the Ext3 file system.

There are two steps to creating an external log. The first is to format the journal; the second is to format the partition and tell mkfs that the log will be external. In the following example, the Ext3 partition will be on device /dev/hda1, and the external log will be on device /dev/hdb1. The mkfs -b option sets the block size for the file system.

 # mkfs.ext3 -b 4096 -O journal_dev /dev/hdb1
 # mkfs.ext3 -b 4096 -J device=/dev/hdb1 /dev/hda1

The next few examples use the tiobench program to see if an external log helps the performance of this benchmark on an Ext3 file system. The tiobench benchmark is a multithreaded I/O benchmark. It is used to measure file system performance in four basic operations: sequential read, random read, sequential write, and random write.

First, the device /dev/hda1 is formatted as ext3.

 # mkfs.ext3 /dev/hda1
 # mount -t ext3 /dev/hda1 /ext3
 # cd /ext3
 # tar zxvf tiobench-0.3.3.tar.gz
 # cd tiobench-0.3.3
 # make
 # date && ./tiobench.pl --size 500 --numruns 5 --threads 32 && date

The following is sample tiobench output with the log inside the partition.


 Fri Jun 27 10:17:14 PDT 2003
 Run #1: ./tiotest -t 32 -f 15 -r 125 -b 4096 -d . -T
 Run #2: ./tiotest -t 32 -f 15 -r 125 -b 4096 -d . -T
 Run #3: ./tiotest -t 32 -f 15 -r 125 -b 4096 -d . -T
 Run #4: ./tiotest -t 32 -f 15 -r 125 -b 4096 -d . -T
 Run #5: ./tiotest -t 32 -f 15 -r 125 -b 4096 -d . -T

 Unit information
 ================
 File size = megabytes
 Blk Size  = bytes
 Rate      = megabytes per second
 CPU%      = percentage of CPU used during the test
 Latency   = milliseconds
 Lat%      = percent of requests that took longer than X seconds
 CPU Eff   = Rate divided by CPU% - throughput per cpu load

 Sequential Reads
             File  Blk   Num              Avg       Maximum    Lat%     Lat%     CPU
 Identifier  Size  Size  Thr  Rate (CPU%) Latency   Latency    >2s      >10s     Eff
 ----------  ----  ----  ---  ---- ------ --------  ---------  -------  -------  ---
 2.4.20-4GB  500   4096  32   6.93 3.555% 72.342    39522.87   1.56413  0.00000  195

 Random Reads
             File  Blk   Num              Avg       Maximum    Lat%     Lat%     CPU
 Identifier  Size  Size  Thr  Rate (CPU%) Latency   Latency    >2s      >10s     Eff
 ----------  ----  ----  ---  ---- ------ --------  ---------  -------  -------  ---
 2.4.20-4GB  500   4096  32   0.89 1.011% 514.676   2536.44    0.00000  0.00000  88

 Sequential Writes
             File  Blk   Num              Avg       Maximum    Lat%     Lat%     CPU
 Identifier  Size  Size  Thr  Rate (CPU%) Latency   Latency    >2s      >10s     Eff
 ----------  ----  ----  ---  ---- ------ --------  ---------  -------  -------  ---
 2.4.20-4GB  500   4096  32   4.69 6.986% 49.635    23386.91   1.28907  0.00000  67

 Random Writes
             File  Blk   Num              Avg       Maximum    Lat%     Lat%     CPU
 Identifier  Size  Size  Thr  Rate (CPU%) Latency   Latency    >2s      >10s     Eff
 ----------  ----  ----  ---  ---- ------ --------  ---------  -------  -------  ---
 2.4.20-4GB  500   4096  32   0.53 0.434% 0.302     573.19     0.00000  0.00000  122

 Fri Jun 27 10:37:14 PDT 2003
 # umount /ext3

Next, to determine whether an external log increases the performance of the file system under this file system benchmark, change the configuration to have an external log. In the example, the external log is located on /dev/hdb1.


 # mkfs.ext3 -b 4096 -O journal_dev /dev/hdb1
 # mkfs.ext3 -b 4096 -J device=/dev/hdb1 /dev/hda1
 # mount -t ext3 /dev/hda1 /ext3
 # cd /ext3
 # tar zxvf tiobench-0.3.3.tar.gz
 # cd tiobench-0.3.3
 # make
 # date && ./tiobench.pl --size 500 --numruns 5 --threads 32 && date

The following is the tiobench output with the log external.

 Fri Jun 27 11:10:17 PDT 2003
 Run #1: ./tiotest -t 32 -f 15 -r 125 -b 4096 -d . -T
 Run #2: ./tiotest -t 32 -f 15 -r 125 -b 4096 -d . -T
 Run #3: ./tiotest -t 32 -f 15 -r 125 -b 4096 -d . -T
 Run #4: ./tiotest -t 32 -f 15 -r 125 -b 4096 -d . -T
 Run #5: ./tiotest -t 32 -f 15 -r 125 -b 4096 -d . -T

 Unit information
 ================
 File size = megabytes
 Blk Size  = bytes
 Rate      = megabytes per second
 CPU%      = percentage of CPU used during the test
 Latency   = milliseconds
 Lat%      = percent of requests that took longer than X seconds
 CPU Eff   = Rate divided by CPU% - throughput per cpu load

 Sequential Reads
             File  Blk   Num              Avg       Maximum    Lat%     Lat%     CPU
 Identifier  Size  Size  Thr  Rate (CPU%) Latency   Latency    >2s      >10s     Eff
 ----------  ----  ----  ---  ---- ------ --------  ---------  -------  -------  ---
 2.4.20-4GB  500   4096  32   6.54 3.496% 83.189    40436.07   2.41211  0.00081  187

 Random Reads
             File  Blk   Num              Avg       Maximum    Lat%     Lat%     CPU
 Identifier  Size  Size  Thr  Rate (CPU%) Latency   Latency    >2s      >10s     Eff
 ----------  ----  ----  ---  ---- ------ --------  ---------  -------  -------  ---
 2.4.20-4GB  500   4096  32   0.89 0.890% 540.200   2620.12    0.00000  0.00000  100

 Sequential Writes
             File  Blk   Num              Avg       Maximum    Lat%     Lat%     CPU
 Identifier  Size  Size  Thr  Rate (CPU%) Latency   Latency    >2s      >10s     Eff
 ----------  ----  ----  ---  ---- ------ --------  ---------  -------  -------  ---
 2.4.20-4GB  500   4096  32   4.71 6.682% 53.069    23588.61   1.31673  0.00000  71

 Random Writes
             File  Blk   Num              Avg       Maximum    Lat%     Lat%     CPU
 Identifier  Size  Size  Thr  Rate (CPU%) Latency   Latency    >2s      >10s     Eff
 ----------  ----  ----  ---  ---- ------ --------  ---------  -------  -------  ---
 2.4.20-4GB  500   4096  32   0.53 0.432% 0.278     404.22     0.00000  0.00000  122

 Fri Jun 27 11:30:35 PDT 2003

Because tiobench does not create a large amount of metadata activity, there is no benefit to having an external log. In terms of time, the tiobench program took an additional 18 seconds to complete when the log was on an external device.
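The 18-second figure can be checked directly from the date stamps in the two transcripts. A quick way to subtract wall-clock times is epoch-seconds arithmetic; this sketch assumes a GNU userland (GNU date's -d option):

```shell
#!/bin/sh
# Convert each wall-clock stamp from the transcripts above to epoch
# seconds (GNU date -d), then subtract to get each run's duration.
internal=$(( $(date -d "10:37:14" +%s) - $(date -d "10:17:14" +%s) ))
external=$(( $(date -d "11:30:35" +%s) - $(date -d "11:10:17" +%s) ))
echo "internal log: ${internal}s"
echo "external log: ${external}s"
echo "difference:   $(( external - internal ))s"
```

The external-log run comes out 1218 seconds versus 1200 seconds, the 18-second difference noted above.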

With the dbench benchmark, which creates a very large amount of metadata activity, the results show a decrease in the amount of time needed to run the benchmark and an increase in throughput for what the benchmark measures. Therefore, determining the metadata activity for your system helps determine the type of tuning that will be most useful.

In the next few examples, dbench is run first with the log inside the partition and then with the log external to the partition.

 # mkfs.ext3 /dev/hda1
 # mount -t ext3 /dev/hda1 /ext3
 # cd /ext3
 # tar zxvf dbench-1.2.tar.gz
 # cd dbench
 # make
 # date && ./dbench 20 && date

The following is the output from dbench.

 Fri Jun 27 14:36:39 PDT 2003
 .......................+..................+..+..+20 clients started
 Throughput 15.443 MB/sec (NB=19.3037 MB/sec  154.43 MBit/sec)
 Fri Jun 27 14:39:30 PDT 2003

In the next example, the log is changed to use /dev/hdb1 as the external log device. When dbench is run again, the throughput increases from 15.443 MBps to 17.2484 MBps. The time taken to run the program was reduced from 2 minutes and 51 seconds to 2 minutes and 33 seconds.
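The gain can also be expressed as a percentage of the baseline by dividing the two reported rates; here awk is used simply as a calculator:

```shell
#!/bin/sh
# Percent improvement in dbench throughput after moving the ext3
# journal to an external device (rates taken from the runs above).
awk 'BEGIN {
    internal = 15.443
    external = 17.2484
    printf "%.1f%% higher throughput with the external log\n", (external - internal) / internal * 100
}'
```

That works out to roughly an 11.7% improvement.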

 # mkfs.ext3 -b 4096 -O journal_dev /dev/hdb1
 # mkfs.ext3 -b 4096 -J device=/dev/hdb1 /dev/hda1
 # mount -t ext3 /dev/hda1 /ext3
 # cd /ext3
 # tar zxvf dbench-1.2.tar.gz
 # cd dbench
 # make
 # date && ./dbench 20 && date

The following is the output from dbench.

 Fri Jun 27 14:52:13 PDT 2003
 .................................+...........+.+20 clients started
 Throughput 17.2484 MB/sec (NB=21.5605 MB/sec 172.484 MBit/sec)
 Fri Jun 27 14:54:46 PDT 2003
 # umount /ext3

Tuning Options for ReiserFS: Go Faster with an External Log

External logs improve file system performance because the log updates are saved to a different partition than the corresponding file system, thereby reducing the number of hard disk seeks.

To create a ReiserFS file system with the log on an external device, your system must have at least two unused partitions. The test system used in the following examples has spare partitions /dev/hda1 and /dev/hdb1. In the examples, the /dev/hdb1 partition is used for the external log.

 # mkreiserfs -j /dev/hdb1 /dev/hda1

In the following example, the dbench program creates file system activity. The default mount option is used with an external log on device /dev/hdb1.

 # mount -t reiserfs /dev/hda1 /reiserfs
 # cd /reiserfs
 # tar zxvf dbench-1.2.tar.gz
 # cd dbench
 # make
 # date && ./dbench 15 && date

The following is the output from dbench.

 Sat Jun 28 10:23:06 PDT 2003
 .................................+...........+.+15 clients started
 Throughput 21.7191 MB/sec (NB=27.189 MB/sec 217.191 MBit/sec)
 Sat Jun 28 10:24:37 PDT 2003

The next example uses the notail mount option to increase the performance of the file system. The notail option disables tail packing, the ReiserFS feature that stores small files and the tails of larger files directly in the tree.

 # mount -t reiserfs -o notail /dev/hda1 /reiserfs
 # cd /reiserfs
 # tar zxvf dbench-1.2.tar.gz
 # cd dbench
 # make
 # date && ./dbench 15 && date

The following is the output from dbench.

 Sat Jun 28 10:28:42 PDT 2003
 .................................+...........+.+15 clients started
 Throughput 25.8765 MB/sec (NB=32.3456 MB/sec 258.765 MBit/sec)
 Sat Jun 28 10:29:59 PDT 2003
 # cd /
 # umount /reiserfs

By adding the notail mount option, the throughput of the ReiserFS file system running the dbench program with 15 clients increased from 21.7191 MBps to 25.8765 MBps. The time to run the program went from 1 minute and 31 seconds to 1 minute and 17 seconds.
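To keep notail in effect across reboots, the option can be placed in the file system's /etc/fstab entry rather than passed to mount by hand. The following line is illustrative only; it matches the device and mount point used in this example:

```
/dev/hda1    /reiserfs    reiserfs    notail    0 0
```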

Tuning Options for JFS: Go Faster with an External Log

External logs improve file system performance because the log updates are saved to a different partition than the corresponding file system, thereby reducing the number of hard disk seeks.

The following examples first create a baseline for the file system by using the default option of placing the file system's log inside the volume. The test program is then executed again with the log on the external device /dev/hdb1.

 # mkfs.jfs /dev/hda1 # mount -t jfs /dev/hda1 /jfs 

The stress.sh script, which creates a high number of metadata changes, shows the benefit of using an external log.

 #!/bin/sh
 for count in $(seq 1 30); do
   echo Count: $count
   mkdir a
   for i in $(seq 1 10000); do
     echo 0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz > a/$i
   done
   mkdir b
   for j in $(seq 1 10000); do
     ln -s $(pwd)/a/$j b/$j
   done
   rm -fr b
   rm -fr a
 done

 # cd /jfs
 # mkdir test

If you want to try this example on your own machine, place the stress.sh script in the /jfs/test subdirectory.

 # date && ./stress.sh && date

The following is the output from the stress.sh script.

 Sat Jun 28 10:47:27 PDT 2003
 Count: 1
 Count: 2
 ...
 Count: 30
 Sat Jun 28 12:48:35 PDT 2003
 # umount /jfs

Tuning JFS with jfs_tune

The jfs_tune utility can change the location of the journal. One way to increase file system performance is to move the journal to an external device.

The first step is to create a journal on an external device /dev/hdb1 by using mkfs.jfs.

 # mkfs.jfs -J journal_dev /dev/hdb1 

The next step is to attach that external journal to the file system that is located on /dev/hda1.

 # jfs_tune -J device=/dev/hdb1 /dev/hda1
 # mount -t jfs /dev/hda1 /jfs
 # date && ./stress.sh && date

The following is the output from the stress.sh script.

 Mon Jun 30 02:42:08 PDT 2003
 Count: 1
 Count: 2
 ...
 Count: 30
 Mon Jun 30 04:39:08 PDT 2003

With an external log, the test program execution time was reduced by 4 minutes and 8 seconds.
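That 4-minute-8-second reduction can be verified from the date stamps of the two stress.sh transcripts; as before, this sketch assumes GNU date:

```shell
#!/bin/sh
# Duration of each stress.sh run, computed from the transcript
# timestamps above via epoch-seconds arithmetic (GNU date -d).
internal=$(( $(date -d "12:48:35" +%s) - $(date -d "10:47:27" +%s) ))
external=$(( $(date -d "04:39:08" +%s) - $(date -d "02:42:08" +%s) ))
echo "internal log: ${internal}s"
echo "external log: ${external}s"
echo "saved:        $(( internal - external ))s"
```

The runs take 7268 and 7020 seconds, a saving of 248 seconds (4 minutes and 8 seconds).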

Tuning Options for XFS

The examples in this section show three ways of tuning the XFS file system for running the dbench utility. The first example uses the defaults to format an XFS partition, with the log inside the partition. The second example uses the mount options logbufs and logbsize. The third example uses an external log and the two mount options.

Using the Defaults

In the following example, an XFS partition is formatted with the log inside the partition.

 # mkfs.xfs -f /dev/hda1
 # mount -t xfs /dev/hda1 /xfs
 # cd /xfs
 # tar zxvf dbench-1.2.tar.gz
 # cd dbench
 # make
 # date && ./dbench 30 && date

The output from dbench is as follows:

 Fri Jun 27 15:48:52 PDT 2003
 .+++++30 clients started
 Throughput 1.45512 MB/sec (NB=1.8189 MB/sec  14.5512 MBit/sec)
 Fri Jun 27 16:34:13 PDT 2003

Using logbufs and logbsize

The example in this section shows how to tune an XFS file system using the mount options logbufs and logbsize.

The logbufs option sets the number of log buffers held in memory; valid values range from 2 to 8. Eight is the default for file systems created with a 64KB block size, 4 for file systems created with a 32KB block size, 3 for file systems created with a 16KB block size, and 2 for other block sizes. When logbufs is set to the maximum, more transactions can be active at once, and metadata changes can still be performed while the log is being synced to disk. However, if a crash occurs, more metadata changes are likely to be lost than with a smaller setting. The logbsize option sets the size of each log buffer held in memory.
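The defaults just described can be summarized as a small lookup. This helper function is only a sketch of the mapping from the text, not part of any XFS tool:

```shell
#!/bin/sh
# Default number of XFS in-memory log buffers (logbufs) for a given
# file system block size in bytes, per the defaults described above.
default_logbufs() {
    case "$1" in
        65536) echo 8 ;;   # 64KB block size
        32768) echo 4 ;;   # 32KB block size
        16384) echo 3 ;;   # 16KB block size
        *)     echo 2 ;;   # all other block sizes
    esac
}
default_logbufs 65536    # prints 8
default_logbufs 4096     # prints 2
```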

 # mkfs.xfs -f /dev/hda1
 # mount -t xfs -o logbufs=8,logbsize=32768 /dev/hda1 /xfs
 # cd dbench
 # date && ./dbench 30 && date

The following is the output from dbench.

 Sat Jun 28 02:01:36 PDT 2003
 +.............++.++.....++.++++++.+++++++++++30 clients started
 ******************************
 Throughput 1.97025 MB/sec (NB=2.46281 MB/sec 19.7025 MBit/sec)
 Sat Jun 28 02:35:05 PDT 2003

Placing the Log on an External Device

External logs improve file system performance because the log updates are saved to a different partition than the corresponding file system, thereby reducing the number of hard disk seeks.

The example in this section runs dbench with the same parameters as in the previous examples, but the log is placed on external device /dev/hdb1.

 # mkfs.xfs -l logdev=/dev/hdb1,size=32768b -f /dev/hda1
 # mount -t xfs -o logbufs=8,logbsize=32768,logdev=/dev/hdb1 /dev/hda1 /xfs

The mount command can be used to check the mount options for each file system, as shown in the following example.

 # mount
 /dev/hdb6 on / type reiserfs (rw)
 proc on /proc type proc (rw)
 devpts on /dev/pts type devpts (rw,mode=0620,gid=5)
 shmfs on /dev/shm type shm (rw)
 usbdevfs on /proc/bus/usb type usbdevfs (rw)
 /dev/hda1 on /xfs type xfs (rw,logbufs=8,logbsize=32768,logdev=/dev/hdb1)
 # cd dbench
 # date && ./dbench 30 && date

The following is the output from dbench.

 Sat Jun 28 02:57:08 PDT 2003
 .....+..........+........+.....+..+..+++....+30 clients started
 Throughput 18.9072 MB/sec (NB=23.634 MB/sec 189.072 MBit/sec)
 Sat Jun 28 03:00:38 PDT 2003

When the logbufs and logbsize mount options are added, the throughput increases from 1.45512 MBps to 1.97025 MBps. When the log is moved to an external device, the throughput increases to 18.9072 MBps. Clearly, the external log increases file system performance under a test program that has a large amount of metadata activity.
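Expressed as ratios of the reported rates (again using awk as a calculator), the three configurations compare as follows:

```shell
#!/bin/sh
# Relative dbench throughput of the three XFS configurations above:
# baseline (internal log, default mount), tuned mount options, and
# tuned mount options plus an external log.
awk 'BEGIN {
    base = 1.45512; tuned = 1.97025; extlog = 18.9072
    printf "mount options alone: %.1fx the baseline\n", tuned / base
    printf "with external log:   %.1fx the baseline\n", extlog / base
}'
```

The mount options alone give roughly a 1.4x gain, while adding the external log yields about a 13-fold improvement over the untuned baseline.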

Performance Tuning for Linux Servers
ISBN: 0137136285
Year: 2006
Pages: 254