8.4 The aufs Storage Scheme


The aufs storage scheme has evolved out of the very first attempt to improve Squid's disk I/O response time. The "a" stands for asynchronous I/O. The only difference between the default ufs scheme and aufs is that I/Os aren't executed by the main Squid process. The data layout and format are the same, so you can easily switch between the two schemes without losing any cache data.

aufs uses a number of thread processes for disk I/O operations. Each time Squid needs to read, write, open, close, or remove a cache file, the I/O request is dispatched to one of the thread processes. When the thread completes the I/O, it signals the main Squid process and returns a status code. Actually, in Squid 2.5, certain file operations aren't executed asynchronously by default. Most notably, disk writes are always performed synchronously. You can change this by setting ASYNC_WRITE to 1 in src/fs/aufs/store_asyncufs.h and recompiling.
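The change amounts to a one-line edit in that header. This is only a sketch; the exact surrounding lines in your source tree may differ:

 /* src/fs/aufs/store_asyncufs.h (sketch; check your source tree) */ 
 #define ASYNC_WRITE 1   /* 1 = dispatch writes to a thread; 0 = write synchronously (the default) */ 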

The aufs code requires a pthreads library. This is the standard threads interface, defined by POSIX. Even though pthreads is available on many Unix systems, I often encounter compatibility problems and differences. The aufs storage system seems to run well only on Linux and Solaris. Even though the code compiles, you may encounter serious problems on other operating systems.

To use aufs, you must add a special ./configure option:

 % ./configure --enable-storeio=aufs,ufs 

Strictly speaking, you don't really need to specify ufs in the list of storeio modules. However, you might as well, because if you try aufs and don't like it, you'll be able to fall back to the plain ufs storage scheme.

You can also use the --with-aio-threads=N option if you like. If you omit it, Squid automatically calculates the number of threads to use based on the number of aufs cache_dirs. Table 8-1 shows the default number of threads for up to six cache directories.

Table 8-1. Default number of threads for up to six cache directories

cache_dirs    Threads
1             16
2             26
3             32
4             36
5             40
6             44

After you compile aufs support into Squid, you can specify it on a cache_dir line in squid.conf:

 cache_dir aufs /cache0 4096 16 256 

After starting Squid with aufs enabled, make sure everything still works correctly. You may want to run tail -f store.log for a while to make sure that objects are being swapped out to disk. You should also run tail -f cache.log and look for any new errors or warnings.

8.4.1 How aufs Works

Squid creates a number of thread processes by calling pthread_create(). All threads are created upon the first disk activity. Thus, you'll see all the thread processes even if Squid is idle.

Whenever Squid wants to perform some disk I/O operation (e.g., to open a file for reading), it allocates a couple of data structures and places the I/O request into a queue. The thread processes have a loop that takes I/O requests from the queue and executes them. Because the request queue is shared by all threads, Squid uses mutex locks to ensure that only one thread updates the queue at a given time.

The I/O operations block the thread process until they are complete. Then, the status of the operation is placed on a done queue. The main Squid process periodically checks the done queue for completed operations. The module that requested the disk I/O is notified that the operation is complete, and the request or response processing proceeds.

As you may have guessed, aufs can take advantage of systems with multiple CPUs. The only locking that occurs is on the request and result queues; all other functions execute independently. While the main process executes on one CPU, another CPU handles the actual I/O system calls.
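To make the pattern concrete, here is a minimal, stand-alone sketch of the same idea: worker threads created with pthread_create() pull requests from a mutex-protected queue, carry them out, and post the results to a done queue that the main thread drains periodically. This is not Squid's code; the names, structures, and simplifications are purely illustrative.

 #include <pthread.h> 
 #include <stdio.h> 
 #include <stdlib.h> 
 #include <unistd.h> 
 
 #define NTHREADS  4 
 #define NREQUESTS 8 
 
 struct io_request { 
     int id;                      /* identifies the pending operation */ 
     int status;                  /* filled in by the worker thread   */ 
     struct io_request *next; 
 }; 
 
 static struct io_request *request_queue;          /* shared with workers */ 
 static struct io_request *done_queue;             /* results for main    */ 
 static pthread_mutex_t request_lock = PTHREAD_MUTEX_INITIALIZER; 
 static pthread_mutex_t done_lock    = PTHREAD_MUTEX_INITIALIZER; 
 static pthread_cond_t  request_cond = PTHREAD_COND_INITIALIZER; 
 
 /* Worker loop: wait for a request, "execute" it, and move it to the 
  * done queue. Real aufs workers call open(), read(), write(), etc.  */ 
 static void *worker(void *arg) 
 { 
     (void) arg; 
     for (;;) { 
         pthread_mutex_lock(&request_lock); 
         while (request_queue == NULL) 
             pthread_cond_wait(&request_cond, &request_lock); 
         struct io_request *r = request_queue; 
         request_queue = r->next; 
         pthread_mutex_unlock(&request_lock); 
 
         r->status = 0;           /* pretend the I/O succeeded */ 
 
         pthread_mutex_lock(&done_lock); 
         r->next = done_queue; 
         done_queue = r; 
         pthread_mutex_unlock(&done_lock); 
     } 
     return NULL; 
 } 
 
 int main(void) 
 { 
     pthread_t tid[NTHREADS]; 
     int i, completed = 0; 
 
     for (i = 0; i < NTHREADS; i++) 
         pthread_create(&tid[i], NULL, worker, NULL); 
 
     /* Queue up some requests, as Squid's main process does whenever 
      * it needs disk I/O.                                            */ 
     for (i = 0; i < NREQUESTS; i++) { 
         struct io_request *r = malloc(sizeof(*r)); 
         r->id = i; 
         pthread_mutex_lock(&request_lock); 
         r->next = request_queue; 
         request_queue = r; 
         pthread_cond_signal(&request_cond); 
         pthread_mutex_unlock(&request_lock); 
     } 
 
     /* Periodically drain the done queue (compare the check_callback 
      * counter discussed later in this section).                     */ 
     while (completed < NREQUESTS) { 
         usleep(1000); 
         pthread_mutex_lock(&done_lock); 
         while (done_queue != NULL) { 
             struct io_request *r = done_queue; 
             done_queue = r->next; 
             printf("request %d done, status %d\n", r->id, r->status); 
             free(r); 
             completed++; 
         } 
         pthread_mutex_unlock(&done_lock); 
     } 
     return 0; 
 } 

Note that the locks protect only the brief queue manipulations, so the (simulated) I/O work in each worker thread can proceed in parallel with the main thread and with the other workers.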

8.4.2 aufs Issues

An interesting property of threads is that they all share the same resources, including memory and file descriptors. For example, when a thread process opens a file as descriptor 27, all other threads can then access that file with the same descriptor number. As you probably know, file-descriptor shortage is a common problem for first-time Squid administrators. Unix kernels typically have two file-descriptor limits: per process and systemwide. While you might think that 256 file descriptors per process is plenty (because of all the thread processes), it doesn't work that way. In this case, all threads share that small number of descriptors. Be sure to increase your system's per-process file descriptor limit to 4096 or higher, especially when using aufs.
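As a quick illustration of the shared descriptor table (a stand-alone sketch, not Squid code), a descriptor opened in one thread is immediately usable from another, and it counts against the single per-process limit:

 #include <fcntl.h> 
 #include <pthread.h> 
 #include <stdio.h> 
 #include <unistd.h> 
 
 static int shared_fd = -1; 
 
 /* Open a file in a separate thread and publish the descriptor. */ 
 static void *opener(void *arg) 
 { 
     (void) arg; 
     shared_fd = open("/dev/null", O_WRONLY); 
     return NULL; 
 } 
 
 int main(void) 
 { 
     pthread_t t; 
     pthread_create(&t, NULL, opener, NULL); 
     pthread_join(t, NULL); 
 
     /* The same descriptor number works from the main thread, because 
      * every thread in the process shares one descriptor table, and 
      * therefore one per-process descriptor limit.                    */ 
     if (shared_fd != -1 && write(shared_fd, "x", 1) == 1) 
         printf("descriptor %d is usable from any thread\n", shared_fd); 
     return 0; 
 } 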

Tuning the number of threads can be tricky. In some cases, you might see this warning in cache.log:

 2003/09/29 13:42:47 squidaio_queue_request: WARNING - Disk I/O overloading 

It means that Squid has a large number of I/O operations queued up, waiting for an available thread. Your first instinct may be to increase the number of threads. I would suggest, however, that you decrease the number instead.

Increasing the number of threads also increases the queue size. Past a certain point, it doesn't increase aufs's load capacity. It only means that more operations become queued. Longer queues result in higher response times, which is probably something you'd like to avoid.

Decreasing the number of threads, and the queue size, means that Squid can detect the overload condition faster. When a cache_dir is overloaded, it is removed from the selection algorithm (see Section 7.4). Then, Squid either chooses a different cache_dir or simply doesn't store the response on disk. This may be a better situation for your users. Even though the hit ratio goes down, response time remains relatively low.

8.4.3 Monitoring aufs Operation

The Async IO Counters option in the cache manager menu displays a few statistics relating to aufs . It shows counters for the number of open, close, read, write, stat, and unlink requests received. For example:

 % squidclient mgr:squidaio_counts
 ...
 ASYNC IO Counters:
 Operation       # Requests
 open             15318822
 close            15318813
 cancel           15318813
 write                   0
 read             19237139
 stat                    0
 unlink            2484325
 check_callback  311678364
 queue                   0

The cancel counter is normally equal to the close counter. This is because the close function always calls the cancel function to ensure that any pending I/O operations are ignored.

The write counter is zero because this version of Squid performs writes synchronously, even for aufs .

The check_callback counter shows how many times the main Squid process has checked the done queue for completed operations.

The queue value indicates the current length of the request queue. Normally, the queue length should be less than the number of threads multiplied by 5. For example, with the default 16 threads for a single cache_dir (see Table 8-1), the queue length should normally stay below 80. If you repeatedly observe a queue length larger than this, you may be pushing Squid too hard. Adding more threads may help, but only to a certain point.
