Let's consider a real-life example. The Solaris operating system from Sun Microsystems contains a kernel table called the Directory Name Lookup Cache (DNLC). The DNLC is a kernel cache that matches the name of a recently accessed file with its vnode (a virtual inode, an extra level of abstraction that makes writing interfaces to filesystems easier and more portable), provided the file name isn't too long. Keeping this table in memory means that if a file is opened once by a process, and then opened again within a short period of time, the second open() won't require a directory lookup to retrieve the file's inode. If many of the open()s performed by the system operate on the same files over and over, this strategy can yield a significant performance win.

The DNLC table has a fixed size to ensure that it consumes a reasonable amount of memory. If the table is full and a new file is opened, that file's information is added to the DNLC and an older, less recently used entry is removed to make space for the new data. The size of this table can be set manually using the ncsize variable in the /etc/system file; otherwise, it's derived from MAXUSERS, a general sizing parameter used for most tables on the system, and a variable called max_nprocs, which governs the total number of processes that can run simultaneously on the system. In Solaris 2.5.1, the equation used to determine ncsize was

    ncsize = (max_nprocs + 16 + MAXUSERS) + 64

In Solaris 2.6, this calculation changed to

    ncsize = 4 * (max_nprocs + MAXUSERS) + 320

In Solaris 2.5.1, unless manually set, max_nprocs = 10 + 16 * MAXUSERS; I do not know whether this derivation changed in Solaris 2.6. If MAXUSERS is set to 2048, which is typical for large servers running very large numbers of processes, the DNLC on Solaris 2.5.1 would have 34,906 entries. On Solaris 2.6, using the same kernel tuning parameters, the DNLC could contain 139,624 entries.
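To make the size difference concrete, the arithmetic from the two formulas above can be written out as a small Python sketch; the variable names simply mirror the kernel tunables quoted in the text:

```python
# Reproduce the DNLC sizing arithmetic described in the text.

MAXUSERS = 2048                   # typical for a large server, per the text
max_nprocs = 10 + 16 * MAXUSERS   # Solaris 2.5.1 default derivation

ncsize_251 = (max_nprocs + 16 + MAXUSERS) + 64  # Solaris 2.5.1 formula
ncsize_26 = 4 * (max_nprocs + MAXUSERS) + 320   # Solaris 2.6 formula

print(ncsize_251)  # 34906
print(ncsize_26)   # 139624
```

With identical tuning parameters, the Solaris 2.6 formula yields roughly a fourfold larger cache, which is the seed of the problem described next.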
(In Solaris 8, the calculation of this parameter was changed back to something more similar to the Solaris 2.5.1 method.)

Performance on the new Solaris 2.6 system was horrible. File deletions over the Network File System (NFS) took a very long time to complete, and diagnosing the problem required a great deal of time. As it turns out, for some reason that I still don't fully understand, if one attempts to delete a file over NFS and the DNLC is completely full, the operating system makes a linear traversal of the table to find the appropriate entry. The more entries the table holds, the longer this traversal takes; if it has nearly 140,000 entries, the operation can take considerable time. With the same /etc/system parameters on similar hardware running Solaris 2.5.1, these lookups did not cause a noticeable problem. In my case, a colleague who had encountered this problem before suggested setting ncsize explicitly to a more moderate value (we chose 8192) in the /etc/system file. We then rebooted the system, and performance improved dramatically. This is a pretty exotic example, but it illustrates the following points:
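Why a full table hurts deletions so badly can be sketched with a toy linear scan. The list-of-strings representation below is invented purely for illustration and bears no resemblance to the real kernel structures; only the O(n) cost of the traversal is the point:

```python
# Illustrative only: the cost of linearly scanning a full name cache to
# find one entry. A nearly full 139,624-entry table can require up to
# 139,624 comparisons per delete; an 8,192-entry table caps the scan
# at 8,192.

def remove_entry_linear(entries, name):
    """Scan the table front to back; return the number of comparisons."""
    for i, entry in enumerate(entries):
        if entry == name:
            del entries[i]
            return i + 1
    return len(entries)  # not found: every entry was examined

big = [f"file{i}" for i in range(139624)]
small = [f"file{i}" for i in range(8192)]

print(remove_entry_linear(big, "file139623"))   # worst case: 139624
print(remove_entry_linear(small, "file8191"))   # worst case: 8192
```

Shrinking ncsize to 8192, as described above, bounds the worst-case scan and restored reasonable deletion times.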
This also leads to some obvious conclusions:
This example was not intended to criticize Solaris. Probably every operating system vendor makes comparable changes from release to release, and many similar stories could be told that focus on other vendors. There isn't enough space in this book to cover general troubleshooting methodology, but one aspect should be mentioned because it causes many people difficulty. In trying to focus on a problem, the troubleshooter often assembles a great deal of data. Some of it is relevant to the problem at hand, and some of it is tangential. Determining which data are relevant and which aren't is often the most difficult part of solving a problem. There's no magic to categorizing data in this way; rather, experience and instinct take over. However, when faced with a problem that isn't easily solved, it's often helpful to ask, "What would I think of this problem if any one of the facts involved were removed from the equation?" If one arrives at a conclusion that can be tested, it is often worthwhile to do so, or at least to reexamine the datum in question to make sure it is valid. This sort of analysis is difficult to do well, but in very difficult situations, it can prove a fruitful line of attack.

Another troubleshooting technique is preventive in nature: baselining the system. To understand what's going wrong when the system behaves poorly, it is crucial to understand how the server behaves when it is performing correctly. One cannot overemphasize this point. On a performance-critical server, an administrator should record data using each diagnostic tool that might be employed during a crisis while the server is in the following states:
Then, when the server begins to perform badly, one can determine what has changed on the system. "What is different about the overloaded system from the state where it is heavily loaded but providing quality service?" This is a much easier question to answer than the more abstract, "Why is this server performing poorly?" The complexity of today's operating systems exacerbates this need. On most contemporary operating systems, it's much more difficult than it once was to tell the difference, for example, between normal memory paging activity and desperation swapping. It's difficult to know objectively what a reasonable percentage of output packet errors on a network interface would be. It's difficult to tell objectively how many mail.local processes should be sleeping, waiting for such esoterica as an nc_rele_lock event to wake them up. As with people, many forms of unusual behavior on computer systems can be measured only in relative terms. Without a baseline, this identification can't happen.

Previously, I mentioned how important it is to distinguish information related to a present problem from incidental information. Without a baseline, it can be difficult if not impossible to tell whether a given piece of information is even out of the ordinary. When something goes wrong, we've all encountered something unexpected while looking for the source of the problem and asked ourselves, "Was this always like that?" Baselining reduces the number of times this uncertainty will arise in a crisis, which should lead to faster problem resolution.

Run baseline tests periodically and compare their results against previous test runs. Going the extra mile and performing a more formal trend analysis can prove very valuable, too. It offers two benefits. First, it enables one to spot situations that are slowly evolving into problems before they become noticeable.
Of course, not all changes represent problems waiting to happen, but trend analysis can also spot secular changes in the way a server operates, which may indicate new patterns in user behavior or changes in Internet operation. Second, formal trend analysis allows administrators to become more familiar with the servers they are charged with maintaining, which is unequivocally a good thing. More familiarity means problems are spotted sooner and resolved more quickly. System administrators responsible for maintaining high-performance, critical servers who do not have time to perform these tasks are overburdened. In that case, when something fails, not only will they be unprepared to deal with the crisis, but other important tasks will go unfulfilled elsewhere as a consequence.

In the "old days," many guru-level system administrators could tell how, or even what, the systems in their charge were running by watching the lights blink or listening to the disks spin and heads move. They could feel what was happening in the box. Today's trend toward less obtrusive and quieter hardware has been part and parcel of the considerable improvements made in hardware reliability, which is a good thing. However, through these hardware changes, as well as the aforementioned increase in operating system complexity and the much larger number of machines for which a system administrator is responsible, we've largely lost this valuable feel for the systems we maintain. Now the data on the system state are likely the only window we have into the operational characteristics of these servers. Periodically getting acquainted with the machines we maintain should be considered an investment: it increases the chance of finding problems before they become readily apparent, and it gives us the insight necessary to reduce the time to repair catastrophic problems when they do occur.
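The two benefits of baselining and trend analysis described above can be illustrated with a toy Python sketch: one function flags metrics that have drifted from a recorded healthy baseline, and another fits a least-squares slope to periodic samples so that slow growth becomes visible. All metric names and sample values here are invented examples, not output from any real tool:

```python
# Toy sketch of baselining and trend analysis. Real baselines would come
# from tools such as vmstat, iostat, or netstat, recorded while the
# server is known to be healthy.

def compare_to_baseline(baseline, current, tolerance=0.5):
    """Return metrics that deviate from their baseline value by more
    than `tolerance`, expressed as a fraction of the baseline."""
    drifted = {}
    for metric, base in baseline.items():
        now = current.get(metric)
        if now is None:
            continue
        if base == 0:
            if now != 0:
                drifted[metric] = (base, now)
        elif abs(now - base) / base > tolerance:
            drifted[metric] = (base, now)
    return drifted

def slope(samples):
    """Least-squares slope of evenly spaced samples (units per interval)."""
    n = len(samples)
    mean_x = (n - 1) / 2
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

# "What is different about the overloaded system?" (invented numbers)
healthy = {"scan_rate_pages": 10, "output_pkt_errors": 2, "runq_len": 3}
overloaded = {"scan_rate_pages": 400, "output_pkt_errors": 2, "runq_len": 4}
print(compare_to_baseline(healthy, overloaded))  # {'scan_rate_pages': (10, 400)}

# Slow drift: hypothetical weekly disk-usage percentages from baseline runs.
disk_used = [61, 62, 64, 65, 67, 69, 70]
print(round(slope(disk_used), 2))  # growth in percent per week
```

The point of the sketch is the workflow, not the code: record comparable numbers while the system is healthy, then let the comparison, rather than intuition alone, answer "Was this always like that?"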