What to Tune

One of the difficulties in tuning your Web server is knowing exactly what to tune. For this reason it is vital that you monitor your Web servers before you make any adjustments to settings, hardware, or Web applications, or even upgrade to Windows 2000 and IIS 5.0. This point cannot be emphasized enough: gathering baseline information about your servers will allow you to understand their behavior and refine your Web server performance goals. You can use the Performance Monitor and the Performance Counters built in to the operating system and IIS to establish this baseline. Once you have gathered your baseline data, analyze it to determine what the underlying reasons for performance problems may be before making a change, whether it be adding RAM or adjusting internal IIS settings. Once you've made a change, remember to monitor the servers again. Any change you make may have unforeseen effects on other components in your system.

This topic is broken into seven sections: monitoring your hardware, security, monitoring your Web applications, Web-application tuning, tools you can use to monitor and stress test your system, settings internal to Windows 2000 and IIS 5.0 that affect Web server performance, and tuning and troubleshooting suggestions.

NOTE


All of these issues are interrelated. From upgrading hardware to modifying internal settings, tuning your Web server will require you to carefully monitor how any changes affect the performance of your Web server.

Monitoring Your Hardware

Memory

Problems caused by memory shortages can often appear to be problems in other parts of the system. You should monitor memory first to verify that your server has enough, and then move on to other components. To run Windows 2000 and IIS 5.0, the minimum amount of RAM a dedicated Web server needs is 128 MB, but 256 MB to 1 GB is often better. Additional memory is particularly beneficial to e-commerce sites, sites with a lot of content, and sites that experience a high volume of traffic. Since the IIS File Cache is set to use up to half of the available memory by default, the more memory you have, the larger the IIS File Cache can be.

NOTE


Windows 2000 Advanced Server can support up to 8 GB of RAM and Windows 2000 DataCenter can support up to 32 GB, but the IIS File Cache cannot take full advantage of this unless you partition the system. Partitioning the system may be useful in server consolidation scenarios. With 32 processors, you can partition Windows 2000 DataCenter into eight autonomous machines, each using 1.5 gigabytes of RAM for system cache. Furthermore, if you add the /3GB switch in c:\boot.ini, then inetinfo.exe can address up to 3 GB of memory; otherwise, it's limited to 2 GB of address space. In addition, every instance of dllhost.exe can address up to 2 GB, so if you had a sufficiently large system, the cumulative memory used by all the IIS-related processes could go beyond 4 GB. As ISAPIs and ASPs execute at medium isolation by default, they are in an instance of dllhost separate from inetinfo.exe. In addition, high-isolation applications each have their own dllhost.

The static file cache lives in the inetinfo process and can store up to 1.5 GB of content. The ASP caches live in each process hosting asp.dll. The caches draw upon the total memory available to the process hosting them.

To determine if the current amount of memory on your server will be sufficient for your needs, use the Performance tool (formerly known as PerfMon) that is built in to Windows 2000. The System Monitor, which is part of the Performance tool, graphically displays counter readings as they change over time.
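If you want to capture baseline numbers from a script rather than from the System Monitor UI, many of the same counters can be read through WMI. The sketch below assumes that the WMI performance classes (Win32_PerfRawData_*) have been populated on your server; verify the class and property names (for example, with wbemtest) before relying on them.

    ' sample-memory.vbs -- a minimal sketch for spot-checking memory counters from a script.
    ' Run with: cscript sample-memory.vbs
    Set objWMI = GetObject("winmgmts:\\.\root\cimv2")
    Set colMem = objWMI.ExecQuery("SELECT * FROM Win32_PerfRawData_PerfOS_Memory")
    For Each objMem In colMem
        WScript.Echo "Available Bytes: " & objMem.AvailableBytes
        WScript.Echo "Cache Bytes:     " & objMem.CacheBytes
        ' The .../sec counters in the raw classes are cumulative totals, not rates;
        ' computing a rate requires two samples and the elapsed time between them.
    Next

For anything beyond spot checks, the Performance Logs and Alerts snap-in remains the better choice, because it handles rate calculations and long-running logs for you.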

Also, keep an eye on your cache settings—adding memory alone won't necessarily solve performance problems. You need to be aware of IIS cache settings and how they affect your server's performance. If these settings are inappropriate for the loads placed on your server, they, rather than a lack of memory, may cause performance bottlenecks. For more information about these cache settings, see "IIS Settings" in the "Features and Settings in Windows 2000 and IIS 5.0" section and the "Performance Settings" section. For a discussion about caching with ASP and IIS, see the "ASP Caching" section.

NOTE


When using Performance counters to monitor performance, you can see a description of any counter by selecting that counter in the Add Counters dialog and clicking Explain.

Log the following counters to determine if there are performance bottlenecks associated with memory:

  • Memory: Available Bytes. Try to reserve at least ten percent of memory available for peak use. Keep in mind that IIS 5.0 uses up to 50 percent of available memory for its file cache by default.
  • Memory: Page Faults/sec, Memory: Pages Input/sec, Memory: Page Reads/sec, and Memory: Transition Faults/sec. If a process requests a page in memory and the system cannot find it at the requested location, this constitutes a page fault. If the page is elsewhere in memory, the fault is called a soft page fault (measured by Transition Faults/sec). If the page must be retrieved from disk, the fault is called a hard page fault. Most processors can handle large numbers of soft faults without consequence. However, hard faults can cause significant delays. Page Faults/sec is the overall rate at which the processor handles faulted pages, including both hard and soft page faults. Pages Input/sec is the total number of pages read from disk to resolve hard page faults. Page Reads/sec is the number of times the disk was read to resolve hard page faults. Pages Input/sec will be greater than or equal to Page Reads/sec and can give you a good idea of your hard page fault rate. If these numbers are low, your server should be responding to requests quickly. If they are high, it may be because you've dedicated too much memory to the caches, not leaving enough memory for the rest of the system. You may need to increase the amount of RAM on your server, though lowering cache sizes can also be effective.
  • Memory: Cache Bytes, Internet Information Services Global: File Cache Hits %, Internet Information Services Global: File Cache Flushes, and Internet Information Services Global: File Cache Hits. The first counter, Memory: Cache Bytes, reveals the size of the File System Cache, which is set to use up to 50 percent of available physical memory by default. Since IIS automatically trims the cache if it is running out of memory, keep an eye on the direction in which this counter trends. The second counter is the ratio of cache hits to total cache requests and reflects how well the settings for the IIS File Cache are working. For a site largely made up of static files, 80 percent or more cache hits is considered a good number. Compare logs for the last two counters, IIS Global: File Cache Flushes and IIS Global: File Cache Hits, to determine if you are flushing objects out of your cache at an appropriate rate. If flushes are occurring too quickly, objects may be flushed from cache more often than they need to be. If flushes are occurring too slowly, memory may be wasted. See the ObjectCacheTTL, MemCacheSize, and MaxCachedFileSize objects in the "Performance Settings" section.
  • Page File Bytes: Total. This counter reflects the size of the paging file(s) on the system. The larger the paging file, the more memory the system commits to it. Windows 2000 itself creates a paging file on the system drive; you can create a paging file on each logical disk, and you can change the sizes of the existing files. In fact, striping a paging file across separate physical drives improves paging file performance. (Use drives that do not contain your site's content or log files.) Remember that the paging file on the system drive should be at least twice the size of physical memory so that the system can write the entire contents of RAM to disk if a crash occurs.
  • Memory: Pool Paged Bytes, Memory: Pool Nonpaged Bytes, Process (inetinfo): Virtual Bytes, Process (dllhost#n): Virtual Bytes, Process (inetinfo): Working Set, and Process (dllhost#n): Working Set. Memory: Pool Paged Bytes and Memory: Pool Nonpaged Bytes monitor the pool space for all processes on the server. The Virtual Bytes counters monitor the amount of virtual address space reserved directly by IIS 5.0, either by the Inetinfo process (in which the core of IIS runs) or by the Dllhost processes (in which isolated or pooled applications run) instantiated on your server. The Working Set counters measure the number of memory pages used by each process. Be sure that you monitor counters for all instances of Dllhost on your server; otherwise, you will not get an accurate reading of pool space used by IIS. The system's memory pools hold objects created and used by applications and the operating system. The contents of the memory pools are accessible only in privileged mode. That is, only the kernel of the operating system can directly use the memory pools; user processes cannot. On servers running IIS 5.0, threads that service connections are stored in the nonpaged pool along with other objects used by the service, such as file handles and sockets.

Besides adding more RAM, try the following techniques to enhance memory performance: improve data organization, try disk mirroring or striping, replace CGI applications with ISAPI or ASP applications, enlarge paging files, change the frequency of the IIS File Cache Scavenger, disable unnecessary features or services, and change the balance of the File System Cache to the IIS 5.0 Working Set. The last of these techniques is detailed later in this appendix.

For a detailed list of Windows 2000 and IIS 5.0 settings that will affect these counter numbers, see the "Performance Settings" section.

Processor Capacity

With users demanding quick response time from Web sites and the increasing amount of dynamically generated content on these sites, a premium is placed on fast and efficient processor usage. Bottlenecks occur when one or more processes consume practically all of the processor time. This forces threads that are ready to be executed to wait in a queue for processor time. Adding other hardware, whether memory, disks or network connections, to try to overcome a processor bottleneck will not be effective and will frequently only make matters worse.

IIS 5.0 on Windows 2000 Server scales effectively across two to four processors, and with a little additional tuning scales well on eight processors. (See the "Tips for Getting the Most Out of an 8-Processor Machine" section.) Consider the business needs of your Web sites if you're thinking of adding more processors. For example, if you host primarily static content on your server, a two-processor computer is likely to be sufficient. If you host dynamically generated content, a four-processor setup may solve your problems. However, if the workload on your site is sufficiently CPU-intensive, no single computer will be able to keep up with requests. If this is the case for your site, you should scale it out across multiple servers. If you already run your site on multiple servers, consider adding more.

You should be aware, however, that the biggest performance gains with Windows 2000 and IIS 5.0 result from resolving inadequate memory issues. Before you make any decisions about changing the number of processors on your Web servers, rule out memory problems and then monitor the following Performance Counters.

  • System: Processor Queue Length. This counter displays the number of threads waiting to be executed in the queue that is shared by all processors on the system. If this counter has a sustained value of two or more threads, you have a processor bottleneck on your hands.
  • Processor: % Processor Time. Processor bottlenecks are characterized by situations in which Processor: % Processor Time numbers are high while the network adapter card and disk I/O remain well below capacity. On a multiprocessor computer, it's a good idea to examine the Processor: % Processor Time counter for each processor to pick up any imbalance.
  • Thread (Inetinfo/thread-number): Context Switches/sec, Thread (Dllhost/thread-number#process-instance): Context Switches/sec, and System: Context Switches/sec. If you decide to increase the size of any of the thread pools, you should monitor the three counters listed here. Increasing the number of threads may increase the number of context switches to the point where performance decreases instead of increases. Ten context switches or more per request is quite high; if these numbers appear, consider reducing thread pool size. Balancing threads against overall performance as measured by connections and requests can be difficult. Any time you tune threads, follow up with overall performance monitoring to see if performance increases or decreases. To determine if you should adjust the thread count, compare the number of threads and the processor time for each thread in the process to the total processor time. If the threads are constantly busy, but are not fully using the processor time, performance may benefit from creating more threads. However, if all the threads are busy and the processors are close to their maximum capacity, you are better off distributing the load across more servers rather than increasing the number of threads. See also the AspThreadGateEnabled and AspProcessorThreadMax metabase properties in the "Performance Settings" section.
  • Processor: Interrupts/sec and Processor: % DPC Time. Use these counters to determine how much time the processor is spending on interrupts and deferred procedure calls (DPCs). These two factors can be another source of load on the processor. Client requests can be a major source of each. Some new network adapter cards include interrupt moderation, which accumulates interrupts in a buffer when the level of interrupts becomes too high.

Scaling Out Across Multiple Computers

If processor problems persist, try scaling your site out across multiple computers using Network Load Balancing (NLB), a hardware load balancer, or Microsoft's deployment and management tool, Application Center 2000. While setting up a Web farm using one of these methods adds a layer of complexity and introduces a number of other issues, it is likely to solve a number of your performance issues if your site is large enough. For more information about NLB, see Appendix B in the Application Center 2000 Resource Kit, "Network Load Balancing Technical Overview."

Network Capacity, Latency, and Bandwidth

Essentially, the network is the line through which clients send requests to your server. The time it takes for those requests and responses to travel to and from your server is one of the largest limiting factors in user-perceived server performance. This request-response cycle time is called latency, and latency is almost exclusively out of your control as a Web server administrator. For example, there is little you can do about a slow router on the Internet or the physical distance between a client and your server.

On a site consisting primarily of static content, network bandwidth is the most likely source of a performance bottleneck. Even a fairly modest server can completely saturate a T3 connection (45 Mbps) or a 100 Mbps Fast Ethernet connection. You can mitigate some of these issues by tuning the connection you have to the network and maximizing your effective bandwidth as best you can.

The simplest way to measure effective bandwidth is to determine the rate at which your server sends and receives data. There are a number of Performance Counters that measure data transmission in many components of your server. These include counters on the Web, FTP, and SMTP services, the TCP object, the IP object, and the Network Interface object. Each of these reflects a different Open Systems Interconnection (OSI) layer. For a detailed list of these counters and their analysis, see the Internet Information Services 5.0 Resource Guide, released with the Windows 2000 Server Resource Kit. In particular, see the Network I/O section of the "Monitoring and Tuning Your Server" chapter. To start, however, use the following counters:

  • Network Interface: Bytes Total/sec. To determine if your network connection is creating a bottleneck, compare the Network Interface: Bytes Total/sec counter to the total bandwidth of your network adapter card. To allow headroom for spikes in traffic, you should usually be using no more than 50 percent of capacity. If this number is very close to the capacity of the connection, and processor and memory use are moderate, then the connection may well be a problem.
  • Web Service: Maximum Connections and Web Service: Total Connection Attempts. If you are running other services on the computer that also use the network connection, you should monitor the Web Service: Maximum Connections and Web Service: Total Connection Attempts counters to see if your Web server can use as much of the connection as it needs. Remember to compare these numbers to memory and processor usage figures so that you can be sure that the connection is the problem, not one of the other components.

See the "Tuning and Troubleshooting Suggestions" section later in this appendix for suggestions on how to reduce bandwidth usage by reducing file sizes and by enabling proxy and client caching.

Disk Optimization

Since IIS 5.0 writes logs to disk, there is regular disk activity even with 100 percent client cache hits. Generally speaking, if there is high disk read or write activity other than logging, other areas of your system need to be tuned. For example, hard page faults cause large amounts of disk activity, but they are indicative of insufficient RAM.

Accessing memory is faster than disk seeks by a factor of roughly 1 million (nanoseconds versus milliseconds); clearly, searching the hard disk to fill requests will degrade performance. The type of site you host can have a significant impact on the frequency of disk seeks. If your site has a very large file set that is accessed randomly, if the files on your site tend to be very large, or if you have a very small amount of RAM, then IIS is unable to maintain copies of the files in RAM for faster access.

Typically, you will use the Physical Disk counters to watch for spikes in the number of disk reads when your server is busy. If you have enough RAM, most connections will result in cache hits unless you have a database stored on the same server, and clients are making dissimilar queries. This situation precludes caching. Be aware that logging can also cause disk bottlenecks. If there are no obvious disk-intensive issues on your server but you see lots of disk activity anyway, you should immediately check the amount of RAM on your server to make sure you have enough memory.

To determine the frequency of disk access, log the following counters:

  • Processor: % Processor Time, Network Interface: Bytes Total/sec, and PhysicalDisk: % Disk Time. If all three of these counters have high values, then the hard disk is not causing a bottleneck for your site. However, if % Disk Time is high while the processor and network connection are not saturated, then the hard disk may be creating a bottleneck. If the Physical Disk performance counters are not enabled on your server, open a command line and use the diskperf -yd command.

Security

Balancing performance with users' concerns about the security of your Web applications is one of the most important issues you will face, particularly if you run an e-commerce Web site. Since secure Web communication requires more resources than nonsecure Web communications, it is important that you know when to use various security techniques, such as the SSL protocol or IP address checking, and when not to use them. For example, your home page or a search results page most likely doesn't need to be run through SSL. However, when a user goes to a checkout or purchase page, you will want to make sure that page is secure.

If you do use SSL, be aware that establishing the initial connection is five times as expensive as reconnecting by using security information in the SSL session cache. The default timeout for the SSL session cache has been changed from two minutes in Microsoft Windows NT 4.0 to five minutes in Windows 2000. Once this data is flushed, the client and server must establish a completely new connection. If you plan on supporting long SSL sessions, consider lengthening this timeout with the ServerCacheTime registry setting (described in the "Performance Settings" section). If you expect thousands of users to connect to your site by using SSL, a safer approach is to estimate how long you expect SSL sessions to last, and then set the ServerCacheTime parameter to slightly longer than your estimate. Do not set the timeout much longer than this or your server may leave stale data in the cache. Also, make sure that HTTP Keep-Alives are enabled (on by default). SSL sessions do not expire when used in conjunction with HTTP Keep-Alives unless the browser explicitly closes the connection.
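As a sketch only: the ServerCacheTime value is typically documented as a DWORD expressed in milliseconds under the SCHANNEL key, but confirm the exact key path and units in the "Performance Settings" section before applying a change like the following.

    ' set-ssl-cache-time.vbs -- lengthen the SSL session cache timeout to 10 minutes.
    ' The key path and the use of milliseconds are assumptions to verify first.
    Set objShell = CreateObject("WScript.Shell")
    objShell.RegWrite _
        "HKLM\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\ServerCacheTime", _
        600000, "REG_DWORD"   ' 10 minutes, expressed in milliseconds
    WScript.Echo "ServerCacheTime written; the change takes effect after a restart."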

All security techniques impose performance costs. In addition, Windows 2000 and IIS 5.0 security services are integrated with a number of operating system services, so you can't monitor security features separately from other aspects of those services. Instead, the most common way to measure security overhead is to run tests comparing server performance with and without a security feature. The tests should be run with fixed workloads and a fixed server configuration so that the security feature is the only variable. During the tests, you probably want to measure the following:

  • Processor Activity and the Processor Queue—Authentication, IP address checking, SSL protocol, and encryption schemes are security features that require significant processing. You are likely to see increased processor activity, both in privileged and user mode, and an increase in the rate of context switches and interrupts. If the processors in the server are not sufficient to handle the increased load, queues are likely to develop. Custom hardware, such as cryptographic accelerators, may help here.
  • Secure Sockets Layer—If the SSL protocol is being used, lsass.exe may consume a surprising amount of CPU time, because SSL processing occurs in that process. This means that administrators used to monitoring CPU usage in Windows NT may see less processor time consumed by Inetinfo.exe and more consumed by lsass.exe.
  • Physical Memory Used—Security requires that the system store and retrieve more user information.
  • Network Traffic—You are also likely to see an increase in traffic between the IIS 5.0-based server and the domain controller used for authenticating logon passwords and verifying IP addresses.
  • Latency and Delays—The most obvious performance degradation resulting from complex security features like SSL is the time and effort involved in encryption and decryption, both of which use lots of processor cycles. Downloading files from servers using the SSL protocol can be 10 to 100 times slower than from servers that are not using SSL.

If a server running IIS 5.0 is also working as a domain controller, the processor time, memory, and network and disk activity consumed by domain services are likely to increase the load on these resources significantly. The increased activity can be enough to prevent IIS 5.0 services from running efficiently. For enhanced performance and for security reasons, it is highly recommended that you refrain from running a high-traffic Web server on a domain controller. For more information, see the Domain Controller topic in your Windows online documentation.

Monitoring Your Web Applications

Upgrading a poorly written application to one that is well designed and has been thoroughly tested can improve performance dramatically (sometimes as much as thirtyfold). Keep in mind, however, that your Web applications may be affected by back-end latencies (for example, legacy systems such as AS/400). Remote data sources may cause performance problems for any number of reasons. If developers design applications to get data from another Web site and that Web site crashes, it can cause a bottleneck on your server. If applications are accessing a remote Microsoft SQL Server database, the database may have problems keeping up with requests sent to it. While you may be the administrator of your site's SQL database, it can be difficult to monitor these servers if they are remotely located. Worse, you may have no control over the database servers, or other back-end servers. If you can, monitor the back-end servers that work with your applications and keep them as well tuned as you do your Web server.

To determine if your Web applications are creating a bottleneck on your server, monitor the following performance counters:

  • Active Server Pages: Requests/Sec, Active Server Pages: Requests Executing, Active Server Pages: Request Wait Time, Active Server Pages: Request Execution Time, and Active Server Pages: Requests Queued. If you are running ASP applications on your server, these counters can provide you with a picture of how well the applications are performing. Active Server Pages: Requests/Sec does not include requests for static files or other dynamic content and will fluctuate considerably based on the complexity of the ASP pages and the capacity of your Web server. If this counter is low during spikes in traffic on your server, your applications may be causing a bottleneck. Requests Executing indicates the number of requests currently executing (that is, the number of active worker threads); Request Wait Time indicates the number of milliseconds the most recent request waited in the queue, and Request Execution Time indicates how many milliseconds the most recent request took to execute. Ideally, Requests Queued and Request Wait Time should remain close to zero, but they will go up and down under varying loads. The maximum number for Requests Queued is determined by the metabase setting for AspRequestQueueMax. If the limit is reached, client browsers will display an "HTTP 500 - Server Too Busy" error. If these numbers deviate a great deal from their expected range, your ASP applications will likely need to be rewritten to improve performance. Request Execution Time can be somewhat misleading because it is not an average. For example, if you regularly receive 30 requests for a page that executes in 10 milliseconds (ms) to every one request for a 500-ms page, the counter is likely to indicate 10 ms, although the average execution time is over 25 ms. It's hard to say what a good value for Requests Executing is. If pages execute quickly and don't wait for I/O (loading a file or making a database query), this number is likely to be low (little more than the number of processors when the machine is busy). If pages must wait for I/O, the number of pages executing is likely to be higher (anything up to AspProcessorThreadMax multiplied by the number of processors multiplied by the number of processes hosting ASP). If Requests Executing is high, Requests Queued is large, and the CPU utilization is low, you may need to increase AspProcessorThreadMax. When enabled, thread gating seeks to optimize Requests Executing. (See the "IIS Settings" section.) The user's response time is proportional to Request Wait Time plus Request Execution Time plus network latency.

NOTE


These counters are the sum of the ASP performance counters for each process hosting ASP, and there is no way to break them down by process.

  • Web Service: CGI Requests/sec and Web Service: ISAPI Extension Requests/Sec report the rates at which your server is processing CGI and ISAPI application requests. If these values drop while under increasing loads, you may need to have the application developers revisit their code.

NOTE


ASP is an ISAPI extension, so ASP requests are included in the second counter.

  • Web Service: Get Requests/sec and Web Service: Post Requests/Sec reflect the rate at which these two common HTTP request types are being made to your server. POST requests are generally used for forms and are sent to ISAPIs (including ASP) or CGIs. GET requests account for almost all other requests from browsers and include requests for static files, requests for ASPs and other ISAPIs, and CGI requests.

Tuning Your Web Applications

IIS 5.0 is typically very efficient at serving static HTML pages with the out-of-the-box settings. If your site hosts primarily static content, many performance problems can be hardware related. IIS 5.0 offers improved performance for Web applications, but some additional tuning may be required to optimize performance. Of course, issues about best practices in Web application design and coding remain, regardless of improvements in server software. While this appendix does not attempt to discuss the intricacies of tuning your Web applications, this section does provide some pointers and recommendations to make them perform faster. Consider the following suggestions while planning and testing your Web applications before running them on your production servers.

First of all, ISAPI applications run faster than Active Server Pages (ASP) applications, although there is much less developer overhead for ASP. Both of these types of applications run faster than equivalent CGI applications.

Second, you should use static files wherever you can because they don't have the processing load or cause the disk activity that dynamic files do. In conjunction with this, your applications should push as much of the processing load onto the client as possible in order to avoid network latencies. This also saves on server-side resources and allows changes to appear instantaneously to the user. A common example is adding client-side code to validate that forms have been filled out with good data, such as checking that email addresses are well formed or credit card numbers have a correct checksum.

Another tactic is to make sure that ASP debugging is turned off on your production servers. If debugging has been enabled, set the AppAllowDebugging metabase property to FALSE to turn it off. For more information, see the "Performance Settings" section.
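The following is a minimal sketch of turning debugging off from a script, assuming the default Web site's root application (IIS://localhost/W3SVC/1/Root); adjust the path for your own sites and applications.

    ' disable-asp-debugging.vbs -- turn off server-side and client-side ASP debugging.
    Set objApp = GetObject("IIS://localhost/W3SVC/1/Root")
    objApp.AppAllowDebugging = False      ' server-side ASP debugging
    objApp.AppAllowClientDebug = False    ' client-side debugging
    objApp.SetInfo
    WScript.Echo "ASP debugging disabled for " & objApp.ADsPath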

Set Expires headers for all images and for HTML wherever possible to allow both to be stored in the client's cache. For more information, see the "Tuning and Troubleshooting Suggestions" section later in this appendix.
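For ASP pages, one straightforward way to do this is through the Response object (either the Response.AddHeader method mentioned later in this appendix or the expiration properties shown here). The values below are only examples; choose expiration times that match how often your content actually changes.

    <%
      ' Allow this page to be cached by browsers (and proxies) for one day.
      Response.Expires = 1440             ' minutes until the response expires
      Response.CacheControl = "public"    ' permit proxy caching as well
    %>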

Do not store apartment-threaded components in ASP Application and Session state. This includes all Microsoft Visual Basic components, but not Java or most C++ objects.

Use Secure Sockets Layer (SSL) only when necessary. Using the HTTPS protocol is much more expensive than standard HTTP. Be sure that the information being sent, such as credit card numbers or medical information, is sensitive enough to warrant the added expense. For more information about security tuning issues, see the "Security" section earlier in this appendix.

Process isolation will affect Web application performance as well. IIS 5.0 Web applications run in the out-of-process pool (medium protection) by default. It is safer to take the performance impact of process isolation than to risk server downtime and data loss that can be caused by a low-isolation application crashing the Inetinfo process. For a more in-depth discussion of this topic, see the "Process Isolation" section later in this appendix.

To enhance database-driven performance in a production environment, use Microsoft SQL Server 2000. Because both IIS and SQL Server perform best with plenty of memory, try storing the database on a separate server from the Web service. In this situation, communication across computer boundaries is frequently faster than communication on a single, overloaded computer: performance degradation due to a lack of memory and insufficient processor cycles often occurs when both SQL Server and IIS reside on the same server. Also, be sure to create and maintain good indexes, which minimize I/O on your database queries. Last but not least, take advantage of stored procedures. They take much less time to execute and are easier to write than an ASP script designed to do the same task.
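The following ASP fragment is a sketch of calling a stored procedure through ADO rather than building the SQL statement in script; the connection string, procedure name, and parameter are placeholders for illustration.

    <%
      Const adCmdStoredProc = 4
      Const adInteger = 3
      Const adParamInput = 1

      Set cn = Server.CreateObject("ADODB.Connection")
      cn.Open "Provider=SQLOLEDB;Data Source=DBSERVER;Initial Catalog=Store;" & _
              "User ID=webuser;Password=********"    ' placeholder connection string

      Set cmd = Server.CreateObject("ADODB.Command")
      Set cmd.ActiveConnection = cn
      cmd.CommandText = "usp_GetOrdersByCustomer"     ' hypothetical stored procedure
      cmd.CommandType = adCmdStoredProc
      cmd.Parameters.Append cmd.CreateParameter("@CustomerID", adInteger, adParamInput, , 42)

      Set rs = cmd.Execute
      Do While Not rs.EOF
        Response.Write rs("OrderID") & "<br>"
        rs.MoveNext
      Loop
      rs.Close
      cn.Close
    %>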

As a rule of thumb, if you have an ASP script that's more than 100 lines long (counting lines of code in files brought in using the #include directive), consider creating a COM+ component to provide the same function. If written efficiently and debugged properly, COM+ components can offer 20 to 30 times the processing speed of a script for the same dynamic page. The easiest way to measure the size of an ASP script with #includes is to change the file extension of the page from .asp to .stm and open the .stm file with your browser. Use your browser's View Source command to display the .asp file and lines of code from the included files.

For you to maximize performance for your dynamic Web applications, it is very important to stress test your applications before you make them live on your site. If your production Web server is a multiprocessor system, it is important to stress test on a multiprocessor system. This will help identify multiprocessor scaling problems and race conditions in your script and components. A very good tool for doing this is the Web Application Stress (WAS) tool, which can be downloaded from the Microsoft Web Application Stress Tool site (http://webtool.rte.microsoft.com/). Included at this site are a tutorial and a knowledge base dedicated to the tool. WAS is also included on the Windows 2000 Resource Kit companion CD.

For more information about tools for measuring the performance of your Web servers and applications, see the "Tools to Monitor and Test Server Performance" section. For a list of links and references about Web application performance and the tools to test that performance, see the "Resources" section at the end of this appendix.

Tools to Monitor and Test Server Performance

To support your performance tuning and testing needs, Microsoft offers a number of tools: some included with Windows 2000 and IIS 5.0, others offered on the Windows 2000 Resource Kit CD, and still others downloadable from the Microsoft Web site. The System Monitor (formerly known as PerfMon) is built in to Windows 2000 and is essential to monitoring nearly every aspect of server performance. Process and Thread Status (pstat.exe) shows the status of all running processes and threads. Process Tree (ptree.exe) allows you to query the process inheritance tree and kill processes on local or remote computers. These tools are available on the Windows 2000 Server Resource Kit companion CD. The HTTP Monitoring Tool, available on the companion CD to the Windows 2000 Resource Kit, monitors HTTP activity on your servers and can notify you if there are changes in the amount of activity. Network Monitor is a Windows 2000 administrative tool you can use to keep tabs on network traffic. It is not installed by default, but you can install it by using the Add/Remove Programs feature of the Control Panel.

NOTE


A lightweight version of Network Monitor comes with Windows 2000 Server; the full-featured version is available with Microsoft Systems Management Server. NetStat is a command-line tool that displays information about your server's current network connections.

At the center of these tools are the Performance Counters that are built into IIS 5.0 and the Windows 2000 operating system. Developers can also include custom Performance Counters in the ISAPI DLLs or COM components that they write. These counters can be read directly by a number of the tools mentioned above, including System Monitor, the Web Application Stress Tool, and WCAT. A number of these counters have been mentioned throughout this document; it is important to know which are pertinent to your monitoring and testing requirements.

System Monitor is the single most important tool to establish a baseline of performance on your Web server and monitor the effects on performance of any changes you make to software or hardware. System Monitor provides a UI that allows you to see performance counter readings whether you are monitoring or logging them. It also allows you to graphically log counter activity and set alerts that will appear in Event Viewer. System Monitor provides documentation for each counter in your system.

The Web Application Stress tool is designed specifically to simulate multiple browsers requesting pages from a Web site. You can use this tool to gather information about the performance and stability of your Web applications and about how your servers are performing. This tool simulates a large number of requests with a relatively small number of client machines. The goal is to create an environment as similar to a production environment as possible. This allows you to find and eliminate problems in your Web server and applications prior to deploying them on your production servers.

For more information on any of these tools, see the online IIS 5.0 documentation included in the Windows 2000 Resource Kit. Links to other sources of information are included in the "Resources" section.

Features and Settings in Windows 2000 and IIS 5.0

If you currently have a well-tuned Web site running on Windows NT Server 4.0 with IIS 4.0, that site should perform well on Windows 2000 Server and IIS 5.0. For more information, see the white paper, "Windows 2000 Performance: An Overview," which is located at http://www.microsoft.com/windows2000/guide/platform/performance/overview.asp. You will want to monitor your server and site as you make the transition. You should be aware of some new features in Windows 2000 and IIS 5.0 that are designed for better performance and ease of administration. In addition, some default settings have changed from IIS 4.0 to IIS 5.0. This section discusses these features and changes.

Setting Windows 2000 as an Application Server

If you plan to use your server primarily as a Web server, setting up your server computer as an application server is a quick way to improve performance. This allows you to take advantage of better SMP scalability, improved networking performance, and support for more physical memory for your Web applications. In addition, you can use the transaction-processing capabilities of COM+ as a transaction monitor to improve performance of database applications. Windows 2000 Server installs as a file server by default, so you should make sure to select the application server during the installation process. If you don't, however, it is easy to configure your server as an application server after installation. To do so:

  1. Click Start, point to Settings, and click Network and Dial-up Connections.
  2. Select Local Area Connection, and open its properties.
  3. Select File and Printer Sharing for Microsoft Networks, and open its properties.
  4. On the Server Optimization tab, select Maximize data throughput for network applications.

This configuration will not take effect until you reboot the server.
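If you administer many servers, you may prefer to script the change instead of stepping through the dialog. The mapping below (LargeSystemCache = 0 for network applications, together with the LanmanServer Size parameter) is an assumption based on how this option is commonly documented; verify it on a test machine before relying on it.

    ' app-server-tuning.vbs -- a hedged sketch of the registry values that the
    ' "Maximize data throughput for network applications" option is reported to set.
    Set objShell = CreateObject("WScript.Shell")
    objShell.RegWrite _
        "HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\LargeSystemCache", _
        0, "REG_DWORD"   ' favor application working sets over the file system cache
    objShell.RegWrite _
        "HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters\Size", _
        3, "REG_DWORD"   ' Server service memory profile; 3 = maximize throughput
    WScript.Echo "Values written; reboot the server for the change to take effect."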

IISReset Utility

IIS 5.0 offers a number of new features and default settings to help make Web sites that run on it more reliable and easier to administer. The first of these is the new IISReset.exe, a utility that allows you to stop and restart IIS services without rebooting your computer. By default, IISReset will restart your services if they fail. You can also use IISReset to remotely start, stop, or pause your services, and to reboot your server computer if necessary. You should reboot only as a last resort. If you restart your Web service with IISReset, users will experience a small pause, during which they have only to hit refresh to get a new page. If the entire computer is rebooted, the unavailability is longer. You can also isolate the services that you stop. For example, if you are running an SMTP server on the same computer as your Web server, you can choose to simply stop and restart your Web service rather than taking down the SMTP services as well.

You should be aware that frequent reboots and resets will compromise the integrity of your performance data. Also, if you are using IISReset to restart services automatically, the restarts can mask an underlying problem, so you should always monitor the Event Log for restarts.

IIS Settings

The AspProcessorThreadMax metabase property has changed. Formerly called ProcessorThreadMax and stored in the registry in IIS 4.0, its default value was 10. The new default value in IIS 5.0 is 25. This setting is per processor and per process. On a dual-processor system, the number of worker threads in each process can be up to twice as high as the AspProcessorThreadMax value, or up to 50 worker threads (with the default settings). If you are running several high-isolation ASP applications, each process will have an independent set of worker threads.

NOTE


ASP starts out with a number of worker threads that is equal to the number of processors plus seven. It creates more threads when the size of the ASP request queue passes certain thresholds.

The AspThreadGateEnabled property has been added to the metabase. It is off by default. If you turn this property on, IIS performs thread gating, which dynamically controls the number of concurrently executing threads in response to varying load conditions. When processor utilization drops below 50 percent, which could indicate that threads are blocked (for example, while waiting for an external database to return the results of a query) or simply that the load is light, IIS 5.0 increases the number of active threads so that other requests can be serviced in a timely manner. When processor utilization exceeds 80 percent, indicating a heavy load, IIS 5.0 deactivates threads to reduce the amount of context switching. Both lower and upper limits can be set: AspThreadGateLoadLow defaults to 50 percent, while AspThreadGateLoadHigh defaults to 80 percent. Regardless of the value of AspThreadGateEnabled, an ASP process will never have more worker threads than the number of processors multiplied by AspProcessorThreadMax.
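You can change both properties at the service level with a short ADSI script; the values shown are just the IIS 5.0 default and an experimental setting, so substitute numbers that your own testing supports.

    ' asp-thread-settings.vbs -- adjust ASP threading at the W3SVC level.
    Set objW3SVC = GetObject("IIS://localhost/W3SVC")
    objW3SVC.AspProcessorThreadMax = 25     ' threads per processor, per process (IIS 5.0 default)
    objW3SVC.AspThreadGateEnabled = True    ' turn thread gating on for a test run
    objW3SVC.SetInfo
    WScript.Echo "ASP thread settings updated; restart the Web service to apply them."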

For sites that do a lot of ASP processing, it is best to test performance with thread gating turned on and with it turned off to see what the effects are. Make your final decision based on your observations. For sites that are made up primarily of static files, turn the setting on and monitor server performance to see if throughput and response time are improved.

IIS 5.0 has also changed the default behavior of the ASP Template Cache. In IIS 4.0, the ASP Template Cache limit defaulted to –1. With this setting, this cache could grow arbitrarily large. On Web sites with lots of ASP content, the ASP Template Cache tended to fill all of the RAM in the server. In contrast, the IIS 5.0 default limit is 250 files. Because each site has its own requirements, you should reset the limit to meet your site's particular needs. Perhaps the easiest way to accomplish this is to monitor performance as you increase and decrease the value. Because an entry in this cache can be pointed to by one or more entries in the ASP Script Engine Cache, and because best performance occurs if the scripts in ASP pages are found in the ASP Script Engine Cache, you should never set the limit on the ASP Template Cache to zero. Doing so prevents any hits on the ASP Script Engine Cache, because the ASP Script Engine Cache entry for a particular .asp file can only be referenced through its template. Thus, if no templates are cached, the ASP Script Engine Cache is rendered useless. ASP Script Engine Cache hits provide better performance than hits on the ASP Template Cache, so if you make ASP Script Engine Cache hits impossible, performance suffers badly unless all your pages are static. From IIS 4.0 to IIS 5.0, the ASP Script Engine Cache limit has been upped from 30 to 125 files. To determine if you need to change your cache settings, you should keep an eye on response times, the number of ASP requests in the queue, the number of context switches, and the amount of CPU utilization.

NOTE


The ASP Script Engine Cache setting should be at least equal to one more than the number of CPUs on your server, multiplied by the AspProcessorThreadMax setting. The default value is probably too small for most eight-processor systems.

Also, you should consider adjusting the default settings for the IIS File Cache. You can add these settings to the registry to modify IIS 5.0 default behavior. The first setting you should consider adding is the MemCacheSize object; if this is not present in the registry, the default behavior is to allow the cache to grow to a maximum of half the available physical memory. This ensures that IIS interacts well with other applications on machines that are not dedicated Web servers. Try upping this limit (specified in MB) and monitoring performance to see if there are any gains. The second registry object you should consider adding is MaxCachedFileSize. The IIS default behavior is to allow a maximum file size of 256 KB in the cache. If you have a site that has several large JPEG files that are accessed regularly, you may want to experiment with bumping this limit up to determine if caching files larger than 256 KB will work for your site. But be aware that if file sizes are around 200 to 300 KB, you'll reach a point of diminishing returns when caching them. For smaller files, the overhead of reading from disk rather than the IIS File Cache is significant. For larger files, you won't get much performance improvement; you're more likely to just waste memory. IIS regularly purges from the cache files that have not been requested recently (within the last 30 seconds, by default). The threshold is determined by the ObjectCacheTTL (TTL stands for Time To Live) registry setting; by default this is not present in the registry. If you have plenty of memory, it may be effective to adjust this TTL upwards.
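A minimal sketch of adding these values from a script follows; the key path and units shown here are assumptions drawn from the descriptions above, so confirm them against the "Performance Settings" section before using this on a production server.

    ' file-cache-settings.vbs -- add IIS File Cache overrides to the registry.
    Const IIS_PARAMS = "HKLM\SYSTEM\CurrentControlSet\Services\InetInfo\Parameters\"
    Set objShell = CreateObject("WScript.Shell")
    objShell.RegWrite IIS_PARAMS & "MemCacheSize", 512, "REG_DWORD"         ' cache limit, in MB
    objShell.RegWrite IIS_PARAMS & "MaxCachedFileSize", 524288, "REG_DWORD" ' largest cacheable file, in bytes (512 KB)
    objShell.RegWrite IIS_PARAMS & "ObjectCacheTTL", 120, "REG_DWORD"       ' seconds an unused entry stays cached
    WScript.Echo "Settings written; restart IIS (for example, with IISReset) to apply them."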

For a discussion of how IIS and ASP use caches to process incoming requests, see the "ASP Caching" section later in this appendix.

Process Isolation

IIS 4.0 introduced the concept of running Web applications out of process. This feature created greater stability for Web servers, but at a significant performance cost. In IIS 5.0, the performance of out-of-process applications has improved, especially for ASP. Some performance degradation remains, however, in comparison with IIS 5.0 in-process applications. In addition to improved performance, the concept of running applications out of process has been expanded. You can now run Web applications in a pooled out-of-process environment.

Applications that run in the Web services process (Inetinfo.exe) result in higher performance, but there is a greater risk that a misbehaving application can make the Web services unavailable. The recommended configuration is to run Inetinfo.exe in its own process, run mission-critical applications in their own processes (high protection), and run remaining applications in a shared, pooled process (medium protection). For the best performance and reliability, run ASP applications in medium protection and configure any COM+ components as library applications, not server applications.

If you decide to run your application as a separate process, or with other applications in a single pooled process, you will need to select High (Isolated) or Medium (Pooled) from the Application Protection drop-down list on the Home Directory or Virtual Directory property sheet. You should first create an application directory and designate it as either a Home Directory or Virtual Directory, if you haven't already done so. By default, all new applications are run in medium protection. You can run a very large number of applications at medium isolation, but you will only be able to run a few dozen applications at high isolation, because each process consumes significant resources.

For more information about these registry settings and metabase properties, see the "Performance Settings" section. For more information about the features mentioned in this section, see the IIS 5.0 and Windows 2000 online documentation.

Tuning and Troubleshooting Suggestions

If you determine that you need to address specific hardware-driven performance issues, consider using the following suggestions.

  • Upgrade to Larger L2 Caches. If you determine that you need to add or upgrade processors, choose processors with a large secondary (L2) cache. Server applications, such as IIS, benefit from a large processor cache because their instruction paths involve many different components and they need to access a lot of data. A large processor cache (2 MB or more if it is external, up to the maximum available if it is on the CPU chip) is recommended to improve performance on active servers running IIS 5.0.
  • Upgrade to Faster CPUs. Web applications particularly benefit from faster processors.
  • Set Aggressive Connection Timeouts. To combat network latency as much as you can, set aggressive connection timeouts. This is particularly important if you run a high traffic Web site. Open connections degrade performance. The ConnectionTimeout metabase property is set to 15 minutes by default. For more information on this property see the "Performance Settings" section.
  • Use Expires Headers. Set Expires headers on both static and dynamic content to allow both types of content to be stored in the client's cache. This makes for faster response times, places less load on the server, and less traffic on the network. For example, you could create a header that specifies not to download your company's logo .jpg file if the user has already visited your site. To set Expires headers for static content, use the HTTP Headers property sheet. To set Expires headers for ASP pages, use the Response.AddHeader method. For more information on this method, see the IIS 5.0 online documentation.
  • Make Sure That ASP Buffering Is Enabled. ASP buffering is on by default after a clean install of Windows 2000. If you have upgraded from Windows NT 4.0 and IIS 4.0, you may need to turn it on. ASP buffering allows all output from the application to be collected in the buffer before being sent across the network to the client browser. This cuts down on network traffic and response times. Although buffering reduces response times, it may leave users with the perception that the page is slower and less interactive, as they see no data until the page has finished executing. Judicious use of Response.Flush can improve the perception of interactivity. For more information about the Response.Flush method, see the IIS 5.0 online documentation and the discussion in the ASP Tips article listed in the "Resources" section. See also the AspBufferingOn metabase entry in the "Performance Settings" section.
  • Lengthen Connection Queues and Use HTTP Keep-Alives. If you determine that your server does not have adequate bandwidth to meet demand and you are planning for increasing request loads, you can make better use of network bandwidth by doing two things: lengthening connection queues and verifying that HTTP Keep-Alives are enabled.

    Each IIS 5.0 service (FTP, Web, etc.) has a connection queue, which is set to 15 entries. If this number, under load, does not meet your needs, you can increase it by adding a ListenBackLog parameter to the registry and setting the value to the maximum number of connection requests you want the server to maintain. For more information, see the "Performance Settings" section.

    HTTP Keep-Alives maintain a client's connection to the server even after the initial request is complete. This feature reduces latency, reduces CPU processing, and optimizes bandwidth. HTTP Keep-Alives are enabled by default. To reset them if they have been disabled, select a site in the Internet Services Manager, open the site's Properties sheet, click the Performance tab, and select the HTTP Keep-Alives check box.

  • Reduce File Sizes. You can increase the performance of your Web server by reducing the sizes of the files being served. Image files should be stored in an appropriate compressed format. Cropping images and reducing color depths will also reduce file sizes. Limit the number of images and other large files where possible. You can also reduce file size by tightening up HTML and ASP code. Remove redundant blocks of code from ASP pages and make sure your HTML documents are efficiently authored. In particular, good use of Cascading Style Sheets can reduce the amount of HTML needed in individual pages.
  • Store Log Files on Separate Disks and Remove Nonessential Information. If your server hosts multiple sites, a separate log file is created for each site; disk writes for these logs can create a bottleneck on your server. Try storing logs on a separate partition or disk from your Web server. Another way to reduce disk bottlenecks is to avoid logging nonvital information. For example, you could place all of your image files in a virtual directory (for example, /images) and disable logging for that directory. To do this, open the property sheet of the directory, clear the Log visits check box, and then click OK. You could also use scripts or ISAPI filters to do this pruning. If yours is a particularly busy or large site, this can save you up to several gigabytes of disk space per day and significant log post-processing time.
  • Use RAID and Striping. To improve disk access, use a redundant array of independent drives (RAID) and striped disk sets. You may also want to consider a drive controller with a large RAM cache. If your site relies on frequent database access, move the database to another computer.
  • Defragment Your Disks. Access times grow longer as disks become more fragmented. Windows 2000 comes with a Disk Defragmenter.
  • Use CPU Throttling If Necessary. IIS 5.0 introduces two new features that deal with rogue applications: process accounting, which logs the CPU and other resources used by a Web site, and process throttling, which limits the amount of resources a Web site can consume.

Process accounting and process throttling work for both CGI (Common Gateway Interface) applications and for applications that are run out of process. You cannot activate accounting for in-process applications or for applications run in the new IIS 5.0 out-of-process pool (medium protection).

To turn on process accounting

  1. In the Internet Services Manager, select the Web site that you want to set up process accounting on.
  2. Open the site's property sheet, and click the Home Directory tab.
  3. In the Application Settings box, select High (Isolated).
  4. On the site's property sheet, click the Web Site tab, and make sure Enable Logging is selected.
  5. On the Web Site property sheet, click the Logging Properties button, and select Process Accounting.

The first two steps set the Web site to run out of process and the last two steps activate process accounting for that site.

For example, if you are an ISP and one of your customer sites is using more than its share of CPU time, you can activate process accounting and extend logging so that figures for the Job Object counters are recorded. With the information gathered from process accounting, you can then decide whether to upgrade the servers in your installation, to adjust the billing for this particular customer, or to limit the amount of resources the site can consume.

After determining the amount of resources the customer's site is consuming, you might want to limit that customer to a certain percentage of your available resources. This will free up resources for other customers. To limit a site's resources, run the site's applications out of process, and then turn on process throttling as follows:

  1. On the site's property sheet, click the Performance tab.
  2. Select Enable process throttling.
  3. In the Maximum CPU use box, set the percentage of CPU resources dedicated to the site.
  4. Select Enforce limits.

When the site reaches a predefined limit, it will take the defined action, such as reducing process priority, halting processes, or halting the site. Be aware that the site may actually exceed the apparent processor use limit if virtual directories within a throttled site are configured as in-process or pooled-process applications. In-process and pooled-process applications are not affected by processor throttling and are not included in process accounting statistics.

  • The following techniques will help you determine if you need to use processor throttling: log the Processor: % Processor Time, Web Service: Maximum CGI Requests, and Web Service: Total CGI Requests counters; enable process accounting so that Job Object counters are included in IIS logs; and examine the Dllhost object counters to determine the number of out-of-process WAM and ISAPI requests.

You should keep in mind that process throttling could sometimes backfire. Because the throttled Dllhost process is running at a lower priority, it won't respond quickly to requests from the Inetinfo process. This can tie up several I/O threads, harming your server's overall responsiveness. As always after making any kind of change, monitor your server closely once you have set up process throttling to see what effects it has on performance.


