One of the difficulties in tuning your Web server is knowing exactly what to tune. For this reason it is vital that you monitor your Web servers before you make any adjustments to settings, hardware, or Web applications, or even upgrade to Windows 2000 and IIS 5.0. This point cannot be emphasized enough: gathering baseline information about your servers will allow you to understand their behavior and refine your Web server performance goals. You can use the Performance Monitor and the Performance Counters built in to the operating system and IIS to establish this baseline. Once you have gathered your baseline data, analyze it to determine what the underlying reasons for performance problems may be before making a change, whether it be adding RAM or adjusting internal IIS settings. Once you've made a change, remember to monitor the servers again. Any change you make may have unforeseen effects on other components in your system.
This topic is broken into six sections: monitoring your hardware, security, monitoring your Web applications, Web-application tuning issues, tools you can use to monitor and stress test your system, and settings internal to Windows 2000 and IIS 5.0 that affect Web server performance.
NOTE
All of these issues are interrelated. From upgrading hardware to modifying internal settings, tuning your Web server will require you to carefully monitor how any changes affect the performance of your Web server.
Problems caused by memory shortages can often appear to be problems in other parts of the system. You should monitor memory first to verify that your server has enough, and then move on to other components. To run Windows 2000 and IIS 5.0, the minimum amount of RAM a dedicated Web server needs is 128 MB, but 256 MB to 1 GB is often better. Additional memory is particularly beneficial to e-commerce sites, sites with a lot of content, and sites that experience a high volume of traffic. Since the IIS File Cache is set to use up to half of the available memory by default, the more memory you have, the larger the IIS File Cache can be.
NOTE
Windows 2000 Advanced Server can support up to 8 GB of RAM and Windows 2000 DataCenter can support up to 32 GB, but the IIS File Cache cannot take full advantage of this unless you partition the system. Partitioning the system may be useful in server consolidation scenarios. With 32 processors, you can partition Windows 2000 DataCenter into eight autonomous machines, each using 1.5 gigabytes of RAM for system cache. Furthermore, if you add the /3GB switch in c:\boot.ini, then inetinfo.exe can address up to 3 GB of memory; otherwise, it's limited to 2 GB of address space. In addition, every instance of dllhost.exe can address up to 2 GB, so if you had a sufficiently large system, the cumulative memory used by all the IIS-related processes could go beyond 4 GB. As ISAPIs and ASPs execute at medium isolation by default, they are in an instance of dllhost separate from inetinfo.exe. In addition, high-isolation applications each have their own dllhost.
The static file cache lives in the inetinfo process and can store up to 1.5 GB of content. The ASP caches live in each process hosting asp.dll. The caches draw upon the total memory available to the process hosting them.
To determine if the current amount of memory on your server will be sufficient for your needs, use the Performance tool (formerly known as PerfMon) that is built in to Windows 2000. The System Monitor, which is part of the Performance tool, graphically displays counter readings as they change over time.
Also, keep an eye on your cache settings—adding memory alone won't necessarily solve performance problems. You need to be aware of IIS cache settings and how they affect your server's performance. If these settings are inappropriate for the loads placed on your server, they, rather than a lack of memory, may cause performance bottlenecks. For more information about these cache settings, see "IIS Settings" in the "Features and Settings in Windows 2000 and IIS 5.0" section and the "Performance Settings" section. For a discussion about caching with ASP and IIS, see the "ASP Caching" section.
NOTE
When using Performance counters to monitor performance, you can see a description of any counter by selecting that counter in the Add Counters dialog and clicking Explain.
Log the following counters to determine if there are performance bottlenecks associated with memory:
Besides adding more RAM, try the following techniques to enhance memory performance: improve data organization, try disk mirroring or striping, replace CGI applications with ISAPI or ASP applications, enlarge paging files, change the frequency of the IIS File Cache Scavenger, disable unnecessary features or services, and change the balance of the File System Cache to the IIS 5.0 Working Set. The last of these techniques is detailed later in this appendix.
For a detailed list of Windows 2000 and IIS 5.0 settings that affect these counter numbers, see the "Performance Settings" section.
With users demanding quick response time from Web sites and the increasing amount of dynamically generated content on these sites, a premium is placed on fast and efficient processor usage. Bottlenecks occur when one or more processes consume practically all of the processor time. This forces threads that are ready to be executed to wait in a queue for processor time. Adding other hardware, whether memory, disks or network connections, to try to overcome a processor bottleneck will not be effective and will frequently only make matters worse.
IIS 5.0 on Windows 2000 Server scales effectively across two to four processors, and with a little additional tuning scales well on eight processors. (See the "Tips for Getting the Most Out of an 8-Processor Machine" section.) Consider the business needs of your Web sites if you're thinking of adding more processors. For example, if you host primarily static content on your server, a two-processor computer is likely to be sufficient. If you host dynamically generated content, a four-processor setup may solve your problems. However, if the workload on your site is sufficiently CPU-intensive, no single computer will be able to keep up with requests. If this is the case for your site, you should scale it out across multiple servers. If you already run your site on multiple servers, consider adding more.
You should be aware, however, that the biggest performance gains with Windows 2000 and IIS 5.0 result from resolving inadequate memory issues. Before you make any decisions about changing the number of processors on your Web servers, rule out memory problems and then monitor the following Performance Counters.
Scaling Out Across Multiple Computers
If processor problems persist, try scaling your site out across multiple computers using Network Load Balancing (NLB), a hardware load balancer, or Microsoft's deployment and management tool, Application Center. While setting up a Web farm using one of these methods adds a layer of complexity and introduces a number of other issues, it is likely to solve a number of your performance issues if your site is large enough. For more information about NLB, see Appendix B in the Application Center 2000 Resource Kit, "Network Load Balancing Technical Overview."
Essentially, the network is the line through which clients send requests to your server. The time it takes for those requests and responses to travel to and from your server is one of the largest limiting factors in user-perceived server performance. This request-response cycle time is called latency, and latency is almost exclusively out of your control as a Web server administrator. For example, there is little you can do about a slow router on the Internet or the physical distance between a client and your server.
On a site consisting primarily of static content, network bandwidth is the most likely source of a performance bottleneck. Even a fairly modest server can completely saturate a T3 connection (45 Mbps) or a 100 Mbps Fast Ethernet connection. You can mitigate some of these issues by tuning the connection you have to the network and maximizing your effective bandwidth as best you can.
The simplest way to measure effective bandwidth is to determine the rate at which your server sends and receives data. There are a number of Performance Counters that measure data transmission in many components of your server. These include counters on the Web, FTP, and SMTP services, the TCP object, the IP object, and the Network Interface object. Each of these reflects a different Open Systems Interconnection (OSI) layer. For a detailed list of these counters and their analysis, see the Internet Information Services 5.0 Resource Guide, released with the Windows 2000 Server Resource Kit. In particular, see the Network I/O section of the "Monitoring and Tuning Your Server" chapter. To start, however, use the following counters:
See the "Tuning and Troubleshooting Suggestions" section later in this appendix for suggestions on how to reduce bandwidth usage by reducing file sizes and by enabling proxy and client caching.
Since IIS 5.0 writes logs to disk, there is regular disk activity even with 100 percent client cache hits. Generally speaking, if there is high disk read or write activity other than logging, other areas of your system need to be tuned. For example, hard page faults cause large amounts of disk activity, but they are indicative of insufficient RAM.
Accessing memory is faster than disk seeks by a factor of roughly 1 million (nanoseconds versus milliseconds); clearly, searching the hard disk to fill requests will degrade performance. The type of site you host can have a significant impact on the frequency of disk seeks. If your site has a very large file set that is accessed randomly, if the files on your site tend to be very large, or if you have a very small amount of RAM, then IIS is unable to maintain copies of the files in RAM for faster access.
Typically, you will use the Physical Disk counters to watch for spikes in the number of disk reads when your server is busy. If you have enough RAM, most connections will result in cache hits unless you have a database stored on the same server, and clients are making dissimilar queries. This situation precludes caching. Be aware that logging can also cause disk bottlenecks. If there are no obvious disk-intensive issues on your server but you see lots of disk activity anyway, you should immediately check the amount of RAM on your server to make sure you have enough memory.
To determine the frequency of disk access, log the following counters:
Balancing performance with users' concerns about the security of your Web applications is one of the most important issues you will face, particularly if you run an e-commerce Web site. Since secure Web communication requires more resources than nonsecure Web communications, it is important that you know when to use various security techniques, such as the SSL protocol or IP address checking, and when not to use them. For example, your home page or a search results page most likely doesn't need to be run through SSL. However, when a user goes to a checkout or purchase page, you will want to make sure that page is secure.
If you do use SSL, be aware that establishing the initial connection is five times as expensive as reconnecting by using security information in the SSL session cache. The default timeout for the SSL session cache has been changed from two minutes in Microsoft Windows NT 4.0 to five minutes in Windows 2000. Once this data is flushed, the client and server must establish a completely new connection. If you plan on supporting long SSL sessions, consider lengthening this timeout with the ServerCacheTime registry setting (described in the "Performance Settings" section). If you expect thousands of users to connect to your site by using SSL, a safer approach is to estimate how long you expect SSL sessions to last, and then set the ServerCacheTime parameter to slightly longer than your estimate. Do not set the timeout much longer than this or your server may leave stale data in the cache. Also, make sure that HTTP Keep-Alives are enabled (on by default). SSL sessions do not expire when used in conjunction with HTTP Keep-Alives unless the browser explicitly closes the connection.
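The sizing advice above can be sketched in a few lines. The helper below is purely illustrative (`suggested_server_cache_time` is a hypothetical name, not a Windows API), and it assumes the ServerCacheTime registry value is expressed in milliseconds, as the Schannel documentation describes:

```python
def suggested_server_cache_time(expected_session_seconds, margin=0.25):
    """Return a candidate ServerCacheTime value in milliseconds:
    the expected SSL session length padded by a safety margin,
    so sessions outlive the cache entry only rarely."""
    return int(expected_session_seconds * (1 + margin) * 1000)
```

For example, if you expect SSL sessions to last about five minutes, this suggests a cache time of 375 seconds rather than a much larger value that would leave stale data in the cache.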
In addition to all security techniques having performance costs, Windows 2000 and IIS 5.0 security services are integrated into a number of operating system services. This means that you can't monitor security features separately from other aspects of those services. Instead, the most common way to measure security overhead is to run tests comparing server performance with and without a security feature. The tests should be run with fixed workloads and a fixed server configuration so that the security feature is the only variable. During the tests, you probably want to measure the following:
If a server is running IIS 5.0 and working as a domain controller, the proportion of processor use, memory, and network and disk activity consumed by domain services is likely to increase the load on these resources significantly. The increased activity can be enough to prevent IIS 5.0 services from running efficiently. For enhanced performance and for security reasons, it is highly recommended that you refrain from running a high-traffic Web server on a domain controller. For more information, see the Domain Controller topic in your Windows online documentation.
Upgrading a poorly written application to one that is well designed and has been thoroughly tested can improve performance dramatically (sometimes as much as thirtyfold). Keep in mind, however, that your Web applications may be affected by back-end latencies (for example, legacy systems such as AS/400). Remote data sources may cause performance problems for any number of reasons. If developers design applications to get data from another Web site and that Web site crashes, it can cause a bottleneck on your server. If applications are accessing a remote Microsoft SQL Server database, the database may have problems keeping up with requests sent to it. While you may be the administrator of your site's SQL database, it can be difficult to monitor these servers if they are remotely located. Worse, you may have no control over the database servers, or other back-end servers. If you can, monitor the back-end servers that work with your applications and keep them as well tuned as you do your Web server.
To determine if your Web applications are creating a bottleneck on your server, monitor the following performance counters:
NOTE
These counters are the sum of the ASP performance counters for each process hosting ASP, and there is no way to break them down by process.
NOTE
ASP is an ISAPI Extension and is included by the second counter.
IIS 5.0 is typically very efficient at serving static HTML pages with the out-of-the-box settings. If your site hosts primarily static content, many performance problems can be hardware related. IIS 5.0 offers improved performance for Web applications, but some additional tuning may be required to optimize performance. Of course, issues about best practices in Web application design and coding remain, regardless of improvements in server software. While this appendix does not attempt to discuss the intricacies of tuning your Web applications, this section does provide some pointers and recommendations to make them perform faster. Consider the following suggestions while planning and testing your Web applications before running them on your production servers.
First of all, ISAPI applications run faster than Active Server Pages (ASP) applications, although there is much less developer overhead for ASP. Both of these types of applications run faster than equivalent CGI applications.
Second, you should use static files wherever you can because they don't have the processing load or cause the disk activity that dynamic files do. In conjunction with this, your applications should push as much of the processing load onto the client as possible in order to avoid network latencies. This also saves on server-side resources and allows changes to appear instantaneously to the user. A common example is adding client-side code to validate that forms have been filled out with good data, such as checking that email addresses are well formed or credit card numbers have a correct checksum.
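The credit card checksum mentioned above is the Luhn algorithm. On a real site the check would run in client-side script; the sketch below shows the algorithm itself in Python for clarity:

```python
def luhn_valid(number: str) -> bool:
    """Return True if the digits in `number` pass the Luhn checksum,
    the standard sanity check for credit card numbers."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    if len(digits) < 2:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9          # equivalent to summing the two digits
        total += d
    return total % 10 == 0
```

A check like this catches most typos before the form is ever submitted, sparing the server a round trip.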
Another tactic is to make sure that ASP debugging is turned off on your production servers. If debugging is enabled, turn it off by setting the AppAllowDebugging metabase property to FALSE. For more information, see the "Performance Settings" section.
Set Expires headers for all images and for HTML wherever possible to allow both to be stored in the client's cache. For more information, see the "Tuning and Troubleshooting Suggestions" section later in this appendix.
Do not store apartment-threaded components in ASP Application and Session state. This includes all Microsoft Visual Basic components, but not Java or most C++ objects.
Use Secure Sockets Layer (SSL) only when necessary. Using the HTTPS protocol is much more expensive than standard HTTP. Be sure that the information being sent, such as credit card numbers or medical information, is sensitive enough to warrant the added expense. For more information about security tuning issues, see the "Security" section earlier in this appendix.
Process isolation will affect Web application performance as well. IIS 5.0 Web applications run in the out-of-process pool (medium protection) by default. It is safer to take the performance impact of process isolation than to risk server downtime and data loss that can be caused by a low-isolation application crashing the Inetinfo process. For a more in-depth discussion of this topic, see the "Process Isolation" section later in this appendix.
To enhance database-driven performance in a production environment, use Microsoft SQL Server 2000. Because both IIS and SQL Server perform best with plenty of memory, try storing the database on a separate server from the Web service. In this situation, communication across computer boundaries is frequently faster than communication on a single computer. Performance degradation due to a lack of memory and insufficient cycles often occurs when both SQL Server and IIS reside on the same server. Also, be sure to create and maintain good indexes. This will minimize I/O on your database queries. Last but not least, take advantage of stored procedures. They take much less time to execute and are easier to write than an ASP script designed to do the same task.
As a rule of thumb, if you have an ASP script that's more than 100 lines long (counting lines of code in files brought in using the #include directive), consider creating a COM+ component to provide the same function. If written efficiently and debugged properly, COM+ components can offer 20 to 30 times the processing speed of a script for the same dynamic page. The easiest way to measure the size of an ASP script with #includes is to change the file extension of the page from .asp to .stm and open the .stm file with your browser. Use your browser's View Source command to display the .asp file and lines of code from the included files.
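If you would rather script the count than use the .stm trick, a rough sketch along the following lines totals the lines of a page plus its #include files. This is illustrative only: `count_lines` is a hypothetical helper, and it resolves both file and virtual includes relative to the page, which is a simplification of how IIS actually maps virtual paths.

```python
import re
from pathlib import Path

# Matches ASP server-side include directives, e.g.
# <!-- #include file="header.inc" -->
INCLUDE_RE = re.compile(
    r'<!--\s*#include\s+(?:file|virtual)\s*=\s*"([^"]+)"\s*-->', re.I)

def count_lines(path, seen=None):
    """Count lines in an ASP page, recursing into #include directives.
    `seen` guards against circular includes."""
    if seen is None:
        seen = set()
    path = Path(path)
    if path in seen:
        return 0
    seen.add(path)
    total = 0
    for line in path.read_text().splitlines():
        total += 1
        m = INCLUDE_RE.search(line)
        if m:
            total += count_lines(path.parent / m.group(1), seen)
    return total
```

If the total crosses the 100-line rule of thumb, that page is a candidate for conversion to a COM+ component.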
For you to maximize performance for your dynamic Web applications, it is very important to stress test your applications before you make them live on your site. If your production Web server is a multiprocessor system, it is important to stress test on a multiprocessor system. This will help identify multiprocessor scaling problems and race conditions in your script and components. A very good tool for doing this is the Web Application Stress (WAS) tool, which can be downloaded from the Microsoft Web Application Stress Tool site (http://webtool.rte.microsoft.com/). Included at this site are a tutorial and a knowledge base dedicated to the tool. WAS is also included on the Windows 2000 Resource Kit companion CD.
For more information about tools for measuring the performance of your Web servers and applications, see the "Tools to Monitor and Test Server Performance" section. For a list of links and references about Web application performance and the tools to test that performance, see the "Resources" section at the end of this appendix.
To support your performance tuning and testing needs, Microsoft offers a number of tools: some included with Windows 2000 and IIS 5.0, others offered on the Windows 2000 Resource Kit CD, and still others downloadable from the Microsoft Web site. The System Monitor (formerly known as PerfMon) is built in to Windows 2000 and is essential to monitoring nearly every aspect of server performance. Process and Thread Status (pstat.exe) shows the status of all running processes and threads. Process Tree (ptree.exe) allows you to query the process inheritance tree and kill processes on local or remote computers. These tools are available on the Windows 2000 Server Resource Kit companion CD. The HTTP Monitoring Tool, available on the companion CD to the Windows 2000 Resource Kit, monitors HTTP activity on your servers and can notify you if there are changes in the amount of activity. Network Monitor is a Windows 2000 administrative tool you can use to keep tabs on network traffic. It is not installed by default, but you can install it by using the Add/Remove Programs feature of the Control Panel.
NOTE
A lightweight version of Network Monitor comes with Windows 2000 Server; the full-featured version is available with Microsoft Systems Management Server. NetStat is a command line tool that detects information about your server's current network connections.
At the center of these tools are the Performance Counters that are built into IIS 5.0 and the Windows 2000 operating system. Developers can also include custom Performance Counters in the ISAPI DLLs or COM components that they write. These counters can be read directly by a number of the tools mentioned above, including System Monitor, the Web Application Stress Tool, and WCAT. A number of these counters have been mentioned throughout this document; it is important to know which are pertinent to your monitoring and testing requirements.
System Monitor is the single most important tool to establish a baseline of performance on your Web server and monitor the effects on performance of any changes you make to software or hardware. System Monitor provides a UI that allows you to see performance counter readings whether you are monitoring or logging them. It also allows you to graphically log counter activity and set alerts that will appear in Event Viewer. System Monitor provides documentation for each counter in your system.
The Web Application Stress tool is designed specifically to simulate multiple browsers requesting pages from a Web site. You can use this tool to gather information about the performance and stability of your Web applications and about how your servers are performing. This tool simulates a large number of requests with a relatively small number of client machines. The goal is to create an environment as similar to a production environment as possible. This allows you to find and eliminate problems in your Web server and applications prior to deploying them on your production servers.
For more information on any of these tools, see the online IIS 5.0 documentation included in the Windows 2000 Resource Kit. Links to other sources of information are included in the "Resources" section.
If you currently have a well-tuned Web site running on Windows NT Server 4.0 with IIS 4.0, that site should perform well on Windows 2000 Server and IIS 5.0. For more information, see the white paper, "Windows 2000 Performance: An Overview," which is located at http://www.microsoft.com/windows2000/guide/platform/performance/overview.asp. You will want to monitor your server and site as you make the transition. You should be aware of some new features in Windows 2000 and IIS 5.0 that are designed for better performance and ease of administration. In addition, some default settings have changed from IIS 4.0 to IIS 5.0. This section discusses these features and changes.
Setting Windows 2000 as an Application Server
If you plan to use your server primarily as a Web server, setting up your server computer as an application server is a quick way to improve performance. This allows you to take advantage of better SMP scalability, improved networking performance, and support for more physical memory for your Web applications. In addition, you can use the transaction-processing capabilities of COM+ as a transaction monitor to improve performance of database applications. Windows 2000 Server installs as a file server by default, so you should make sure to select the application server during the installation process. If you don't, however, it is easy to configure your server as an application server after installation. To do so:
This configuration will not take effect until you reboot the server.
IISReset Utility
IIS 5.0 offers a number of new features and default settings to help make Web sites that run on it more reliable and easier to administer. The first of these is the new IISReset.exe, a utility that allows you to stop and restart IIS services without rebooting your computer. By default, IISReset will restart your services if they fail. You can also use IISReset to remotely start, stop, or pause your services, and to reboot your server computer if necessary. You should reboot only as a last resort. If you restart your Web service with IISReset, users will experience a small pause, during which they have only to hit refresh to get a new page. If the entire computer is rebooted, the unavailability is longer. You can also isolate the services that you stop. For example, if you are running an SMTP server on the same computer as your Web server, you can choose to simply stop and restart your Web service rather than taking down the SMTP services as well.
You should be aware that frequent reboots and resets will compromise the integrity of your performance data. If you are using IISReset to restart services automatically, it may mask underlying service failures, so you should always monitor the Event Log for restarts.
IIS Settings
The AspProcessorThreadMax metabase property has changed. Formerly called ProcessorThreadMax and stored in the registry in IIS 4.0, its default value was 10. The new default value in IIS 5.0 is 25. This setting is per processor and per process. On a dual-processor system, the number of worker threads in each process can be up to twice as high as the AspProcessorThreadMax value, or up to 50 worker threads (with the default settings). If you are running several high-isolation ASP applications, each process will have an independent set of worker threads.
NOTE
ASP starts out with a number of worker threads that is equal to the number of processors plus seven. It creates more threads when the size of the ASP request queue passes certain thresholds.
The AspThreadGateEnabled property has been added to the metabase. It is off by default. If you turn this property on, IIS performs thread gating, which dynamically controls the number of concurrently executing threads in response to varying load conditions. When processor utilization drops below 50 percent, which could indicate that threads are blocked (for example, while waiting for an external database to return the results of a query) or simply that the load is light, IIS 5.0 increases the number of active threads so that other requests can be serviced in a timely manner. When processor utilization exceeds 80 percent, indicating a heavy load, IIS 5.0 deactivates threads to reduce the amount of context switching. Both lower and upper limits can be set: AspThreadGateLoadLow defaults to 50 percent, while AspThreadGateLoadHigh defaults to 80 percent. Regardless of the value of AspThreadGateEnabled, an ASP process will never have more worker threads than the number of processors multiplied by AspProcessorThreadMax.
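The documented gating behavior can be sketched as follows. This is illustrative only (the actual heuristics inside IIS are more involved than a single threshold check), but it mirrors the documented thresholds and the worker-thread ceiling:

```python
def gate_threads(active, cpu_pct, processors,
                 thread_max=25, load_low=50, load_high=80):
    """Sketch of ASP thread gating: grow the pool under light or blocked
    load, shrink it under heavy load, and never exceed
    processors * AspProcessorThreadMax. Defaults reflect the documented
    values for AspProcessorThreadMax, AspThreadGateLoadLow, and
    AspThreadGateLoadHigh."""
    ceiling = processors * thread_max
    if cpu_pct < load_low and active < ceiling:
        return active + 1   # threads may be blocked; add a worker
    if cpu_pct > load_high and active > 1:
        return active - 1   # heavy load; cut context switching
    return active
```

For example, on a dual-processor system the pool stops growing at 50 workers no matter how low CPU utilization falls.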
For sites that do a lot of ASP processing, it is best to test performance with thread gating turned on and with it turned off to see what the effects are. Make your final decision based on your observations. For sites that are made up primarily of static files, turn the setting on and monitor server performance to see if throughput and response time are improved.
IIS 5.0 has also changed the default behavior of the ASP Template Cache. In IIS 4.0, the ASP Template Cache limit defaulted to –1. With this setting, this cache could grow arbitrarily large. On Web sites with lots of ASP content, the ASP Template Cache tended to fill all of the RAM in the server. In contrast, the IIS 5.0 default limit is 250 files. Because each site has its own requirements, you should reset the limit to meet your site's particular needs. Perhaps the easiest way to accomplish this is to monitor performance as you increase and decrease the value. Because an entry in this cache can be pointed to by one or more entries in the ASP Script Engine Cache, and because best performance occurs if the scripts in ASP pages are found in the ASP Script Engine Cache, you should never set the limit on the ASP Template Cache to zero. Doing so prevents any hits on the ASP Script Engine Cache, because the ASP Script Engine Cache entry for a particular .asp file can only be referenced through its template. Thus, if no templates are cached, the ASP Script Engine Cache is rendered useless. ASP Script Engine Cache hits provide better performance than hits on the ASP Template Cache, so if you make ASP Script Engine Cache hits impossible, performance suffers badly unless all your pages are static. From IIS 4.0 to IIS 5.0, the ASP Script Engine Cache limit has been upped from 30 to 125 files. To determine if you need to change your cache settings, you should keep an eye on response times, the number of ASP requests in the queue, the number of context switches, and the amount of CPU utilization.
NOTE
The ASP Script Engine Cache setting should be at least (the number of CPUs on your server plus one) multiplied by the AspProcessorThreadMax setting. The default limit is probably too small for most 8-processor systems.
Also, you should consider adjusting the default settings for the IIS File Cache. You can add these settings to the registry to modify IIS 5.0 default behavior. The first setting you should consider adding is the MemCacheSize value; if this is not present in the registry, the default behavior is to allow the cache to grow to a maximum of half the available physical memory. This ensures that IIS interacts well with other applications on machines that are not dedicated Web servers. Try raising this limit (specified in MB) and monitoring performance to see if there are any gains. The second registry value you should consider adding is MaxCachedFileSize. The IIS default behavior is to allow a maximum file size of 256 KB in the cache. If you have a site with several large JPEG files that are accessed regularly, you may want to experiment with raising this limit to determine whether caching files larger than 256 KB will work for your site. Be aware, though, that file sizes around 200 to 300 KB mark a point of diminishing returns: for smaller files, the overhead of reading from disk rather than the IIS File Cache is significant, while for larger files you won't get much performance improvement and are more likely to just waste memory. IIS regularly purges files that have not been requested recently (within the last 30 seconds, by default) from the cache. The threshold is determined by the ObjectCacheTTL (TTL stands for Time To Live) registry setting, which is not present in the registry by default. If you have plenty of memory, it may be effective to adjust this TTL upwards.
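To make the two knobs concrete, here is an illustrative sketch of the admission check implied by MaxCachedFileSize and the scavenger behavior implied by ObjectCacheTTL. The names `should_cache` and `scavenge` are hypothetical, not IIS APIs:

```python
import time

def should_cache(file_size_bytes, max_cached_file_size=256 * 1024):
    """Admission check mirroring MaxCachedFileSize (default 256 KB):
    only files at or below the limit are eligible for the File Cache."""
    return file_size_bytes <= max_cached_file_size

def scavenge(cache, ttl_seconds=30, now=None):
    """Sketch of the File Cache Scavenger: keep only entries requested
    within the last ObjectCacheTTL seconds. `cache` maps a file name to
    the time of its most recent request."""
    now = time.time() if now is None else now
    return {name: t for name, t in cache.items() if now - t <= ttl_seconds}
```

Raising MaxCachedFileSize widens the admission check; raising ObjectCacheTTL lets entries survive longer between requests at the cost of memory.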
For a discussion of how IIS and ASP use caches to process incoming requests, see the "ASP Caching" section later in this appendix.
Process Isolation
IIS 4.0 introduced the concept of running Web applications out of process. This feature created greater stability for Web servers, but at a significant performance cost. In IIS 5.0, the performance of out-of-process applications has improved, especially for ASP. Some performance degradation remains, however, in comparison with IIS 5.0 in-process applications. In addition to improved performance, the concept of running applications out of process has been expanded. You can now run Web applications in a pooled out-of-process environment.
Applications that run in the Web services process (Inetinfo.exe) deliver higher performance, but there is a greater risk that a misbehaving application can make the Web service unavailable. The recommended configuration is to run Inetinfo.exe in its own process, run mission-critical applications in their own processes (high protection), and run remaining applications in a shared, pooled process (medium protection). For the best balance of performance and reliability, run ASP applications in medium protection and configure any COM+ components as library applications, not server applications.
If you decide to run your application as a separate process, or with other applications in a single pooled process, first create an application directory and designate it as either a Home Directory or a Virtual Directory, if you haven't already done so. Then select High (Isolated) or Medium (Pooled) from the Application Protection drop-down list on the Home Directory or Virtual Directory property sheet. By default, all new applications run in medium protection. You can run a very large number of applications at medium isolation, but only a few dozen at high isolation, because each isolated process consumes significant resources.
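Application Protection can also be set from the command line through the AppIsolated metabase property, where 0 is in-process (low), 1 is isolated (high), and 2 is pooled (medium). The path w3svc/1/root/MyApp below is a hypothetical application on the default Web site:

```
REM Illustrative only -- "MyApp" is a hypothetical virtual directory.
REM AppIsolated: 0 = Low (in-process), 1 = High (isolated), 2 = Medium (pooled)
cscript adsutil.vbs set w3svc/1/root/MyApp/AppIsolated 2
```

This is equivalent to choosing Medium (Pooled) on the property sheet, and is convenient when configuring many applications at once.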
For more information about these registry settings and metabase properties, see the "Performance Settings" section. For more information about the features mentioned in this section, see the IIS 5.0 and Windows 2000 online documentation.
If you determine that you need to address specific hardware-driven performance issues, consider using the following suggestions.
Each IIS 5.0 service (FTP, Web, and so on) has a connection queue, which defaults to 15 entries. If this number does not meet your needs under load, you can increase it by adding a ListenBackLog parameter to the registry and setting its value to the maximum number of connection requests you want the server to maintain. For more information, see the "Performance Settings" section.
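As a sketch, ListenBackLog goes in the same Inetinfo Parameters key as the File Cache settings discussed earlier; the value of 100 below is illustrative, not a recommendation:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\InetInfo\Parameters]
; Raise the connection queue from its default of 15 entries
; to 100 (0x64) pending connection requests
"ListenBackLog"=dword:00000064
```

Raising this value lets the server hold more pending connections during traffic spikes, at the cost of slightly more memory per queued connection.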
HTTP Keep-Alives maintain a client's connection to the server even after the initial request is complete. This feature reduces latency, reduces CPU processing, and optimizes bandwidth. HTTP Keep-Alives are enabled by default. To reset them if they have been disabled, select a site in the Internet Services Manager, open the site's Properties sheet, click the Performance tab, and select the HTTP Keep-Alives check box.
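If you prefer to script the change rather than use the Internet Services Manager, Keep-Alives correspond to the AllowKeepAlive metabase property; the site number 1 below (the default Web site) is an illustrative example:

```
REM Illustrative only -- re-enables HTTP Keep-Alives for site 1.
cscript adsutil.vbs set w3svc/1/AllowKeepAlive 1
```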
Process accounting and process throttling work both for CGI (Common Gateway Interface) applications and for applications run out of process. You cannot activate accounting for in-process applications or for applications run in the new IIS 5.0 out-of-process pool (medium protection).
To turn on process accounting
The first two steps set the Web site to run out of process and the last two steps activate process accounting for that site.
For example, if you are an ISP and one of your customer sites is using more than its share of CPU time, you can activate process accounting and extend logging so that figures for the Job Object counters are recorded. With the information gathered from process accounting, you can then decide whether to upgrade the servers in your installation, to adjust the billing for this particular customer, or to limit the amount of resources the site can consume.
After determining the amount of resources the customer's site is consuming, you might want to limit that customer to a certain percentage of your available resources. This will free up resources for other customers. To limit a site's resources, run the site's applications out of process, and then turn on process throttling as follows:
When the site reaches a predefined limit, it will take the defined action, such as reducing process priority, halting processes, or halting the site. Be aware that the site may actually exceed the apparent processor use limit if virtual directories within a throttled site are configured as in-process or pooled-process applications. In-process and pooled-process applications are not affected by processor throttling and are not included in process accounting statistics.
Keep in mind that process throttling can sometimes backfire. Because the throttled Dllhost process runs at a lower priority, it won't respond quickly to requests from the Inetinfo process. This can tie up several I/O threads, harming your server's overall responsiveness. As with any change, monitor your server closely after you set up process throttling to see what effect it has on performance.