Availability is a measure (from 0 to 100 percent) of the fault tolerance of a computer and its programs. The goal of a highly available computer is to run 24 hours a day, 7 days a week, which means that applications and services are operational and usable by clients most of the time.
Failure is defined as a departure from expected behavior on an individual computer system or a network system of associated computers and applications. Failures can include behavior that simply moves outside of defined performance parameters.
Fault tolerance is the ability of a system to continue functioning when part of the system fails. Fault tolerance combats problems such as disk failures, power outages, or corrupted operating systems, which can affect startup files, the operating system itself, or system files. Windows 2000 Server includes features that support certain types of fault tolerance.
Manageability is the ability to make changes to the system easily. Management has many facets, but it can be loosely divided into the following disciplines: change and configuration management, security management, performance management, problem management, event management, batch/output management, and storage management.
Reliability is a measure of the time that elapses between failures in a system. Hardware and software components have different failure characteristics. Although formulas based on historical data exist to predict hardware reliability, it’s difficult to find formulas for predicting software reliability.
Scalability is a measure of how well a computer, service, or application can expand to meet increasing performance demands. For server clusters, scalability refers to the ability to incrementally add one or more systems to an existing cluster when the cluster’s overall load exceeds its capabilities.
Clients, which issue service requests to the server hosting the application being accessed; front-end systems, the collections of servers that provide core services, such as HTTP/HTTPS and FTP, to the clients; and back-end systems, the servers hosting the data stores used by the front-end systems.
MTTF (mean time to failure) is the mean time until a device fails, and MTTR (mean time to recovery) is the mean time it takes the device to recover from a failure. Availability can be expressed as MTTF ÷ (MTTF + MTTR), so downtime is driven by the ratio of MTTR to MTTF.
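As a short sketch of how these measures relate, using assumed example values rather than figures from the text:

```python
# Illustrative values only -- not from the text.
mttf_hours = 2000.0   # mean time to failure
mttr_hours = 4.0      # mean time to recovery

availability = mttf_hours / (mttf_hours + mttr_hours)
downtime_fraction = mttr_hours / (mttf_hours + mttr_hours)

print(f"{availability:.4%}")   # 99.8004%
```

Note that downtime_fraction is simply 1 minus availability; improving either MTTF or MTTR improves both numbers.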
Software failures, hardware failures, network failures, operational failures, and environmental failures can cause system outages.
Develop operational procedures that are well documented and appropriate for your goals and your staff’s capabilities.
Ensure that your site has enough capacity to handle processing loads.
Reduce the probability of failure.
Create a robust architecture based on redundant, load-balanced servers. (Note, however, that load-balanced clusters are different from Windows application clusters. Commerce Server 2000 components, such as List Manager and Direct Mailer, are not cluster aware.)
Review code to avoid potential buffer overflows, infinite loops, code crashes, and openings for security attacks.
The network topology should look similar to the design shown in the following illustration:
Your design should be similar to the one in the following illustration:
Notice that the design shown in the answer places the application clusters in a middle tier. Some topologies include only a front-end tier and a back-end tier.
You will need to create four network segments:
Your design should be similar to the one in the following illustration:
Notice that the management network is connected to each cluster and is on its own subnet. Also notice that in this topology the application cluster is connected to three different network segments: the middle tier (10.10.1.0), the back-end tier (10.10.2.0), and the management tier (10.10.4.0).
The Web clusters connect to the front end network, which is connected to the Internet; the middle tier (10.10.1.0); and the management network (10.10.4.0).
To simplify name resolution for internal clients, use a different domain name for your internal and external namespaces. You can use the same name internally and externally, but doing so causes configuration problems and generally increases administrative overhead. If you want to use the same domain name internally and externally, you need to perform one of the following actions:
Your design should be similar to the one in the following illustration:
Notice that the management server is now labeled with an FQDN: mgmt.contoso-pvt.com.
The FQDN for the Internet side of the server would be web1.contoso.com, and the FQDN for the private side of the server would be web1.contoso-pvt.com.
The FQDN for the server would be app1.contoso-pvt.com.
Many users have been complaining that your site is often unavailable. You plan to modify the network topology to increase availability. What’s the first step you should take?
Add a redundant connection to the Internet.
You should add a management subnet (such as 10.10.4.0) that connects to each cluster.
Use the 80/20 rule to divide scope addresses between the DHCP servers. The primary server should receive about 80 percent of the available addresses, and the backup server should receive about 20 percent.
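For a scope of, say, 200 addresses (an assumed figure for illustration), the 80/20 split works out as:

```python
scope_size = 200   # assumed example: usable addresses in the scope

primary_addresses = round(scope_size * 0.80)        # 160 on the primary server
backup_addresses = scope_size - primary_addresses   # 40 on the backup server
```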
You must perform one of the following actions:
You should label the diagram in a way similar to that shown in the following illustration:
Notice that six 10-GB physical disks (60 GB) are used to store data, but the logical disks support only 40 GB of storage.
In a RAID-1 configuration the same data is written to each of the two disks. As a result, disk space usage is only 50 percent of the total for both disks. RAID-5 uses the equivalent of one physical disk to support its fault-tolerant configuration. In this case, one disk equals 10 GB, so 10 GB are used for parity information, leaving 30 GB for storage.
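These capacity rules can be checked with a short calculation, using the disk sizes from the answer above (two 10-GB disks mirrored in RAID-1, four 10-GB disks in RAID-5):

```python
disk_gb = 10

# RAID-1 mirror: two disks, usable space is half the raw capacity.
raid1_usable = disk_gb * 2 // 2             # 10 GB

# RAID-5: with n disks, the equivalent of one disk holds parity.
raid5_disks = 4
raid5_usable = disk_gb * (raid5_disks - 1)  # 30 GB

total_raw = disk_gb * (2 + raid5_disks)     # 60 GB of physical disk
total_usable = raid1_usable + raid5_usable  # 40 GB of logical storage
```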
You should label the diagram in a way similar to that shown in the following illustration:
Notice that two 10-GB disks are used to support the RAID-1 configuration, but only 10 GB of storage are available on the logical disk.
In a RAID-1 configuration the same data is written to each of the two disks. As a result, disk space usage is only 50 percent of the total for both disks.
You should configure the data storage system in a way similar to that shown in the following illustration:
You should configure the data storage system in a way similar to that shown in the following illustration:
50 GB
You should configure each server with the following redundant components:
The servers’ environment should be checked to make certain that the room temperature is about 70°F (21°C), that a proper amount of humidity is maintained, and that the computers and the computer room are kept clean.
Storage area network (SAN)
Recommend the software implementation of RAID-5 that’s available in Windows 2000 Server.
For each of the following steps, identify how your file server resource group will be configured:
No server-based applications are running on these servers. (Examples of server-based applications include Microsoft SQL Server 2000 and Microsoft Exchange Server.)
No server-based applications are running on these servers.
File Share, IP Address, Network Name, and Physical Disk
The File Share resource type depends on the Network Name resource type, and the Network Name resource type depends on the IP Address resource type. The File Share resource type also depends on the Physical Disk resource type.
You should create only one resource group because a resource and its dependencies must be together in a single group. In addition, a resource can’t span groups.
You should create only one resource group. The dependency should look similar to the one shown in the following illustration:
With a single-node configuration, you can organize resources for administrative convenience, use virtual servers, restart applications automatically, and more easily create a cluster later. However, this model can’t make use of failover. If an application can’t be restarted, it becomes unavailable.
An active/passive configuration provides the maximum availability for your resources. However, this model also requires an investment in hardware that’s not used most of the time. If the primary node fails, the secondary node immediately picks up all operations. This model is best suited for those applications and resources that must maintain the highest availability.
An active/active configuration provides high availability and performance when both nodes are online and provides reliable and acceptable performance when only one node is online. Services remain available during and after failover, but performance can decrease, which can affect availability.
An active/active configuration best suits the needs of Wingtip Toys because this configuration allows maximum use of hardware resources while providing highly available services. Because performance degradation after failover isn’t an overriding concern, an active/passive configuration isn’t necessary.
Listing the Server-Based Applications
SQL Server 2000 and Exchange 2000 Server
Sorting the List of Applications
Both SQL Server 2000 and Exchange 2000 Server can use failover, and you should set up both to use it.
Listing Other Resources
You should include the following resources: Physical Disk, Network Name, and IP Address.
Listing Dependencies
Each service (mail and database) is dependent on the Physical Disk resource and the Network Name resource. Each Network Name resource is dependent on the IP Address resource.
Making Preliminary Grouping Decisions
You should create two groups: one for the database service and its related resource types (Physical Disk, Network Name, and IP Address), and one for the mail service and its related resource types (Physical Disk, Network Name, and IP Address).
This grouping strategy allows the mail service to run on one node and the database service to run on another node, which supports an active/active configuration.
Making Final Grouping Assignments
In the database resource group, the database resource is dependent on the Physical Disk resource type and the Network Name resource type, and the Network Name resource type is dependent on the IP Address resource type. The mail resource group has the same dependencies.
For each group, set the Cluster service to restart the group before failover occurs.
Configure each group so it always runs on a designated node whenever the node is available. You should configure the database group so that one of the servers is set as the preferred node, and you should configure the mail group so that the other server is set as the preferred node.
Configure each group to failback to its preferred node as soon as the Cluster service detects that the failed node has been restored.
Server cluster networks, network interfaces, nodes, resource groups, and resources
Resource groups are logical collections of resources. Typically, a resource group is made up of logically related resources such as applications and their associated peripherals and data. A resource is any physical or logical component that can be brought online and taken offline, be managed in a server cluster, and be hosted (owned) by only one node at a time.
You should list the dependencies for each resource. The list should include all resources that support the core resource.
You should use the active/active model because it supports the maximum use of hardware by placing resource groups on separate nodes. When the cluster is fully operational, the cluster provides high availability and performance.
The network should be configured in a way similar to the configuration shown in the following illustration:
When you have more than one cluster, you can use network switches to separate incoming traffic. However, if you use network switches and you deploy two or more clusters, consider placing the clusters on individual switches so that incoming cluster traffic is handled separately.
In general, NLB can scale any application or service that uses TCP/IP as its network protocol and is associated with a specific TCP or UDP port. In addition, the application must be designed to allow multiple instances to run simultaneously, one on each cluster host. You shouldn’t use NLB to directly scale applications that independently update inter-client state data because updates made on one cluster host won’t be visible to other cluster hosts.
IIS, because it uses TCP/IP as its network protocol and uses Port 80. In addition, IIS allows multiple instances to run simultaneously on different hosts.
You shouldn’t run SQL Server and Exchange Server on the NLB cluster because these applications independently update inter-client state data. You should use the Cluster service to create clusters for these two applications.
Unicast mode is the default configuration for NLB and works with all routers. However, ordinary network communication among hosts isn’t possible, and network performance may be compromised.
Unicast mode is the default configuration for NLB and works with all routers. In addition, ordinary network communication among hosts is possible, and network performance may be enhanced. However, at least two network adapters are required.
Only one network adapter is required, and ordinary network communication among hosts is possible. However, this isn’t the default configuration, network performance may suffer, and some routers may not support the use of a multicast MAC address.
Performance may be enhanced, and ordinary network communication among hosts is possible. However, this isn’t the default configuration, at least two network adapters are required, and some routers may not support the use of a multicast MAC address.
Each host in the NLB cluster should be configured with multiple network adapters, and the cluster should run in unicast mode. This model is easier to configure because it’s the default mode, permits ordinary network communication among hosts, and works with all routers. The fact that at least two network adapters are required is not a problem because the hosts are part of a multitiered structure that requires at least two network adapters in each computer.
NLB scales the performance of a server-based program, such as a Web server, by distributing its client requests among multiple servers within the cluster. With NLB, each host receives each incoming IP packet but only the intended recipient accepts it. The cluster hosts concurrently respond to different client requests or to multiple requests from the same client. For example, a Web browser may obtain the various images within a single Web page from different hosts in a load-balanced cluster. This speeds up processing and shortens the response time to clients.
With Single affinity, NLB pins a client to a particular host without setting a timeout limit; this mapping is in effect until the cluster set changes. The trouble with Single affinity is that in a large site with multiple proxy servers a client can appear to come from different IP addresses. To address this issue, NLB also includes Class C affinity, which specifies that all clients within a given Class C address space will map to a given cluster host. However, Class C affinity doesn’t address situations in which proxy servers are placed across Class C address spaces. Currently the only solution is to handle it at the ASP level.
When its client affinity parameter setting is enabled, NLB directs all TCP connections from one client IP address to the same cluster host. This allows session state to be maintained in host memory. However, should a server or network failure occur during a client session, a new logon may be required to reauthenticate the client and reestablish session state.
What other decision must you make?
You must choose an NLB configuration model.
Which NLB configuration model should you use?
You should use a single network adapter in multicast mode. If the router doesn’t accept an ARP response from the cluster, you should add a static ARP entry to the router for each virtual IP address.
The diagram should look similar to the following illustration:
General/Web cluster, COM+ application cluster, and COM+ routing cluster
General/Web cluster
Your design should look similar to the one in the following illustration:
Requests will be routed to the hosts through the use of NLB, which will be configured on each computer.
COM+ applications would reside on the General/Web cluster.
Your design should look similar to the one in the following illustration:
You would configure NLB on the General/Web cluster and the COM+ routing cluster.
The primary role of the COM+ routing cluster is to route requests to a COM+ application cluster.
Your design should look similar to the one in the following illustration:
You should configure CLB on the COM+ routing cluster and configure NLB on the General/Web cluster and the COM+ routing cluster.
Calls over the network yield slower throughput than calls to software installed on the same computer. This is true in all software communication, whether it’s through Microsoft software or something else. For this reason, CLB isn’t an effective solution where throughput is absolutely critical. In this case it’s better to install the COM+ components locally on the Web-tier cluster members, thus avoiding cross-network calls. CLB support is lost, but load balancing is still available through NLB.
NLB in Application Center is carried out by NLB in Windows 2000 Advanced Server or Datacenter Server. Application Center provides an interface that’s integrated with NLB. The Application Center user interface serves to make load-balancing configurations for a cluster easier by removing much of the configuration detail and by reducing the number of user decision points.
You should consider using CLB in the following scenarios:
Single-node clusters, standard Web clusters, and COM+ applications clusters.
Which cluster type and load balancing configuration should you use?
You should use the General/Web cluster type and use NLB for your load-balancing configuration. The General/Web cluster type is used to host Web sites. NLB is recommended over other load-balancing methods because it’s inexpensive to implement and requires less administration.
Which type or types of Application Center clusters (General/Web, COM+ routing, or COM+ application) should you implement in this site?
You should implement a General/Web cluster but not a COM+ routing cluster or COM+ application cluster. Using a separate tier for the COM+ applications would degrade throughput, add administrative complexity, and make it more difficult to use the hardware fully.
You should use the following calculation:
.9247 × 3 × 400 = 1109.64
You should use the following calculation:
1109.64 ÷ 18.21 × 2 = 121.87
You should use the following calculation:
121.87 × 0.00139 = 0.1694 MC
You should use the following calculation:
0.003804 × 4.297 = 0.01635
You should use the following calculation:
0.003804 × 119.36 = 0.45405
You should use the following calculation:
0.01635 + 0.45405 = 0.4704 KBps
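The chain of calculations above can be reproduced in a few lines of Python; all figures are taken from the worked answer, and the units follow the text (megacycles, or MC, for CPU cost and KBps for network cost):

```python
cpu_mhz = 3 * 400                        # three 400-MHz processors

cpu_usage = 0.9247 * cpu_mhz             # 1109.64
cost_per_op = cpu_usage / 18.21 * 2      # ~121.87 MC per operation
cpu_cost_per_user = cost_per_op * 0.00139     # ~0.1694 MC

ops_per_user = 0.003804
sent_kbps = ops_per_user * 4.297         # ~0.01635 KBps sent
received_kbps = ops_per_user * 119.36    # ~0.45405 KBps received
net_cost_per_user = sent_kbps + received_kbps  # ~0.4704 KBps
```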
The network should support 6,000 concurrent users.
You must first calculate the CPU usage for the Default operation by using the following calculation:
.9615 × 3 × 400 = 1153.8
Once you’ve determined the CPU usage for the Default operation, you should calculate the cost for that operation by using the following calculation:
1153.8 ÷ 96.98 × 1 = 11.897
The cost for each operation is as follows:
Default: .9615 × 3 × 400 ÷ 96.98 × 1 = 11.897
Add Item: .9208 × 3 × 400 ÷ 26.21 × 3 = 126.474
Listing: .9342 × 3 × 400 ÷ 29.29 × 2 = 76.548
Lookup: .9899 × 3 × 400 ÷ 82.08 × 2 = 28.944
The cost per user for each operation is as follows:
Default: 11.897 × 0.00128 = 0.01523
Add Item: 126.474 × 0.00102 = 0.12900
Listing: 76.548 × 0.00329 = 0.25184
Lookup: 28.944 × 0.00121 = 0.03502
The total cost per user for CPU usage is as follows:
0.01523 + 0.12900 + 0.25184 + 0.03502 = 0.43109
The network cost of the Default operation is as follows:
(0.003682 × 1.845) + (0.003682 × 0) = 0.006793 KBps
The network costs of the operations are as follows:
Add Item: (0.000254 × 4.978) + (0.000254 × 127.756) = 0.033714 KBps
Listing: (0.000523 × 26.765) + (0.000523 × 24.123) = 0.026614 KBps
Lookup: (0.001134 × 25.678) + (0.001134 × 25.564) = 0.058108 KBps
The network costs per user are as follows:
0.006793 + 0.033714 + 0.026614 + 0.058108 = 0.125229 KBps
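The per-user CPU and network totals above can be checked in one pass; every figure below is taken from the worked answers:

```python
cpu_mhz = 3 * 400   # three 400-MHz processors

# name: (CPU utilization, throughput, weight, CPU per-user factor,
#        network per-user factor, KB sent, KB received)
ops = {
    "Default":  (0.9615, 96.98, 1, 0.00128, 0.003682, 1.845,  0.0),
    "Add Item": (0.9208, 26.21, 3, 0.00102, 0.000254, 4.978,  127.756),
    "Listing":  (0.9342, 29.29, 2, 0.00329, 0.000523, 26.765, 24.123),
    "Lookup":   (0.9899, 82.08, 2, 0.00121, 0.001134, 25.678, 25.564),
}

cpu_per_user = sum(util * cpu_mhz / rate * weight * cpu_factor
                   for util, rate, weight, cpu_factor, _, _, _ in ops.values())
net_per_user = sum(net_factor * (sent + received)
                   for _, _, _, _, net_factor, sent, received in ops.values())

# cpu_per_user is ~0.4311 MC and net_per_user ~0.12523 KBps; the text's
# 0.43109 and 0.125229 differ slightly because it rounds intermediate values.
```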
Each Web server is configured with three 400 MHz processors, giving each machine 1,200 MHz of processing power. However, the upper bound on each computer is 755 MHz.
The total cost per user for CPU usage is 0.43109 MC.
The CPUs in each Web server can support the following number of users:
755 ÷ 0.43109 = 1,751 users
The Web cluster should contain the following number of servers:
6,000 ÷ 1,751 = 4 servers
The network is a 100-Mbps (12.5 MBps) Ethernet network. Normally, you should not push network utilization over 36 percent, which is 4.5 MBps.
The network will support the following number of concurrent users:
4500 KBps ÷ 0.125229 KBps = 35,934 users
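The sizing arithmetic above, sketched in Python with the figures from the text:

```python
import math

cpu_budget_mhz = 755          # usable processing power per Web server
cpu_cost_per_user = 0.43109   # MC per user, from the earlier answer
users_per_server = int(cpu_budget_mhz / cpu_cost_per_user)   # 1,751 users

servers_needed = math.ceil(6000 / users_per_server)          # 4 servers

net_budget_kbps = 12.5 * 1000 * 0.36   # 36% of 12.5 MBps = 4,500 KBps
net_cost_per_user = 0.125229           # KBps per user
network_users = int(net_budget_kbps / net_cost_per_user)     # 35,934 users
```

Because the network supports far more concurrent users than the CPUs do, the processors, not the network, are the limiting factor in this design.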
The maximum transmission rate is as follows:
1,536,000 ÷ 55,360 = 27.7 pages per second
For the 28.8 Kbps modem, it will take the following amount of time to download the 90-KB page:
720 kilobits ÷ 28.8 Kbps = about 25 seconds
For the 56 Kbps modem, it will take the following amount of time to download the 90-KB page:
720 kilobits ÷ 56 Kbps = about 13 seconds
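The transmission-rate and download-time answers above can be verified directly; the page sizes are those given in the text:

```python
t1_bps = 1_536_000    # usable T1 bandwidth, bits per second
page_bits = 55_360    # page size in bits, from the answer above
pages_per_second = t1_bps / page_bits   # ~27.7 pages per second

page_kilobits = 90 * 8   # a 90-KB page is 720 kilobits
t_28_8 = page_kilobits / 28.8   # 25 seconds on a 28.8 Kbps modem
t_56 = page_kilobits / 56       # ~13 seconds on a 56 Kbps modem
```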
The disk cost for the Add Item operation is as follows:
4.395 × 0.012345 = 0.054256 KBps
You should determine your hardware needs and your network bandwidth. You should also plan the site topology to take into consideration the capacity requirements. In addition, you should find potential bottlenecks and plan for future upgrades to the site.
You should create a site for each LAN or set of LANs connected by high-speed links, any perimeter networks separated from other network segments by firewalls, and any location reachable only by SMTP.
You should create two sites: one for the perimeter network and one for the private corporate network.
The private corporate network can be all one site because it’s one LAN that has fast and reliable connections. However, the perimeter network should be a separate site because it’s connected to the corporate network through a firewall. A separate site for the perimeter network allows you to limit client authentication to domain controllers within that site, assuming the domain controllers are fault tolerant.
You should create a site for each location that’s connected by a WAN link because WAN links are traditionally slower and less reliable. Generally, a site shouldn’t span across a WAN connection.
You should place at least one domain controller in each site and two domain controllers in the domain. Place additional domain controllers in a site when a large number of clients access the site; when intersite connections are relatively slow, unreliable, or near capacity; or when clients should be authenticated at a specific set of domain controllers.
You should place at least one domain controller in the perimeter network site and one in the private network site.
You should place at least two domain controllers in each site to provide fault tolerance for the Active Directory services. That way authentication requests never have to pass through the firewall.
You should configure your site links according to available bandwidth, network usage patterns, and type of transport—and if appropriate, configure additional site links to provide redundant replication paths.
You need to configure only one site link to connect the two sites.
You must provide the replication schedule, replication interval, replication transport, and link cost.
You should configure the replication schedule to permit replication at all times on all days of the week, because you want replication to occur every day at regular intervals throughout the day. You should configure the replication interval at two hours, which would equal 12 times a day. You should configure the transport type as IP, which is implied by the nature of the network and the connection through the firewall. Because you need to configure only one site link, you don’t have to be concerned with configuring the link cost. Link cost is the relative bandwidth of the connection as compared to other site links.
You should locate at least one global catalog in each site. Place additional global catalog servers in a site when a large number of clients access the site or when intersite connections are relatively slow, unreliable, or near capacity.
You should configure all four domain controllers as global catalog servers. This provides fault tolerance within each site should a domain controller fail. If one does fail, the authentication process won’t have to look outside the site (and through the firewall) for a copy of the global catalog. In addition, by configuring all domain controllers as global catalog servers, you don’t have to be concerned about locating the infrastructure master on a domain controller that doesn’t host the global catalog.
You should provide a standby operations master. In large domains, place the relative identifier master and PDC emulator on separate domain controllers. Don’t assign the infrastructure master role to a domain controller that’s hosting the global catalog unless all domain controllers in the domain are global catalog servers.
You should locate the operations masters in the private network. Make one domain controller the operations master and make the other domain controller a standby operations master. You don’t have to be concerned about assigning the infrastructure master role to a domain controller that isn’t hosting the global catalog because all domain controllers in the domain will be hosting the global catalog.
Active Directory objects represent the physical entities that make up a network. For example, users, printers, and computers are Active Directory objects. The Active Directory schema defines the types of objects and the types of information about those objects that can be stored in the directory. There are two types of definitions in the schema: attributes and classes.
The logical structure is made up of domains, trees, forests, and OUs. The physical structure is made up of sites and domain controllers.
The five roles are schema master, domain naming master, relative ID master, PDC emulator, and infrastructure master.
You should create at least four sites: one for each LAN and one for the perimeter network.
Replication won’t occur between the two sites during business hours.
You should perform a single deployment from the staging computer to the Web cluster controller.
After the application has been deployed to the controller, it should be replicated from the controller to member servers. You don’t have to replicate the content manually if Application Center is configured for automatic synchronization. The replication will be automatic.
Web services shouldn’t be affected because no ISAPI filters or COM+ components are being deployed. If they were, you’d have to reset the services.
You should set up the clusters in a way similar to that shown in the following illustration:
Your application can use distributed partitioned views or data-dependent routing.
You can create a four-node multiple-instance cluster that uses an N+1 topology. In this configuration, three of the servers contain an active instance of SQL Server, one for each partition, and the fourth node remains in standby mode and is configured as the primary failover computer.
Your design should look similar to the one in the following illustration:
Your design should support the presentation layer and the business logic layer.
You can perform a single deployment from the stager to the Web cluster controller. From there, Application Center will replicate the content automatically to the other member servers.
Your design should look similar to the one in the following illustration:
To deploy the applications, you should take the following steps:
You must ensure that a guest account has been created that corresponds to the IUSR_computername account and that permissions have been granted to that account to allow it to log on to the SQL Server computers.
Change the threading model to Both, limit the connection time-out, close connections, share active connections, and increase the size of the record cache.
Your design should look similar to the one in the following illustration:
A multiserver environment has the following benefits:
Outlook Web Access supports two authentication methods: Basic and Integrated Windows. Outlook Web Access also supports SSL encryption and Anonymous access.
SSL (with Basic authentication) should be used. It provides the highest level of security and operability between clients and server because the entire communications session is encrypted.
First deploy the COM+ components on the COM+ application cluster and then deploy the rest of the application to the Web cluster.
You can use a distributed partition view or data-dependent routing to access the partitioned data.
You can use failover clustering and log shipping in conjunction with partitioning to provide a high-availability solution.
You should use the multiserver model because it supports a unified namespace and back-end isolation. This configuration also allows you to isolate processing tasks such as SSL encryption and decryption.
You should use SSL along with Basic authentication to provide the maximum security.
The IIS Read permission hasn’t been granted to the site. Although the NTFS Read permission has been granted to the IUSR_computername account, the most restrictive permissions apply to the directory, which, in this case, are the IIS permissions.
You should remove the IUSR_computername account from the DACL for the directory and add the appropriate users or groups to the DACL.
IIS verifies that the IP address, network, and domain name aren’t denied access. IIS then authenticates the user and, assuming the user is authenticated, authorizes the user. If a custom authentication application has been implemented, that application then authenticates the user. Finally, access is checked against the NTFS permissions set for that directory.
IIS supports five authentication models: Anonymous, Basic, Integrated Windows, Digest, and client certificate mapping.
You should use Basic authentication because it’s compatible with most Web browsers.
User credentials aren’t secure because they aren’t encrypted.
You can use the following methods to encrypt the data:
You should use SSL to secure the data. You can’t use IPSec because not all browsers support it, and you can’t use EFS to protect the data that’s being transferred between the clients and the Web servers. However, you can use IPSec on the back end of your network to protect data transmitted within your private network, and you can use EFS to encrypt data where it’s stored on a drive.
The access process will involve the following steps:
Users should be granted the Read permission on the home directory and the Scripts Only execute permission.
In the related directories, remove all unnecessary users and groups, keeping only the required administrative users and groups. Grant these users the Full Control permission. Add the Customers group, and grant that group the Read & Execute permission.
There are two basic perimeter network topologies:
You should use the back-to-back configuration.
You should use three firewalls—one in front of the Web servers, one between the Web servers and the data servers, and one between the perimeter network and the private network, as shown in the following illustration:
You can use Anonymous access for your customers because they aren’t required to provide credentials in order to log on to the system.
You need to grant the Read & Execute permission to the IUSR_computername account.
IPSec requires that both ends of the communication link be configured with Windows 2000.
You should use the single-firewall solution, in which one firewall is configured with three NICs: one connected to the private network, one to the Internet, and one to the perimeter network.
You can create a counter log in Performance Logs and Alerts to collect the data. You can then use System Monitor to view that data.
You can create an alert in Performance Logs and Alerts. You can configure the alert with the Processor\% Processor Time counter and set the alert value to be over 80. When you configure the alert, you should configure the action so that you’re notified when the threshold is reached.
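The alert condition itself is simple: each sampled counter value is compared against the configured limit, and the action fires when the value is over it. The following Python sketch illustrates that logic under stated assumptions; the function name and notification mechanism are hypothetical, and in practice Performance Logs and Alerts does this for you.

```python
def check_alert(samples, counter="Processor\\% Processor Time", threshold=80.0):
    """Return an alert message for each sample that exceeds the threshold.

    A stand-in for the 'over 80' alert condition: compare each sampled
    counter value against the limit and report the ones that cross it.
    """
    alerts = []
    for value in samples:
        if value > threshold:
            alerts.append(f"{counter} = {value} exceeded {threshold}")
    return alerts

# Two of these three samples are over 80 percent:
print(check_alert([42.0, 81.5, 95.2]))
```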
You should consider monitoring the following counters: System\Processor Queue Length, Processor\% Privileged Time, Processor\% User Time, and Process\% Processor Time.
You should use IIS logging to audit the Web site, and you should save your files in the W3C Extended log file format because this format allows you to specify which fields to include in your logs.
You can use a text editor such as Notepad to view the logs.
Before one of these events can be recorded in the Security log, you must configure the Audit Object Access policy in Group Policy. You should configure the policy to log successful attempts and failed attempts.
The most likely cause for failed events not appearing in the Security log is that either the directory properties weren’t configured to record failed events or the audit policy wasn’t configured to record failed events.
You should monitor available memory, paging, file system cache, paging file size, and memory pool size.
Inadequate memory can make other parts of your system appear to be the source of a problem. For example, what might look on the surface like poor disk or processor performance can in fact be the result of a memory problem. You should rule out memory shortages before investigating other components.
Data about processor activity should include processor queue length and processor time percentages. Data about IIS connections should include the Web service and FTP service. Data about IIS threads should include thread count, processor time, and context switches.
Transmission rate data should include bytes sent and received by the Web service, FTP service, and SMTP service. You should also collect sent and received data about TCP segments, IP datagrams, and the network interface. TCP connection data should include information about established, failed, and reset connections.
You should monitor ASP requests and Web service GET and POST requests.
A long, sustained queue length indicates that a processor can’t handle the load assigned to it. As a result, threads are being kept waiting. A sustained queue length of two or more threads can indicate a processor bottleneck.
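The key word is *sustained*: a momentary spike in System\Processor Queue Length is normal, while a queue that stays at two or more suggests a bottleneck. The following Python sketch illustrates one way to apply that rule to a series of samples; the function name and the five-sample window are illustrative assumptions, not a documented formula.

```python
def processor_bottleneck(queue_lengths, threshold=2, min_sustained=5):
    """Flag a possible processor bottleneck.

    Returns True when the queue length stays at or above the threshold
    for min_sustained consecutive samples, i.e. the queue is long and
    sustained rather than a momentary spike.
    """
    run = 0
    for length in queue_lengths:
        run = run + 1 if length >= threshold else 0
        if run >= min_sustained:
            return True
    return False

# A brief spike is not a bottleneck; a sustained queue is:
print(processor_bottleneck([0, 3, 1, 0, 2, 0]))     # False
print(processor_bottleneck([2, 3, 4, 3, 2, 3, 2]))  # True
```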
To configure audit policies, you must configure your Group Policy settings in the Group Policy snap-in. For each policy, you can configure successful attempts, failed attempts, or both.
You should configure the Audit Object Access policy to audit successful attempts and failed attempts.
You must set up auditing in the properties of the Inetpub\Scripts directory. To access the auditing properties, click Advanced on the Security tab of the Scripts Properties dialog box. You can configure auditing on the Auditing tab of the Access Control Settings For Scripts dialog box.
You use Event Viewer to view the Security log.
You can set up logging through the Internet Information Services tool. Open the properties for the specific site, and enable logging. You should use W3C Extended format for your log because this format allows you to specify which fields to log.
You can use a text editor such as Notepad to view data in a W3C Extended format.
IIS logging supports the following log file formats: Microsoft IIS, NCSA Common, ODBC Logging, and W3C Extended.
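Because W3C Extended logs are self-describing (a `#Fields:` directive names the space-separated columns that follow), they are straightforward to process with a script. The Python sketch below is a minimal parser written for illustration; the sample log lines are invented, though their shape follows the W3C Extended format.

```python
def parse_w3c_log(lines):
    """Parse W3C Extended log lines into dictionaries.

    The '#Fields:' directive names the fields recorded in each entry,
    which is why this format lets you choose which fields to include.
    """
    fields, records = [], []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith("#Fields:"):
            fields = line[len("#Fields:"):].split()
        elif line.startswith("#") or not line:
            continue  # other directives (#Software, #Date, ...) and blanks
        else:
            records.append(dict(zip(fields, line.split())))
    return records

sample = [
    "#Software: Microsoft Internet Information Services 5.0",
    "#Fields: date time c-ip cs-method cs-uri-stem sc-status",
    "2001-06-14 08:15:02 192.168.1.25 GET /scripts/order.asp 200",
]
print(parse_w3c_log(sample))
```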
You disable logging on the Images directory through the properties for that directory (on the Directory tab of the Images Properties dialog box), which you access through the Internet Information Services tool.
You can use the Performance tool in Windows 2000 Server to establish a baseline and then monitor performance on an ongoing basis.
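A baseline is simply the typical range of counter values observed under normal load; ongoing measurements are then judged against it. The Python sketch below illustrates that idea under stated assumptions; the function names and the 25 percent tolerance are hypothetical choices, not values from the Performance tool.

```python
from statistics import mean

def baseline(samples):
    """Summarize counter samples taken under normal load."""
    return {"min": min(samples), "max": max(samples), "avg": mean(samples)}

def deviates(value, base, tolerance=0.25):
    """True if a new reading is more than tolerance above the baseline average."""
    return value > base["avg"] * (1 + tolerance)

cpu = baseline([35.0, 40.0, 45.0])  # average is 40.0 percent
print(deviates(80.0, cpu))  # True: well above the normal range
print(deviates(42.0, cpu))  # False: within the normal range
```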
You don’t have enough physical memory on your server.
Configure the applicable audit policies in Group Policy to log only failed attempts.
You should use the ODBC Logging format.
You should back up your data to ensure against the loss of any critical system state data, files, or other data important to your system. Your backup strategy should include regularly scheduled backup jobs so that the data is as current as reasonably possible if you should need to restore that data.
You should try to simulate heavy network loads, heavy disk I/O, heavy use of file and application servers, and large numbers of users simultaneously logged on.
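Simulating large numbers of simultaneous users usually comes down to running many workers concurrently against the operation you want to stress. The Python sketch below is one minimal, hypothetical way to structure such a test; the function names are assumptions, and `handle_request` stands in for whatever operation you exercise (an HTTP GET against a test server, a file copy, a logon).

```python
import queue
import threading

def simulate_users(num_users, requests_per_user, handle_request):
    """Simulate many users issuing requests concurrently.

    Each simulated user runs in its own thread; results are collected
    in a thread-safe queue and the completion count is returned.
    """
    results = queue.Queue()

    def user(user_id):
        for i in range(requests_per_user):
            results.put(handle_request(user_id, i))

    threads = [threading.Thread(target=user, args=(u,)) for u in range(num_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results.qsize()

# With a trivial stand-in request, 10 users x 5 requests = 50 completions:
print(simulate_users(10, 5, lambda u, i: (u, i)))  # 50
```

In a real test you would replace the stand-in with actual requests and watch the counters discussed above while the load runs.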