PLANNING THE TERMINAL SERVER HARDWARE PLATFORM

Prior to purchasing the servers and installing the operating system, the first question that must be answered is how many servers are needed. The art of determining how many and what size servers are required for a Terminal Server/Presentation Server infrastructure has long been debated, with more disagreement than agreement. The disagreement stems from the fact that both applications and end users vary greatly from organization to organization in terms of required resources, how they use those resources, how often they use them, and how well-behaved they are (that is, whether the applications or users have memory leaks, large bandwidth requirements, crashing problems, or other difficulties). Chapter 10 detailed how to build a pilot environment and the need for testing prior to implementation. Testing is essential for establishing whether applications will perform in a Terminal Services/Presentation Server environment and whether they will scale to multiple users. The problem with a simple test in a small pilot environment, however, is that most applications, networks, and servers do not scale linearly indefinitely (due to the large number of variables just listed), making it unreliable to simply extrapolate from a small testing environment. Thus, if an organization plans to scale a Terminal Server/Presentation Server environment to the enterprise with 500 users or more, we strongly recommend simulating a larger number of users. A good test plan is to build a test environment that simulates at least 10 percent of the eventual expected number of concurrent users in order to understand and estimate how many servers are needed. For example, an organization with 3,000 total users and 2,500 concurrent users should test with 250 concurrent users. Prior to detailing how to perform large-scale simulations, a discussion of server hardware and operating system installation is worthwhile.
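
As a rough illustration, the pilot-sizing arithmetic above can be sketched in a few lines of Python; the figures are simply the example numbers from the preceding paragraph (3,000 total users, 2,500 concurrent) and should be replaced with your own organization's data:

total_users = 3000
concurrent_users = 2500        # peak simultaneous sessions expected
pilot_fraction = 0.10          # simulate at least 10% of concurrent users

test_users = int(concurrent_users * pilot_fraction)
print(f"Concurrent users to simulate in the pilot: {test_users}")   # prints 250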

The following section discusses server hardware in more depth and provides some examples of what size servers to start with and what server components to consider.

Server Hardware

Since a Terminal Server/Presentation Server environment puts all users at the mercy of the servers, server hardware has always been a major point of discussion. Although server hardware has become commoditized, advances in dual-core processors, 64-bit processors, and 64-bit Windows and Citrix products, along with the need for localized hardware support (any English speaker who has called an Indian call center for support can speak to the need for localized call centers, as can an Indian IT worker who has ever spoken to a support person in a call center in Tennessee), provide differentiators that should be considered.

Note 

It is important to keep all the servers in the farm as similar as possible, because variations in hardware can lead to the need for additional images or scripting work. Thus, when purchasing servers, buy sufficient quantities to account for items like future growth (plan at least one year out), redundancy requirements, and test systems.

Central Processing Units

The number of processors in a server, in conjunction with the amount of memory, will most influence the number of users that can run applications on that server. Since enterprises will be running servers in a load-balanced farm, the number of users per server must be balanced against the number of servers in the farm. Prior to application and server isolation technologies, the shared DLL environment of Windows Server 2003 meant that some applications would inevitably conflict with other applications or else exhibit memory leaks or other programmatic deficiencies; thus, additional servers would be needed to house separate applications. Consequently, a greater number of lower-scale servers provided more fault tolerance and application flexibility. This has all changed with Citrix's application isolation technology introduced in Presentation Server 4, and with VMware's more recent support of Terminal Services. These technologies mean that enterprises are no longer constrained by applications and are free to make server-sizing decisions based entirely on economics. Historically, a smaller number of highly scalable servers (say, 4-, 16-, or 32-processor servers) was far more expensive than a larger number of simple, two-processor servers, even when data center space, HVAC, and power were taken into consideration. The recent introduction of AMD and Intel dual-core processors has changed the equation, however. Both AMD and Intel have introduced very aggressively priced 64-bit, dual-core, dual-processor servers (for a total of four usable processor cores) with ample memory capacity to scale very efficiently. Industry benchmarks, as well as early tests from Citrix eLabs, indicate that the efficiency of dual-core processors, combined with efficient memory paths and 64-bit architecture, is yielding 100-400 percent improvements in scalability. Our early tests show nearly three times the number of users per server on a dual-core, dual-processor server versus a standard 32-bit quad-processor server. It is important to note, though, that even if you aren't considering brand-new dual-core processor infrastructure, any newer server will far outrun three-year-old technology because of the major hardware advances of the last three years, including gigabit network cards, faster drive arrays and hard drives, and faster memory and processors.
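
To make the users-per-server trade-off concrete, the following hypothetical Python sketch estimates farm size from a measured per-server density. The users_per_server value and the spare count are assumptions that must come from your own pilot testing on the actual hardware, not from any published figure:

import math

concurrent_users = 2500
users_per_server = 75        # assumed density, measured during the pilot
redundancy_spares = 2        # extra servers for failures and maintenance

base_servers = math.ceil(concurrent_users / users_per_server)
total_servers = base_servers + redundancy_spares
print(f"Estimated farm size: {base_servers} servers plus {redundancy_spares} spares = {total_servers}")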

Memory

Nearly all servers today come with ECC (error-correcting code) memory, and most have a maximum capacity of 8-16GB in a basic configuration. Windows Server 2003 Standard Edition supports 4GB of memory, and the 32-bit Enterprise and Datacenter Editions support 32GB and 64GB, respectively.
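
Memory capacity can be sanity-checked against expected session load with simple arithmetic. The sketch below is purely illustrative; the per-session working set and overhead figures are assumptions and should be replaced with numbers observed during pilot testing:

installed_ram_mb = 8 * 1024       # example: 8GB server
os_and_services_mb = 1024         # assumed OS and service overhead
per_session_mb = 64               # assumed average working set per session

sessions_by_ram = (installed_ram_mb - os_and_services_mb) // per_session_mb
print(f"Sessions supported by memory alone: {sessions_by_ram}")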

Network Interface Cards

Most servers today come with Gbit networking built in, and in most cases dual Gbit networking. If a network card needs to be added to a server, we recommend using only the "server" type, that is, network interface cards (NICs) that have their own processor and can offload the job of handling network traffic from the CPU. We also recommend using two NICs in a teaming configuration to provide additional bandwidth to the server as well as redundancy (if one network card fails, the server remains live, since it can run off the remaining card).

Note 

Most NICs can autonegotiate between speeds (10/100/1000 Mbit) and full- and half-duplex settings. We have experienced significant problems with autonegotiation in production, especially when mixing NICs and network backbone equipment from different vendors. Thus, we strongly recommend hard-setting the cards to 100 or 1000 Mbit full-duplex (assuming the server NICs plug into a 100 or 1000 Mbit switch) and standardizing that setting on all equipment. See Chapter 6 for a more detailed discussion of network design and requirements.

Server Hard Drives

The hard drive system plays a different role in terminal servers than it does in standard file servers. In general, no user data is stored or written on a terminal server, and a server image will be available for rebuild, so the main goals when designing and building the hard drive system for a terminal server are read speed and uptime. We have found hardware RAID 5 to be a cost-effective approach to gaining both read speed and uptime (if any one drive fails, the server remains up). RAID 5 requires a minimum of three drives and a hardware RAID controller. We recommend the smallest, fastest drives available (36GB, 15K RPM at the time of this writing).
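
For reference, RAID 5 usable capacity is the total capacity of all member drives minus one drive's worth, which is consumed by distributed parity. A quick calculation with the drive size recommended above (the drive count is an assumption; three is simply the RAID 5 minimum):

drive_count = 3            # RAID 5 minimum
drive_size_gb = 36         # small, fast 15K RPM drives as recommended above

usable_gb = (drive_count - 1) * drive_size_gb
print(f"Usable capacity: {usable_gb}GB from {drive_count} x {drive_size_gb}GB drives")   # 72GB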

Another option that is becoming affordably priced and offers even greater speed and reliability is a solid-state drive system. Because solid-state drives have no moving parts, they can be up to 800 percent faster and dramatically more reliable than EIDE or SCSI drives. We suspect that as vendors increase reliability and the cost of solid-state systems decreases, they will become commonplace in most data center environments.

Other Hardware Factors

The following are related recommendations for a Citrix hardware environment:

  • Power supplies: Server power supplies should be redundant and fail over automatically if the primary unit fails.

  • Racking: All server farms should be racked for safety, scalability, and ease of access. Never put servers on unsecured shelves within a rack.

  • Cable management: Clearly label or color-code (or both) all cables traveling between servers and the network patch panel or network backbone. We cannot emphasize this enough; it will save a tremendous amount of time later when troubleshooting a connection.

  • Multiconsole: Use a multiconsole keyboard-video-mouse (KVM) switch instead of installing a monitor and keyboard for each server. It saves space, power, and HVAC. IP KVMs have recently become more available and cost-effective.


