Planning the Terminal Server Hardware Platform


Prior to purchasing the servers and installing the operating system, the first question that must be answered is how many servers are needed. How many and what size servers are required for a server-based computing infrastructure has long been debated, with more disagreement than agreement. The disagreement stems from the fact that both applications and end users vary greatly from organization to organization in terms of required resources, how they use those resources, how often they use them, and how well behaved they are (that is, whether the applications or users have memory leaks, large bandwidth requirements, crashing problems, or other difficulties). Chapter 10 detailed how to build a pilot environment and the need for testing prior to implementation. Testing is essential for determining whether applications will perform acceptably in an SBC environment and whether they will scale to multiple users. The problem with a simple test in a small pilot environment, however, is that most applications, networks, and servers do not scale linearly indefinitely (due to the large number of variables just listed), so simply extrapolating from a small test environment is unreliable. Thus, if an organization plans to scale an SBC to the enterprise with 500 users or more, we strongly recommend simulating a larger number of users. A good test plan is to build a test environment that simulates at least 10 percent of the eventual expected number of concurrent users in order to understand and estimate how many servers are needed. Using our case study, Clinical Medical Equipment Corporation (CME), introduced in Chapter 10, with 3000 total users and 2500 concurrent users, we will need to test 250 concurrent users. Prior to detailing how to perform large-scale simulations, a discussion of server hardware and operating system installation is worthwhile.
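To make the pilot-sizing arithmetic concrete, the following minimal Python sketch applies the 10 percent rule described above to the CME numbers; the helper function and its name are our own illustration, not part of any product.

    # Hypothetical helper illustrating the 10 percent pilot-sizing rule.
    def pilot_size(concurrent_users, pilot_fraction=0.10):
        """Return the number of concurrent users to simulate in the pilot."""
        return int(round(concurrent_users * pilot_fraction))

    # CME Corp. from the case study: 3000 total users, 2500 concurrent.
    print(pilot_size(2500))   # -> 250 concurrent users to simulate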

The following section discusses server hardware in more depth and provides some examples of what size servers to start with and what server components to consider.

Server Hardware

Since an SBC environment puts all users at the mercy of the servers, server hardware has always been a major point of discussion. Four years ago, there was a dramatic difference in performance, reliability, and support between "white box" servers and servers sold by the top three server vendors (then HP, Compaq, and IBM). Since then, major industry changes, including dramatic hardware cost reductions, Intel hardware standardization, and the globalization of third-party support, have come together to level the playing field among server hardware manufacturers. Today, Dell consistently competes head-to-head with HP (which now incorporates Compaq) and IBM, and we have found that many white-box vendors produce reliable hardware with almost identical components to HP, Dell, and IBM. Although the risk of derailing an SBC project by choosing the wrong server hardware platform is now lower than it was four years ago, we still highly recommend choosing a provider that is Windows Server 2003 certified by Microsoft and that has proven priority onsite, 24/7 support.

Note

It is important to keep each server in the farm as similar as possible, because variations in hardware can lead to the need for additional images or scripting work. Thus, when purchasing servers, buy sufficient quantities to account for items like future growth (plan at least one year out), redundancy requirements, and test systems.

Central Processing Units

The number of processors, the amount of memory, and I/O speed all influence the number of users that can run applications on a server. Since enterprises will be running servers in a load-balanced farm, the number of users per server must be balanced against the number of servers in the farm. Additionally, because of the shared DLL environment of Windows 2000 and 2003, some applications may have conflicts with other applications, memory leaks, or other programmatic deficiencies, thus requiring additional servers to house separate applications. Consequently, a larger number of smaller servers provides more fault tolerance and application flexibility (and if one crashes, fewer users are affected), while a smaller number of highly scalable servers (say, 4-, 16-, or 32-processor servers) is simpler to manage and requires less space, cooling (HVAC), and power in the data center. For all but the largest environments, we have found a good compromise, based on both cost and functionality, to be two-processor servers with 2–4GB of RAM in a 2U rack-based form factor running Windows Server 2003, Standard Edition. A two-processor (Pentium 4 Xeon or better) server will provide excellent performance for 20–60 users, depending on the application suite (see the server-sizing discussion that follows). For larger enterprises (2500 users or more), blade servers or highly scalable servers should be considered in order to minimize data center requirements and daily management activities. Additionally, as 16–32 processor servers become more commonplace, the use of VMware or Microsoft Connectix Virtual Server products to virtualize 4–16 Windows Server 2003 servers on one hardware machine may become economically advantageous. The determining factor will be how HP, IBM, and other high-end server manufacturers price these highly scalable hardware platforms. Regarding our case study, CME Corp., the decision is not obvious, as the 2500 concurrent user count can go either way, depending on how many users we can fit on a server, available data center space, and the current cost of the hardware options. Since the most significant variable is the number of users per server, the discussion later in this chapter on how to perform a more precise server-sizing test is critical.
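As a rough illustration of how the users-per-server figure drives farm size, here is a minimal Python sketch; the 40-users-per-server value and the single spare server are our own assumptions for illustration, and the real figure should come from the server-sizing test described later in this chapter.

    import math

    # Hypothetical farm-sizing sketch; users_per_server should come from testing.
    def servers_needed(concurrent_users, users_per_server, spares=1):
        """Minimum farm size, plus spare capacity for redundancy."""
        return math.ceil(concurrent_users / users_per_server) + spares

    # CME Corp.: 2500 concurrent users, assuming 40 users per two-processor server.
    print(servers_needed(2500, 40))   # -> 64 servers (63 + 1 spare)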

Memory

Nearly all servers today come with ECC (Error-Correcting Code) memory, and most have a maximum capacity of 4–6GB in a basic configuration. Windows Server 2003, Standard Edition accepts 4GB of memory, and the 32-bit Enterprise and Datacenter Editions support 32GB and 64GB, respectively. As stated earlier, we recommend highly scalable servers (four processors or more, 32GB or more of memory) only for SBC environments with over 2500 users, and then only in conjunction with a virtualization product (for example, VMware or Microsoft Connectix Virtual Server), since virtualization reduces the risk of having hundreds of users impacted by a single blue-screen or fatal software error.
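Memory is often the first resource exhausted on a terminal server, so a quick RAM-bound estimate is useful alongside the processor numbers. The sketch below is illustrative only: the base OS overhead and per-session footprint are assumptions rather than figures from the text, and should be replaced with values measured during the pilot.

    # Hypothetical RAM-bound session estimate; measure real per-session usage in the pilot.
    def sessions_per_server(ram_mb, os_overhead_mb=512, mb_per_session=40):
        """Rough upper bound on concurrent sessions the server's RAM will support."""
        return (ram_mb - os_overhead_mb) // mb_per_session

    # A 4GB Windows Server 2003 Standard Edition server with the assumed 40MB sessions.
    print(sessions_per_server(4096))   # -> 89 sessions (RAM-bound estimate only)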

Network Interface Cards

Most servers today come with Gigabit networking built in, and in most cases dual Gigabit networking. If a network card needs to be added to a server, we recommend using only "server"-class NICs, that is, those with their own processor that can offload the job of handling network traffic from the CPU. We also recommend using two NICs in a teaming configuration to provide additional bandwidth to the server as well as redundancy (if one network card fails, the server stays online on the remaining card).
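To put the teaming recommendation in context, the sketch below estimates the steady-state session traffic one server generates; the 20 Kbps-per-session figure is a rule-of-thumb assumption for thin-client traffic, not a number from the text, and it suggests that teaming on these servers is usually about redundancy rather than raw bandwidth.

    # Hypothetical bandwidth estimate; per-session usage varies with the application mix.
    def server_session_kbps(sessions, kbps_per_session=20):
        """Aggregate steady-state session bandwidth the server's NICs must carry."""
        return sessions * kbps_per_session

    # 60 sessions is a tiny fraction of even one 100Mbit link, let alone teamed Gigabit.
    print(server_session_kbps(60))   # -> 1200 Kbps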

Note

Most 10/100 NICs can autonegotiate between speeds and between full- and half-duplex settings. We have experienced significant problems with autonegotiation in production, especially when mixing NICs and network backbone equipment from different vendors. Thus, we strongly recommend manually setting the cards to 100Mbit full-duplex (assuming that the server NICs plug into a 100Mbit switch) and standardizing this setting across all equipment. See Chapter 6 for a more detailed discussion of network design and requirements.

Server Hard Drives

The hard drive system plays a different role in a terminal server than it does in a standard file server. In general, no user data is stored or written on a terminal server, and a server image will be available for rebuilds, so the main goals when designing and building the hard drive system for a terminal server are read speed and uptime. We have found hardware RAID 5 to be a cost-effective approach to gaining both read speed and uptime (if any one of the drives fails, the server remains up). RAID 5 requires a minimum of three drives and a hardware RAID controller. We recommend the use of the smallest, fastest drives available (18GB, 15K RPM at the time of this writing).
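For planning drive purchases, the usable capacity of a RAID 5 set is one drive's worth less than the raw total, because parity is distributed across the set. A minimal sketch of that arithmetic, using the three-drive minimum and the 18GB drives mentioned above:

    # RAID 5 usable capacity: one drive's worth of space is consumed by parity.
    def raid5_usable_gb(drive_count, drive_gb):
        if drive_count < 3:
            raise ValueError("RAID 5 requires at least three drives")
        return (drive_count - 1) * drive_gb

    # Three 18GB, 15K RPM drives, as recommended above.
    print(raid5_usable_gb(3, 18))   # -> 36 GB usable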

Another option that is becoming affordably priced and offers even greater speed and reliability is a solid state drive system. Because solid state drives have no moving parts, they can be up to 800 percent faster and dramatically more reliable than EIDE or SCSI drives. We suspect that as vendors increase reliability and the cost of solid state systems decreases, they will become commonplace in SBC environments.

Other Hardware Factors

The following are related recommendations for an SBC hardware environment:

  • Power supplies Server power supplies should be redundant and fail over automatically if the primary unit fails.

  • Racking All server farms should be racked for safety, scalability, and ease of access. Never put servers on unsecured shelves within a rack.

  • Cable management Clearly label or color code (or both) all cables traveling between servers and the network patch panel or network backbone. It will save a tremendous amount of time later when troubleshooting a connection.

  • Multiconsole Use a multiconsole switch instead of installing a monitor and keyboard for each server. It saves space, power, and cooling (HVAC) load.



