Rack Features and Components


Server racks are custom designed, and you need to give some thought to the features your racks must have to support the components they contain. Among the special considerations are the components that dissipate heat, fans, HVAC systems, and rack placement. You also need to consider how to mount systems in the racks and, if a server wasn't made specifically to be rack mounted, how to convert it so that it fits. You will learn about rails, mounts, and conversion kits in the sections that follow.

Fans and Airflow

When you think of a rack of servers, you should think in terms of each processor being the equivalent of a 100-watt light bulb. Not only do rack-mounted servers throw off a lot of heat, but they consume a significant amount of power. Mechanical devices such as hard drives add more heat, as do power supplies and other components. A dense server rack can give off more heat per square foot than an oven, which means that anyone contemplating this kind of deployment must pay particular attention to fans, airflow, and the cooling plant used. Whereas a standard server rack might give off something like 2kW (kilowatts) to 4kW, a dense server rack could give off 15kW to 20kW.

The amount of air that a server rack can draw is often specified by a manufacturer. An alternate specification calls for maintaining a specific temperature inside the rack, as measured by a temperature probe. In hot server rooms, it can be difficult to maintain temperature, so it is important to pay attention to the components you use in server racks, how the racks are vented, and how much forced cooling is required.

You often have control over the number of fans and their placement in a server rack, over whether the enclosure is sealed or missing a wall, and over other factors. Many server rack designs allow you to use ceilings and/or floors that accommodate fans. Many rack designs, even the ones that are enclosed, offer the option of perforated doors or adjustable vents.

It's not just the servers in the racks that require cooling; the whole environment must be cooled. In many rooms containing rack servers, the top of the rack is 10°F to 15°F hotter than the floor.

Because high temperatures shorten equipment lifetimes and diminish system performance, there's a lot of incentive to lower server room temperatures. Indeed, many computer systems are designed to partially shut themselves down, lowering the number of clock cycles they execute once a certain temperature is reached. Today, cooling solutions within a rack are fan based, but some companies are considering switching dense rack solutions to water cooling, something the mainframes of old used.

Many companies try to address the heat problem by using raised floors and running cooling systems under the floor. Server racks can add so much extra heat to a server room that large airflows through the ceiling may be required to achieve enough cooling to avoid system failure (and overheated IT staff).

As a rule of thumb, you should figure that each kW of heat generated should be dissipated by about 2 to 3 cubic feet of air per second. A completely populated 42U server rack might therefore require as much as 15 to 25 cubic feet per second. That volume of airflow requires that modern server rooms with rack-mounted servers pay particular attention to good airflow design, reducing obstructions such as ceiling pipes and clutter under a raised floor.
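To make these rules of thumb concrete, here is a minimal Python sketch; the rack configuration is an assumed example for illustration, not a vendor specification or an engineering calculation:

    # Heat load: the text likens each processor to a 100-watt light bulb.
    # A fully populated 42U rack of 1U dual-processor servers therefore
    # dissipates roughly 42 * 2 * 100 W = 8.4 kW from the processors alone
    # (drives, power supplies, and memory add more).
    heat_kw = 42 * 2 * 100 / 1000

    # Rule of thumb: 2 to 3 cubic feet of air per second (CFS) per kW of heat.
    low, high = heat_kw * 2, heat_kw * 3
    print(f"{heat_kw:.1f} kW -> {low:.0f}-{high:.0f} CFS "
          f"({low * 60:.0f}-{high * 60:.0f} CFM)")
    # -> 8.4 kW -> 17-25 CFS (1008-1512 CFM), in line with the figures above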

With so much air flowing, it is actually possible to create a Venturi effect, in which large airflow under the floor prevents air from flowing upward and instead simply pulls air into the raised floor. Without an upward flow of air, the air at the top of the room doesn't mix with cooler air and stays overheated. If this is a problem in your server room implementation, you probably need to consult a specialist to help you design an appropriate solution.

Conversion Kits

Not all servers are built to fit into server racks. In fact, most servers aren't. If you have server rack envy, despair not: Several companies sell rack conversion kits that add the necessary rails and brackets to your server so that it can be mounted in a universal rack. The price for a kit of this type is usually between $100 and $200.

One company selling these types of solutions is RackSolutions (www.racksolutions.com). One example of a RackSolutions kit is the one shown in Figure 16.8, which can be pulled out of the rack enclosure via a set of rails. This particular conversion kit is meant to fit into a relay frame and can support the HP ProLiant DL760 G2.

Figure 16.8. RackSolutions offers a number of rack conversion solutions that let you make a tower or desktop server rack mountable.


Some conversion kits, like those covered in the next section, are nothing more than simple rail additions.

Rails

Mounting rails are another standard component of server racks. Universal mounting rails come with square holes, and EIA standard mounting rails come with round holes that are tapped at 0.3125 inch and spaced apart in the following sequence:

  • 0.5 inch between holes 1 and 2

  • 0.625 inch between holes 2 and 3

  • 0.625 inch between holes 3 and 4

  • 0.5 inch between holes 4 and 5

  • 0.625 inch between holes 5 and 6

  • 0.625 inch between holes 6 and 7

  • 0.5 inch between holes 7 and 8 (and so forth)

If you have rack-mountable equipment, you can use the measurements listed here to figure out whether your equipment will fit into a standard rack and which holes you must use.

The standard server rack unit 1U is measured as the distance from halfway between holes 1 and 2 to halfway between holes 4 and 5. The standard server rack unit 2U is measured as the distance from halfway between holes 1 and 2 to halfway between holes 7 and 8. Figure 16.9 illustrates this measurement.

Figure 16.9. Hole spacing on a universal rail is related to standard rack sizes.


Most of the networking and server equipment you can buy conforms to the EIA standard mounting rail's hole spacing, but you can often fit equipment into a universal mounting rail by using a set of cage nuts and screws with washers.
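As a quick check of these measurements, the following Python sketch (an illustration of the arithmetic only) generates hole positions from the spacing listed above and derives the 1U and 2U opening heights:

    # Hole center positions (in inches) from the spacing sequence above:
    # 0.5, 0.625, 0.625 inch, repeating every three holes.
    SPACING = [0.5, 0.625, 0.625]

    def hole_positions(count):
        positions = [0.0]                       # hole 1 at the origin
        for i in range(count - 1):
            positions.append(positions[-1] + SPACING[i % 3])
        return positions

    def opening_height(units):
        """Distance from halfway between holes 1 and 2 to halfway between
        holes 3n+1 and 3n+2, as described in the text."""
        h = hole_positions(3 * units + 2)
        return (h[3 * units] + h[3 * units + 1]) / 2 - (h[0] + h[1]) / 2

    print(opening_height(1))   # 1.75 inches for 1U
    print(opening_height(2))   # 3.5 inches for 2U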

Cable Management

Dense server deployment has one special problem associated with it that anyone with a PC and lots of accessories can appreciate: There are so many wires that need to be connected to the servers that it can be almost impossible to connect them all, let alone keep track of which wire goes to which server, which wire connects to which server port, which wire connects to which switch, and so forth.

The solution to this problem is called cable management, and it's every bit as important as any other design consideration when it comes to this form of deployment. Pity the poor IT admin who has to find a broken connection without some sort of system in place.

Cable management solutions for data centers often use different colors of wires. Each color is bundled together. In a good cable management system, each wire is also labeled with an identification number that aids in the quick location of any particular connection. Of course, all assignments need to be documented so that the people maintaining the system later on can use it effectively.
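As a minimal sketch of that kind of documentation (the label scheme, colors, and field names here are hypothetical, not a standard), each labeled wire can be recorded so that any connection can be traced quickly:

    # Hypothetical cable inventory: one record per labeled, color-coded wire.
    cables = [
        {"label": "BLU-014", "color": "blue",   "from": "server-07 eth0", "to": "switch-A port 14"},
        {"label": "BLU-015", "color": "blue",   "from": "server-07 eth1", "to": "switch-B port 14"},
        {"label": "YEL-003", "color": "yellow", "from": "server-07 kvm",  "to": "kvm-1 port 7"},
    ]

    def find(label):
        """Look up a wire by the identification number printed on its label."""
        return next((c for c in cables if c["label"] == label), None)

    print(find("YEL-003"))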

Blade Servers

Several years ago it became possible to reduce a standard PC to a single board that can fit into a specially designed chassis that accommodates 6, 12, or more cards. From this simple concept was born a form factor that has come to be called blade servers. A blade server contains the processor, Northbridge and Southbridge chipsets, an Ethernet interface, and memory, but not a whole lot more.

Companies such as Cubix (www.cubix.com) began almost 10 years ago to offer chassis that at first were large boxes containing shared storage and I/O. Eventually, blade server designs shrank to standard 3U (5.25 inches) chassis, then 2U (3.5 inches), and finally even 1U (1.75 inches) chassis. Today Cubix offers a product called the BladeStation, shown in Figure 16.10, a 6U chassis, or backplane, into which you can stack up to seven dual Xeon processor blades, four SCSI drives, and an appropriately sized power supply. Starting with a single blade, you can buy a BladeStation system for an entry price below $3,000.

Figure 16.10. The 6U Cubix BladeStation shown here in front view can take up to six dual Xeon blades.


Today you can visit many data centers that maintain large server farms, such as the ones ISPs and search engine sites maintain, and walk down aisle after aisle containing rack after rack of blade servers, with hundreds or thousands of servers. Blades are the perfect answer for web server applications where scale-out is necessary to get sufficient I/O to support a large number of client connections and where there is a system of load balancing in place to maximize the efficiency of the hardware, using an IP director such as F5 Networks' BIG-IP.

What blade servers do, of course, is provide for extraordinary densification of the data center. Blades come in two form factors:

  • Stackable pizza boxes

  • Chassis with slide-in blades

Figure 16.11 shows a Dell PowerEdge 1850, which is a pizza box 1U server. This type of server is attached directly into the rack. Figure 16.12 shows a Dell PowerEdge 1855, which uses a 6U chassis arrangement to stack 10 blades within the chassis.

Figure 16.11. The Dell PowerEdge 1850 is a 1U pizza box server that stacks horizontally in a universal rack.


Figure 16.12. The Dell PowerEdge 1855 is a 6U cabinet that holds 10 blade servers.


The blade server architecture does even more than that, though. Blade servers provide a means to better utilize shared components, such as power supplies, UPS devices, optical drives, and storage arrays. They even make it easier to reduce the overall cable count and are an ideal form factor for creating fault-tolerant server systems. When a blade fails, the system shifts its computing load over to the working blades, allowing you to pop out the failed blade and pop in a replacement. Such an incident may incur a performance hit for network resources accessing the server, but that is far better than having the entire system crash.
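As a rough sketch of that idea (this is a generic illustration, not how any particular chassis or IP director implements it), a simple dispatcher that routes requests only to healthy blades might look like this:

    import itertools

    # Hypothetical blade pool; in practice, health would come from a probe
    # or from the chassis management module, not a hard-coded dictionary.
    health = {"blade-1": True, "blade-2": True, "blade-3": False, "blade-4": True}
    rotation = itertools.cycle(sorted(health))

    def next_blade():
        """Round-robin over the blades, skipping any that are marked failed."""
        for _ in range(len(health)):
            blade = next(rotation)
            if health[blade]:
                return blade
        raise RuntimeError("no healthy blades available")

    # Requests keep flowing to the working blades; the failed blade can be
    # popped out and replaced, then marked healthy again.
    print([next_blade() for _ in range(5)])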

Figure 16.13 shows a Hewlett-Packard ProLiant BL35p server blade. The blade contains a hard drive in the front, dual processors, and a network interface at the back.

Figure 16.13. The Hewlett-Packard ProLiant BL35p dual processor blade is a complete server.


Almost all the major server hardware companies now offer blade server designs. Some examples are IBM's heavily advertised eServer BladeCenter designs, some entries in the Dell PowerEdge series, NEC's Express series, and some entries in the Hewlett-Packard ProLiant series. There are no standard designs for blade server cards, so when you buy an eServer, for example, you are committed to buying cards from IBM. You usually find that blade server chassis offer some standard PCI-X or PCI slot(s) so that you can add a firewall or other standard component.

KVM Switches

A KVM switch consolidates the keyboard (K), video (V), and mouse (M) connections for a number of servers into a single switch. A KVM switch makes it possible to use one keyboard, monitor, and mouse to control one server at a time, via a keystroke or an onscreen command. A KVM switch is a very important component in a server rack because there is no other way to accommodate a number of monitors, keyboards, and mouse devices in the physical space available. Not only can you free up space, but you can eliminate heat, lower the complexity of cabling, and lock down your servers by controlling them from a secure room, using a single console.

Many network operating systems now allow what is called headless operation, which is just a fancy way of saying that you can remotely control the system without a monitor. However, there are issues that make remote sessions problematic. When the server crashes, or when you need to change a setting in the server's BIOS, you cannot fix these problems via a remote session.

The server shown in Figure 16.3 has a KVM switch built into it, with the KVM part installed as part of the rack-mounted keyboard, using a feature called an OSD (onscreen display). The OSD is part of the KVM switch, so even when the server itself isn't available, you can still access the OSD, and the OSD can access a server's BIOS as it starts up. A KVM switch, on the other hand, offers you access to each and every server's physical port and therefore direct access to the BIOS. That makes using a KVM switch the preferred means of controlling multiple servers when you can be in reasonable proximity to them.

With today's KVM technologies, the boxes look more like dense network switches, and with cable run lengths of as long as 1,000 feet, you can use a KVM switch in another room or another floor of a building. You can find switches with as many as 32 connections, and you can extend that number even higher because many of these switches have the ability to stack and connect multiple switches to one another. Some KVM switches come with very sophisticated management features, mimicking features you find on networks.

An example of KVM management software is Avocent's DSView Management software (see www.avocent.com/web/en.nsf/Content/DSView3Software). This software can manage KVM-connected devices over an IP network or internetwork, and it gives you BIOS-level access to your servers. That means that if a server fails, you can reboot it, enter the BIOS, and modify its settings. With this type of software, you can autodiscover servers, authenticate users and systems, perform scripts and macros, and handle other tasks.

KVM switches can be as small as a deck of cards, as is the case with most two-port and four-port models. As a KVM switch controls more servers, the physical connections from each server to the switch's back panel require that the switch grow in size. It's possible to purchase large rack-mounted KVM switches with 50 or more connections for dense server deployments. Those larger systems are housed in 2U or 4U rack-mounted panels. Some KVM networks can provide a fan-out with theoretical connections of up to 64,000 servers.
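The arithmetic behind that kind of fan-out is straightforward; the port counts in this short sketch are generic assumptions rather than any vendor's specification:

    # Each port on a top-level KVM switch can feed another switch of the
    # same size, so capacity grows geometrically with the number of tiers.
    def max_servers(ports_per_switch, tiers):
        return ports_per_switch ** tiers

    print(max_servers(32, 1))   # 32 servers from a single switch
    print(max_servers(32, 2))   # 1,024 servers with one layer of cascading
    print(max_servers(32, 3))   # 32,768 servers, the order of magnitude of
                                # the large theoretical fan-outs cited above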

In purchasing a KVM switch, you should look for the following features:

  • Heterogeneous OS compatibility You need to make sure that any KVM switch you purchase is compatible with the operating systems in use on your servers. Even if a switch advertises that it is compatible with a range of system types, there may be problems in practice that require optional converters, and these converters add cost and eat precious server rack space.

  • Keepalive features A keepalive feature allows a switch to continue to send signals to servers, using the server's power, if the primary power to the KVM switch fails.

  • Firmware upgradability Some systems can have their BIOS upgraded, often remotely and automatically.

  • Video bandwidth The video bandwidth of KVM switches varies, and it can depend on cable run lengths.

  • Software and hardware switching A KVM switch should allow switching in both hardware and software.

  • Scanning features Scanning displays one server at a time so that you can check all your servers at regular intervals. Some scanners display multiple servers' output onscreen at one time, with regular refreshes, and allow you to switch to any server of interest.

  • Scalability and flexibility KVM switches should allow you to fan out to add more connections as well as reduce connections, as needed. It's best if this can be done as a hot-swap operation.

  • Security features Some KVM switches provide encryption, password protections, and physical key access.

As more and more rack-mounted servers require remote administration, there has been a need to create remotely accessible KVM switches. Thus a number of vendors now offer KVM over IP solutions. What makes a KVM over IP solution so useful is that if a server crashes or if you need to make configuration changes in the BIOS, a KVM over IP switch allows you to reboot the system and access the system's BIOS. You can access the BIOS by using KVM over IP, just as you can when you are standing in front of the input devices attached to the KVM switch.

Raritan (www.raritan.com) is one well-known enterprise KVM switch manufacturer. Its Paragon II analog KVM series can be purchased and stacked, and it allows up to 64 simultaneous users to access up to 10,000 servers. Raritan's enterprise class one- to four-digital-channel KVM over IP appliance, the Dominion KX, offers KVM over IP remote access technology. It allows you to remotely administer 32 servers and other devices from a browser anywhere and still view the BIOS of the connected systems. Among the features of this unit are encryption, remote power control, dual-homed Ethernet connections, LDAP, RADIUS, Active Directory, syslog integration, and Web management of the switch.

One important feature of KVM over IP systems is that you can remotely access them even when your network is down. You want secondary access through a modem so that you can perform out-of-band management. With that in place, you can reboot your servers even when the network connection is down and can't be brought back up because it depends on a computer, such as a firewall, that has itself failed.

Among the companies offering KVM switches are AMI, APC, Avocent, Belkin, Black Box, Minicom, Network Technologies, Inc., Raritan, Rose, StarTech, and Tripp Lite. Of these companies, Avocent and Raritan are considered to be market leaders in the enterprise KVM switch space.

Cable Management Using KVM Switches

Many KVM switches are mounted in a server rack just as any other component is. Larger switches may take up to 2U of space, but some are thin 1U models. After you have installed the KVM switch, the next step is to connect the switch to the servers in the rack.

In a 72U server rack that contains 24 2U servers, you need to connect the following cables for your servers:

  • 24 to 48 network cables (one or two per server; two for dual-homed systems)

  • 24 to 48 power connections (for dual power supplies)

  • 24 to 48 server-to-storage connections (1 for SCSI, 2 for Fibre Channel)

  • 24 keyboard connections

  • 24 mouse connections

  • 24 video connections

  • 24 USB-to-UPS connections

This means that for just the servers, you are connecting approximately 200 cables. Add connections for KVM switches, UPS devices, storage arrays, tape drives, and any other components you have in the rack, and the number probably approaches 250 for a server rack of this type.
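A back-of-the-envelope tally of the list above can be scripted easily; the per-server counts below come straight from that list, and the totals are only an estimate:

    servers = 24   # 2U servers in the rack, per the example above

    # (minimum, maximum) cables per server, following the list above
    per_server = {
        "network":    (1, 2),   # one or two NICs (dual-homed)
        "power":      (1, 2),   # one or two power supplies
        "storage":    (1, 2),   # 1 for SCSI, 2 for Fibre Channel
        "keyboard":   (1, 1),
        "mouse":      (1, 1),
        "video":      (1, 1),
        "USB-to-UPS": (1, 1),
    }

    low = servers * sum(lo for lo, _ in per_server.values())
    high = servers * sum(hi for _, hi in per_server.values())
    print(f"server cables alone: {low} to {high}")   # 168 to 240, roughly 200
    # Add KVM switches, UPS devices, storage arrays, and tape drives, and the
    # total for a rack like this approaches 250 connections.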

It is absolutely critical that you organize these connections and that you use consolidated components such as KVM switches to bundle together different connections to individual servers as much as possible. KVM switch cables are usually bundled together into a single wire for each system and are color coded for the RJ-45 connections for mouse and keyboard. Most KVM switch vendors set up their switches so that a proprietary connector is used to connect the switch to the switch cable.

KVM switches can be connected to a keyboard, a mouse, and a monitor in a system tray at a comfortable level in the server rack. Many data centers prefer to connect KVM switches to remote consoles, and that requires special KVM output wires and extenders for analog KVM switches. Although mouse and keyboard signals can run several hundred feet, video signals tend to degrade over cable; hence the need for extenders. For KVM over IP switches, the long runs are over CAT5 UTP cable, the signal is digital, and there is no degradation because the video is rebuilt at the controller system at the other end of the wire.



