Data Center Evolution

   

In the past, there were mainframes. A company or a data center usually had only one. The mainframe had a set of criteria: how much power it needed, how much heat it gave off per hour, how large it was, and how much it weighed. These criteria were non-negotiable. If you satisfied them, the machine would run; if you didn't, it wouldn't. You had one machine, and you had to build a physical environment it could live in.

Fast forward to the 21st century. Computers have become much faster and much smaller. The data center that used to house just one machine now holds tens, hundreds, perhaps thousands of machines. But one thing hasn't changed: each of these machines still has the same set of criteria for power, cooling, physical space, and weight. There is also an additional criterion: network connectivity. These criteria still need to be satisfied, and they are still non-negotiable.

So, now you have different types of servers, storage arrays, and network equipment, typically contained in racks.

How can you determine the criteria for all the different devices from the different vendors? Also, whether you are building a new data center or retrofitting an existing one, there are likely to be limits on one or more of the criteria. For example, you might only be able to get one hundred fifty 30-amp circuits of power, or you might only be able to cool 400,000 BTUs per hour. This is a frequent and annoying problem. Creating RLU definitions gives you numbers you can add up to decide how many racks you can support within these limitations.
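A simple Python sketch of this kind of bookkeeping follows. The facility limits come from the example above; the per-rack requirements are hypothetical illustration values standing in for a real RLU definition, so substitute the totals from your own RLU work.

# Capacity-planning sketch. The per-rack figures below are hypothetical
# illustration values, not taken from this book; replace them with the
# totals from your own RLU definitions.

# Facility limits from the example above.
available = {
    "power_circuits_30A": 150,    # 30-amp circuits available
    "cooling_btu_per_hr": 400_000,
}

# Assumed requirements for one rack (one RLU definition).
per_rack = {
    "power_circuits_30A": 2,       # hypothetical: two 30-amp circuits per rack
    "cooling_btu_per_hr": 10_000,  # hypothetical: 10,000 BTU/hr per rack
}

# The most constrained resource determines how many racks the room supports.
racks_supported = min(available[k] // per_rack[k] for k in available)
print(f"Racks supported by the facility limits: {racks_supported}")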

Until recently, data centers were populated with equipment based on a certain wattage per square foot, which yielded the amount of power available to the equipment. This figure could also be used to roughly determine the HVAC tonnage needed to cool the equipment. Unfortunately, using square footage for these decisions assumes that power and cooling loads are equal across the entire room, and it does not take the other requirements of the racks, or the number of racks, into consideration. This approach worked when a single machine such as a mainframe was involved. The modern data center generally uses multiple machines, often different types of devices with different specifications, and there are different densities of equipment within different areas of the data center.
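As a rough illustration, the traditional watts-per-square-foot estimate works out as in the Python sketch below. The 100 watts per square foot design figure is a hypothetical value; the conversion factors (3.412 BTU per hour per watt, 12,000 BTU per hour per ton of cooling) are standard.

# Traditional watts-per-square-foot estimate. The design density is a
# hypothetical value; the conversion factors are standard.

watts_per_sq_ft = 100          # hypothetical design density
room_sq_ft = 24_000            # room size from Figure 4-1

total_watts = watts_per_sq_ft * room_sq_ft
total_btu_per_hr = total_watts * 3.412   # electrical load converted to heat
hvac_tons = total_btu_per_hr / 12_000    # tons of cooling required

print(f"Heat load: {total_btu_per_hr:,.0f} BTU/hr -> {hvac_tons:.1f} tons of cooling")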

For example, consider Figure 4-1, which shows a modern data center room layout:

Figure 4-1. Using Square Footage to Determine Cooling Needs


The areas bounded by dotted lines are areas with different RLU definitions. This separation is necessary because each of the three sections has its own power, cooling, bandwidth, space, and weight requirements (or limitations).

The previous figure shows only the HVAC requirements as an example. If you total up the cooling needs of all three sections, you get 2,844,000 BTUs per hour. Divide this number by the square footage of the room (24,000 square feet) and you get 118.5 BTUs per hour of cooling per square foot. This would be far too much for the PCs, which need only 46 BTUs per hour of cooling per square foot, but far too little for both the Sun Fire 6800 and Sun Fire 15K servers, which need 220 and 162 BTUs per hour of cooling per square foot, respectively. Therefore, it's clear that determining the needed HVAC capacity from the total square footage of the room won't work in most data centers.
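The arithmetic can be laid out as a short Python sketch. All of the numbers come from the figures quoted above; the individual section floor areas are not given in the text, so only the per-square-foot cooling densities are compared.

# Room-wide average cooling density versus what each section in Figure 4-1
# actually needs, using the numbers quoted above.

total_cooling_btu_hr = 2_844_000   # sum of the three sections' cooling needs
room_sq_ft = 24_000

average_density = total_cooling_btu_hr / room_sq_ft   # 118.5 BTU/hr per sq ft

required_density = {
    "PC area": 46,
    "Sun Fire 6800 area": 220,
    "Sun Fire 15K area": 162,
}

print(f"Room-wide average: {average_density:.1f} BTU/hr per sq ft")
for area, need in required_density.items():
    verdict = "over-cooled" if average_density > need else "under-cooled"
    print(f"{area}: needs {need}, would be {verdict} at the room-wide average")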

This leads us into the new frontier: RLUs.

   

