Avoiding the Pitfalls


Simply buying the biggest, meanest server you can find, with the most memory, isn't always the best plan. Scaling without planning often creates more problems than it solves. By avoiding the pitfalls associated with scaling technologies to handle larger loads, you can build an environment that is not only high performance but also low maintenance.

Buying the Wrong Hardware

All too often administrators purchase a server upgrade for a specific application to run on, only to find it slower than the old system. As counterintuitive as this might seem, it usually stems from a misunderstanding of how certain applications behave. Knowing the idiosyncrasies of an application is critical to purchasing hardware for it. An application that is Floating Point Unit (FPU) intensive will respond favorably to a system with a large L2 cache; moving an FPU-intensive application to a newer server with a higher clock speed but a smaller L2 cache can actually make the application slower. Applications that are write-intensive, such as databases, often run faster on independent disks than they do on a RAID 5 subsystem because of the parity calculation involved in disk writes. Whenever possible, discuss server selection with the vendor of the application the server is being purchased for and get concrete performance numbers for various hardware configurations. Hardware vendors can often supply performance numbers for popular applications on their hardware as well. Clever administrators arm themselves with as much information as possible before making hardware purchases.
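The RAID 5 write penalty mentioned above can be quantified: each small random write costs four physical I/Os (read old data, read old parity, write new data, write new parity), versus one on an independent disk. A minimal sketch of the arithmetic, using illustrative rather than vendor-measured IOPS figures:

```python
def effective_write_iops(iops_per_disk: int, disks: int, raid5: bool) -> float:
    """Estimate small-random-write IOPS for a disk group.

    RAID 5 turns every small write into four physical I/Os:
    read old data, read old parity, write new data, write new parity.
    """
    total = iops_per_disk * disks
    return total / 4 if raid5 else float(total)

# Five 150-IOPS spindles: RAID 5 delivers ~187 write IOPS,
# while the same disks used independently deliver 750.
print(effective_write_iops(150, 5, raid5=True))   # 187.5
print(effective_write_iops(150, 5, raid5=False))  # 750.0
```

Real arrays with write-back cache soften this penalty, which is exactly why concrete numbers from the vendor matter more than rules of thumb.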

Is the Application Multiprocessor-Capable?

Many servers are purchased with multiple processors on the belief that if one processor is good, two must be better. Additional processors often do add a significant amount of performance to a system; unfortunately, not all applications are able to take advantage of them. Before purchasing a multiprocessor server destined to run a specific application, research the application and determine whether it will take advantage of the additional processors. If it won't, consider taking the money saved on the secondary processor and putting it toward upgrading the primary processor.

If a multiprocessor system is inherited and the application it will run cannot take advantage of the second processor, there are still ways to improve performance over a single-processor server. Through the use of processor affinity, you can assign a particular process to run on a specific processor; any threads spawned by the process automatically inherit the affinity. This means that a particular application can be assigned to the second processor while the first processor handles all the other Windows-related tasks. In this way the application does not compete with the operating system and runs faster than it could have on the first processor alone.
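On Windows, affinity is set through Task Manager or the SetProcessAffinityMask API, which takes a bitmask with one bit per logical processor. A small sketch of building such a mask (the helper name is my own; for comparison, Python on Linux exposes the equivalent operation as os.sched_setaffinity):

```python
def affinity_mask(cpus) -> int:
    """Build the bitmask that SetProcessAffinityMask expects:
    bit n set means the process may run on logical CPU n."""
    mask = 0
    for cpu in cpus:
        mask |= 1 << cpu
    return mask

# Pin an application to the second processor (CPU 1) while the
# operating system keeps the first (CPU 0): mask 0b10 == 2.
print(affinity_mask({1}))     # 2
print(affinity_mask({0, 1}))  # 3
```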

Windows System Resource Manager

Windows System Resource Manager (WSRM) enables you not only to limit the amount of resources an application uses but also to tie its usage to a specific processor. By limiting less important applications and dedicating a processor to the application that matters, you can make applications run faster and scale further in their ability to support end users.
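WSRM enforces its limits inside the operating system, but the idea behind a soft CPU cap can be sketched in user space as a duty-cycle throttle: do a burst of work, then sleep long enough that CPU time stays near the target fraction of wall-clock time. This illustrates the concept only; it is not how WSRM is implemented:

```python
import time

def run_throttled(work_step, steps: int, cpu_fraction: float) -> None:
    """Run work_step repeatedly, sleeping after each burst so that
    CPU time stays near cpu_fraction of elapsed wall-clock time.

    cpu_fraction must be in (0, 1]; 1.0 means no throttling.
    """
    for _ in range(steps):
        start = time.process_time()
        work_step()
        used = time.process_time() - start
        # Sleep so that used / (used + sleep) is about cpu_fraction.
        time.sleep(used * (1.0 - cpu_fraction) / cpu_fraction)

# Throttle a busy loop to roughly half a CPU.
run_throttled(lambda: sum(range(200_000)), steps=3, cpu_fraction=0.5)
```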


Protecting Against System Outages

One of the pitfalls of buying big servers is the tendency to put too many eggs in one basket. Although consolidating servers into a single powerful server has been shown to reduce costs in the IT environment, it also opens the door to single points of failure. A clever administrator understands that part of scaling an environment is ensuring that individual servers don't get out of hand.

Administrators often fall victim to the affordability of disk space. When a server runs low on space, it is very easy to add more disks to the chassis or attach an external chassis. This creates two dangerous situations. First, a single server ends up holding a tremendous amount of important data, which makes the server very difficult to maintain. On a database server, the sheer volume of data might cause database maintenance to take longer than the available maintenance window. And if the server fails, many users lose access to the data. It is critical to determine the point at which it makes sense to scale out by adding another server. Technologies such as DFS, which hide the physical server structure, enable an administrator to add file servers to an environment without altering user configurations or mappings.

The second danger of allowing servers to bloat before splitting off more servers is backup and restore. If a file server holds so much data that restoring it would take more than eight hours, it is probably time to split data off onto a separate server.

Although server consolidation is a good thing, don't fall into the trap of consolidating blindly. Look at the capacity of your backup and restore system and determine the most data you can restore in a reasonable period of time. If your data is going to overshoot that number (you've been monitoring disk space, right?), it's time to add a server.
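That check reduces to a one-line calculation. Assuming a restore throughput you have actually measured on your own backup system (the figures below are purely illustrative):

```python
def fits_restore_window(data_gb: float, restore_gb_per_hour: float,
                        window_hours: float) -> bool:
    """True if a full restore of the server completes within the window."""
    return data_gb / restore_gb_per_hour <= window_hours

# A file server holding 600GB restoring at 60GB/hour blows an
# eight-hour window -- time to split data onto a second server.
print(fits_restore_window(600, 60, 8))  # False
print(fits_restore_window(400, 60, 8))  # True
```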

Ensuring that Your Facilities Can Support Your Systems

By and large, administrators are experts in the area of technology. They understand servers, they understand I/O, and they understand applications. They spend all their time thinking about the next great server and how to tweak it for maximum performance. Ask an administrator how many amps his server draws on startup or how many BTUs of heat it produces, however, and his eyes will go dull.

Far too often administrators purchase server hardware without regard for how it will affect facilities. Knowing how much power the servers draw and how the electrical circuits in the data center are provisioned is critical to avoiding unnecessary system outages. Knowing the HVAC capacity of the data center is equally critical to making informed hardware decisions. It's depressing to have a 4TB SAN arrive only to find out the data center can't support its electrical or cooling needs, and that can be an expensive oversight.

It's also important to avoid falling into the trap of always adding servers. It used to be a very common practice to scale Web sites by simply adding more and more Web servers, and it was not uncommon to hear of sites with more than a thousand front-end Web servers. Even with 1U servers, that is 42 servers per rack, or 24 racks of servers. A typical rack with servers installed takes up six square feet of space, so that's 144 square feet just for Web servers. Companies with data centers will understand the cost associated with that amount of floor space. Factor in a 1,000-amp current draw and the amount of heat generated, and you quickly realize that blindly adding servers isn't the best method.
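The arithmetic above generalizes into a quick footprint estimate. The rack density and square footage come from the figures in the text; the per-server wattage is an illustrative assumption, and 3.412 BTU/hr per watt is the standard conversion:

```python
import math

def rack_footprint(servers: int, per_rack: int = 42,
                   sqft_per_rack: float = 6.0) -> tuple:
    """Return (racks needed, square feet of floor space)."""
    racks = math.ceil(servers / per_rack)
    return racks, racks * sqft_per_rack

def heat_btu_per_hour(total_watts: float) -> float:
    """Every watt a server draws ends up as about 3.412 BTU/hr of heat."""
    return total_watts * 3.412

print(rack_footprint(1000))           # (24, 144.0)
# 1,000 servers at an assumed 250W each:
print(heat_btu_per_hour(1000 * 250))  # roughly 853,000 BTU/hr
```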

Look Beyond the Plug

When considering 220V hardware, it is important not only to determine whether the data center can support 220V but also to look beyond the plug. Determine whether your UPS can support 220V. If there is a recovery site, you must ensure that it can support the 220V devices as well.




Microsoft Windows Server 2003 Insider Solutions
ISBN: 0672326094
Year: 2003
Pages: 325
