Building Your Own Server


Most readers of this book would think nothing of building their own PCs. The hardware is nearly fully standardized across the industry, component selection is straightforward, and the assembly process is extremely well documented in numerous books and articles. The pitfalls are few.

However, when it comes to building a server, many of these same people hesitate. Building a server involves more choices, and the equipment is more expensive than that of a PC, but the results can be equally satisfying. Because you've now read a number of chapters that describe how to select a motherboard, the advantages of one bus over another, and how casing influences your system configuration, we don't repeat that information here. Instead, the sections that follow speak to what kinds of servers are straightforward to build and why you might want to build your own.

If you are considering building a server, you shouldn't do so because you intend to save a lot of money. You usually won't. You often can't build a PC for less than you'd spend buying one from a major first-tier OEM, such as Dell or Hewlett-Packard. Components are a commodity, and OEMs buy them by the container load. The best you can hope for is for your costs to approach their pricing. Servers, on the other hand, tend to carry higher markups, so it is possible to save some money by building your own server. Even so, saving money is not the best reason to build one.

The best reason to build your own server is that you can pick and choose your own components in a way that buying someone else's system doesn't allow. When you build your own server, you have more control over the setup of the system, and you are more likely to know what to do or replace when things go wrong. A home-built server is generally rather flexible to configure because you use industry-standard parts and not some proprietary type of casing, memory, or motherboard that will block your upgrade path in the future. Building your own server often results in equipment that has a longer duty cycle because it is more upgradable.

Building your own server makes sense for another reason: Although system builders make their servers competitively priced at purchase time, everything else they sell you, from service to upgrade parts and even their shipping, is priced at whatever the market will bear.

Server Purposes and Form Factors

If you have decided to build a server, the first place to start is determining the intended purpose of the server. That purpose should guide many of your selections. A good rule of thumb when building any system is to build a balanced system but to oversize whichever subsystem supports the server's primary function. A balanced system is one in which each subsystem of the server is powerful enough to avoid becoming a bottleneck. If the purpose of the server is to serve files, then the I/O subsystem needs to be emphasized.

Let's talk a little more about balanced systems. It's impossible, or at least too costly, to size subsystems so that they never present a bottleneck. When a system gets a job that requires a lot of processing, chances are that the CPU is going to be 100% utilized until the job is near completion. The goal is to allow acceptable performance in those circumstances as well as to maintain a good average CPU utilization rate. That's one advantage that servers offer: if you find that your CPU utilization stays high and you've selected the right motherboard, you can upgrade your processor or add another one. What typically separates a server from a workstation or PC is that servers are built with more upgrade options.
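
To make this concrete, the following short script estimates overall CPU utilization on a Linux server by sampling the kernel's /proc/stat counters twice over a short interval; sustained readings near 100% are the cue to consider that second processor. This is only a minimal sketch, and the five-second sampling interval is an arbitrary choice for illustration.

    # Estimate overall CPU utilization on Linux by sampling /proc/stat twice.
    # A sustained reading near 100% suggests the CPU is the bottleneck.

    import time

    def cpu_times():
        """Return (idle, total) jiffies from the aggregate 'cpu' line."""
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + (fields[4] if len(fields) > 4 else 0)  # idle + iowait
        return idle, sum(fields)

    idle1, total1 = cpu_times()
    time.sleep(5)  # sampling interval; adjust as needed
    idle2, total2 = cpu_times()

    busy_fraction = 1.0 - (idle2 - idle1) / (total2 - total1)
    print(f"CPU utilization over the interval: {busy_fraction * 100:.1f}%")

Run a few samples like this while the server is under its normal workload; a single spike means little, but consistently high numbers are what justify the second socket.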

Realistically there are only a few different types of servers that are practical for most people to build themselves:

  • A basic server, which is really a souped-up PC

  • A workgroup server, featuring dual- or four-way CPU boards

  • An SMB (small or medium-sized business) server, which is typically a dual-processor system built with a lower-performing I/O subsystem

  • A thin rack-mountable system

It's possible to build a thin form factor system using readily available components for rack-mounted units. Two types of servers are difficult to build, given the current retail model:

  • SMP systems with more than four processors on the motherboard

  • Blade systems

Neither of these two types of system is standardized enough, or in great enough demand among average system builders, for companies to stock the parts needed to build them. If you intend to build either a large SMP system or a blade system, you will find yourself spending time talking to the original parts manufacturers themselves. Chances are that for a single board or a limited number of boards, they aren't going to be much help to you.

Server Components

You've seen a wide variety of components described in this book. What are the features that a server requires from its components in order to provide a stable and dependable service to network clients? Intel markets server building to its system builders, using the marketing term "Real Server." A server should include the following:

  • Two- or four-way SMP support: It is a good idea to have at least a two-CPU system, even if you choose to populate just one of the sockets. Having additional CPU sockets allows your system to grow over time.

  • A high-performance server chipset: The motherboard chipset is of central importance to the overall performance of your system.

  • Large memory capacity and I/O bandwidth: The amount of memory determines the number of clients you can support, as does your I/O bus.

  • High-performance network interfaces: Without a strong network interface, you create an unnecessary bottleneck getting data in and out of your server.

  • Management features: Because a server is supposed to be up and running reliably, it's important to be able to view what's happening on the system and make changes both locally and remotely. (A remote-monitoring sketch appears after this list.)

  • A server operating system: You don't want to limit the number of connections or any other properties of your system because you've used a desktop- or workstation-oriented operating system. Your equipment should be selected with the operating system in mind.
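
On the management-features point, most server motherboards include a baseboard management controller (BMC) that can report on the system even when the operating system is down. As one possible illustration, the sketch below calls the open-source ipmitool utility to pull sensor readings from a remote BMC; it assumes an IPMI-capable board with ipmitool installed, and the hostname and credentials shown are placeholders, not values from any particular system.

    # Query a server's baseboard management controller (BMC) for sensor readings.
    # Assumes an IPMI-capable board and the ipmitool utility; the host, user, and
    # password below are placeholders.

    import subprocess

    def read_remote_sensors(host: str, user: str, password: str) -> str:
        """Return ipmitool's sensor data record (SDR) listing from a remote BMC."""
        cmd = [
            "ipmitool",
            "-I", "lanplus",      # IPMI-over-LAN; older BMCs may need "lan"
            "-H", host,           # BMC network address (placeholder)
            "-U", user,
            "-P", password,
            "sdr",                # list sensor data records
        ]
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        return result.stdout

    if __name__ == "__main__":
        print(read_remote_sensors("bmc.example.net", "admin", "changeme"))

Because the BMC runs independently of the host, a query like this works even when the server has crashed or been powered off, which is exactly the kind of out-of-band visibility the management-features requirement is about.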

The preceding list represents a checklist for selecting your basic server components. These features are necessary to create a server but are probably not sufficient for most purposes. Servers are differentiated from workstations or desktops through the addition of these features:

  • Redundant components that allow for graceful failover: The components most commonly made redundant are power supplies, fans, and hard drives.

  • Hot-swappable components: The most important hot-swappable components are disks, but it's valuable to be able to swap out other components as well.

  • High-performance and high-capacity storage: The purpose of your server determines the amount of storage (and memory) that you need.

  • Intelligent RAID arrays that allow for advanced volume management: RAID arrays make so many important volume operations possible that they are really a requirement for most servers these days.

All these requirements speak to the fundamental differences between workstations/desktops and servers. Servers must be much more dependable, must be higher performing, and must be flexible and have room to grow. Servers require a significantly higher investment than standard PCs, so you want to maximize your investment in them by selecting components that support the features described in this section.
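
A quick back-of-the-envelope calculation shows why that redundancy is worth paying for. The annual failure probabilities below are assumptions chosen purely for illustration; the point is simply that two independent units in a redundant pair are far less likely to fail together than either unit is to fail alone.

    # Illustrative arithmetic: why redundant components improve dependability.
    # The annual failure probabilities are assumed figures, not measured values.

    psu_failure = 0.03   # chance a single power supply fails in a year (assumed)
    fan_failure = 0.05   # chance a single fan fails in a year (assumed)

    # With a redundant pair, service is lost only if both units fail,
    # treating the two failures as independent events.
    print(f"Single PSU outage risk:    {psu_failure:.2%}")
    print(f"Redundant PSU outage risk: {psu_failure ** 2:.4%}")
    print(f"Single fan outage risk:    {fan_failure:.2%}")
    print(f"Redundant fan outage risk: {fan_failure ** 2:.4%}")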

Assembling a Server

When you have chosen your components, the next step is to assemble the system. In most respects, there isn't much difference between assembling a server and assembling a PC. There are a few critical differences, however, so in the following sections we discuss the building of a hypothetical server system step by step.

Installing Core Components

Let's begin by walking through the casing assembly, assuming that you've bought a case with no components preinstalled in it. Begin as follows:

1. On a good work surface with adequate lighting, open your system case and remove any of the included screws, mounting brackets, disk cages, and other components that you will be populating with parts.

2. Take a good look at your case to familiarize yourself with its design and determine some fundamental issues, such as where the power supply goes and how many fans you need and of what type.

3. Install the power supply. Don't skimp on the power supply. It should be powerful enough to run all your components well below its stated operating limit, and it should have good thermal properties as well as being quiet. (A rough power-budget sketch appears at the end of this section.) You can pay a lot for a server power supply, but it is money well spent.

4. Install all your fans. The goal of your cooling system should be to create airflow through the case that removes heat. Most system builders position their fans so that intake is at the front of the case and exhaust is out the back. Some units, such as drive cages, may require their own fans.

5. Install your motherboard into the case, as well as any backplane that your system might have.

6. Install your CPU, attach your cooler or heat sink to the CPU, and plug the fan into your motherboard's CPU fan connection. Some people advise using an antistatic wrist strap to prevent static discharge from damaging memory or CPUs; in practice, simply grounding yourself on a metallic surface (your case, for example) is sufficient.

7. Attach your case's front panel connections (power, power LED, restart, HD LED, and speaker) to the connections on the motherboard. Inspect the connections closely to make sure that they are on the correct posts and that the polarities are correct.

8. Insert one stick of memory into your system, usually in the slot closest to the CPU. (A system using a dual-channel memory controller may require that sticks of RAM be installed in matched pairs.)

9. If your motherboard does not have a video chipset built in, install your video card into the appropriate slot: AGP, PCI-X, and so on.

10. Attach a keyboard, a video monitor, and a mouse to your system or attach the appropriate leads to your KVM switch. KVM switches are covered further in Chapter 16, "Server Racks and Blades."

11. Turn on your system and check that it proceeds into the BIOS and stops at the point where it cannot find a boot device.

It's a good idea to test the basic system at step 11 because it becomes increasingly difficult to diagnose problems later on, when there are more potential causes. At this point, if there is an issue, you can try switching the memory or changing slots, swapping out the CPU or video board, changing power supplies, and reexamining your connections. In your initial boot to BIOS, you can determine whether your power LED is functioning, whether your restart button operates, and whether you are getting beep sounds.
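
Returning to the power supply advice in step 3, one way to judge headroom is to add up the worst-case draw of every component and size the supply so that the total sits well below its rating. The wattage figures and the 70% loading target in this sketch are illustrative assumptions only; substitute the maximum figures from your own components' datasheets.

    # Rough power-budget estimate for sizing a server power supply.
    # All wattage figures are illustrative placeholders; use the maximum
    # draw listed in each component's datasheet instead.

    component_watts = {
        "CPUs (two sockets assumed)": 90 * 2,
        "Motherboard and chipset": 50,
        "Memory (four sticks)": 10 * 4,
        "Hard drives (four)": 15 * 4,
        "Optical drive": 25,
        "Add-in cards": 40,
        "Fans": 20,
    }

    total = sum(component_watts.values())
    headroom_target = 0.70  # aim to load the supply to roughly 70% of its rating

    recommended_rating = total / headroom_target
    print(f"Estimated peak draw: {total} W")
    print(f"Suggested supply rating: {recommended_rating:.0f} W or higher")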

Installing the Remaining Components

After you have installed the core components, you can add peripheral devices. It's a good idea to add two or three components at a time and then test the boot process to ensure that the additions aren't causing any problems. If you have the time and patience, you may prefer to test one component at a time. Here are some of the steps you need to follow; the order isn't as critical as in the prior list:

1. Add the remaining memory. Memory is one of the most problematic components, so it's a good idea to test your system again at full memory load to make sure there are no incompatibilities. If your BIOS is not displaying a full memory count, turn off the fast BIOS startup option and let the POST routine enumerate and check your entire memory.

2. (Optional) Install your floppy drive, making sure that the red wire of the floppy data cable is closest to the floppy power connection. (Not all floppy drives have this requirement.)

3. Install any optical drive desired and connect the ATA/IDE, SATA, or USB connection, the audio connection, and the power connection to it.

4. If you are installing a USB flash storage reader or any other USB storage device, install it but do not connect that device to your USB bus yet. Leaving it disconnected prevents your operating system from grabbing a set of drive letters for these devices before you are done assigning letters to your more common drives.

5. Install any PCI, PCI-X, or PCI Express boards into the correct slots and connect any supported devices (such as hard drives) to those boards.

6. Test your system to make sure all devices are recognized and enumerated by the BIOS.

At this point your server is fully populated and ready for configuration. If you are using hardware RAID, follow these steps:

1. During the boot process, go into the RAID BIOS and create the RAID container.

2. Add all drives that you intend to use in your RAID array and designate any additional drives that you want to keep as spares.

3. After you create the container, use the RAID BIOS to create the hardware RAID type: RAID 5, RAID 1+0, and so on.

The process of creating the container is fast, but striping a large array can take some time.
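
How much usable space the finished array provides depends on the RAID level you selected in step 3. The following sketch works through the standard capacity arithmetic for RAID 5 and RAID 1+0; the drive size and drive count are assumptions chosen only for illustration.

    # Usable-capacity arithmetic for two common hardware RAID levels.
    # Drive size and count are illustrative assumptions.

    drive_size_gb = 300   # capacity of each member drive
    drives = 6            # drives assigned to the array (spares excluded)

    # RAID 5: one drive's worth of capacity is consumed by distributed parity.
    raid5_usable = (drives - 1) * drive_size_gb

    # RAID 1+0: every drive is mirrored, so half the raw capacity is usable.
    raid10_usable = (drives // 2) * drive_size_gb

    print(f"RAID 5   usable capacity: {raid5_usable} GB")
    print(f"RAID 1+0 usable capacity: {raid10_usable} GB")

The trade-off is the usual one: RAID 5 gives you more usable space from the same drives, while RAID 1+0 generally offers better write performance and can tolerate more failure patterns.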

Installing an Operating System

After your RAID has been configured, it is time to install your operating system and complete the build. To install the operating system, follow these steps:

1. Insert the operating system installation disc into your optical drive and then start your system.

2. As your system proceeds past the BIOS and into the installation routine, make sure that you correctly specify your boot device and that you have the correct driver on hand so that your operating system will recognize the device.

In many instances, an operating system may come with the correct driver, but whenever possible, you should try to install the very latest driver from the vendor's website. If you are installing a RAID system, try to have that driver on hand for your operating system. For example, when Windows Server's installation routine starts up, it asks you to press F6 in order to load additional drivers. If you don't press that key in time, you may need to rerun the installation.

3. As part of the installation routine, specify the boot partition and the type of formatting you wish to have on it. With Windows Server, you probably want NTFS; with Linux or UNIX, you may need to specify not only the partition and its type but also the number and definitions of any slices on the drive. Formatting a large volume takes some time, so this is a good time to attend to other tasks.

4. Proceed through the installation, specifying the details of your network connections as well as the particular system components that you want to install.

Note

It is possible to bypass any installation step and install and configure system components and settings later on, after the operating system installation has finished.

5. When your operating system is installed, shut down the system and connect the USB cables for any USB drives you left disconnected earlier.

In building a server, the two things most people seem to have trouble with are creating the RAID container and having the correct device driver in hand. Many RAID boards are poorly documented, and if you haven't done a system build before, you might not realize that you need to create the container and stripe it before installing the operating system and formatting the volume. The container and its striping are hardware features, separate from the formatting that the operating system performs.

With your server up and running, it's a good idea to let it burn in for 48 hours, monitoring your system temperature, which can usually be done using capabilities built into modern motherboards. In particular, you should monitor your CPU temperature to make sure it is not overheating.
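
If the server runs Linux, one way to keep an eye on temperatures during that burn-in period is a small polling script along the lines of the sketch below. It assumes the kernel exposes sensor readings under /sys/class/thermal; boards that report temperatures only through vendor utilities or the BMC will need a different approach, such as the IPMI query shown earlier in this section.

    # Minimal burn-in temperature logger for a Linux server.
    # Assumes sensors are exposed under /sys/class/thermal (not all boards do).

    import glob
    import time

    POLL_SECONDS = 60      # sampling interval
    BURN_IN_HOURS = 48     # length of the burn-in run

    def read_temperatures():
        """Return a dict mapping thermal zone names to temperatures in Celsius."""
        temps = {}
        for zone in glob.glob("/sys/class/thermal/thermal_zone*"):
            try:
                with open(zone + "/type") as f:
                    name = f.read().strip()
                with open(zone + "/temp") as f:
                    temps[name] = int(f.read().strip()) / 1000.0  # millidegrees to C
            except (OSError, ValueError):
                continue  # unreadable zone; skip it
        return temps

    end_time = time.time() + BURN_IN_HOURS * 3600
    while time.time() < end_time:
        stamp = time.strftime("%Y-%m-%d %H:%M:%S")
        readings = read_temperatures()
        line = ", ".join(f"{name}: {c:.1f} C" for name, c in sorted(readings.items()))
        print(f"{stamp}  {line or 'no thermal zones found'}")
        time.sleep(POLL_SECONDS)

Redirect the output to a file and review it after the burn-in; a slow upward creep in CPU temperature usually points to inadequate airflow or a poorly seated heat sink rather than a defective processor.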



