Server Motherboard Components


As the previous section indicates, the power supply is a very important part of server design. Power supply designs have been developed in conjunction with motherboard designs to make sure there is enough power to operate the many components built in to, or plugged in to, modern server motherboards.

Most modern server motherboards have at least the following major components on them:

  • Processor socket(s)/slot(s)

  • Chipset (North Bridge/South Bridge or memory and I/O controller hubs)

  • Super I/O chip

  • ROM BIOS (Flash ROM/firmware hub)

  • DIMM/RIMM (RAM memory) sockets

  • PCI/PCI-X/PCI-Express bus slots

  • CPU voltage regulator

  • Battery

  • Integrated PCI or low-end AGP video

  • Integrated Fast (10/100) or Gigabit (10/100/1000) Ethernet

Many boards also have Serial ATA (SATA) or SCSI RAID interfaces onboard.

These standard components are discussed in the following sections.

Processor Sockets and Slots

Server motherboards often support two or more processors. Depending on the server and processor type, the processor might be installed in a ZIF socket for easy insertion and removal, a single-processor cartridge, or a proprietary multiprocessor cartridge. Most recent servers use socketed processors, but a few high-end servers use proprietary processor cartridges. For example, the Hewlett-Packard Integrity rx and Superdome series can use either standard Intel Itanium 2 processors or proprietary mx2 two-processor cartridges.

Typically, systems based on the ATX, BTX, and SSI motherboard form factors support up to four processors. Most systems with more than four processors use proprietary motherboard designs. Note that servers that use proprietary motherboards (primarily four-way or larger) often use proprietary processor boards. The processor board might provide enhanced cooling features not present with standard processor sockets. See the vendor's instructions for adding or removing processors in servers that use processor boards or cartridges.

See "Processor Socket and Slot Types," p. 62.


Single Versus Dual-/Multiple-Processor Sockets

One of the factors that influences what motherboard to use in a server is the number of processors you want it to support initially and in the future. A standard ATX motherboard can support up to two socketed processors. However, if more processors are needed, larger form factors must be used. Table 4.9 lists the maximum number of processors supported by industry-standard form factors.

Table 4.9. The Number of Processors Supported by Motherboard Form Factors

Form Factor      Maximum Number of Processors
---------------  ----------------------------
ATX              2
Extended ATX     2
WTX              2
BTX              2
PICMG PCI-ISA    2
SSI EEB          4[1]
SSI CEB          2
SSI MEB          4[1]

[1] Some vendors support up to eight processors in this form factor with proprietary daughtercards.

Note that the number of processors supported by a blade server is the number of processors per blade multiplied by the number of server blades per chassis. Thus, if a chassis can support 10 server blades, and each server blade can hold two processors, the blade server chassis contains up to 20 processors.
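The blade arithmetic above is simple enough to sketch directly (the 10-blade, two-socket chassis is the hypothetical example from the text):

```python
# Hypothetical blade chassis from the text: 10 server blades,
# each holding 2 processors
blades_per_chassis = 10
processors_per_blade = 2

total_processors = blades_per_chassis * processors_per_blade
print(total_processors)  # 20
```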

Chipsets and Super I/O Chips

Although several server motherboards might have the same form factor, the processors, memory types, and other features they support are controlled by the chipset used by the motherboard designer. In essence, the chipset is the motherboard; any two boards with the same chipset are functionally identical unless the vendor has added features to those provided by the chipset or removed support for certain chipset features.

The chipset contains the processor bus interface (called the front-side bus [FSB]), memory controllers, bus controllers, I/O controllers, and more. Virtually all the core circuitry of the motherboard is contained within the chipset. Because the chipset controls all the major features of the system, including the processor, we recommend selecting the chipset as the first step of the server selection process.

Chipsets designed with server use in mind vary from desktop PC chipsets in several ways, including the following:

  • Support for memory-reliability technologies such as error-correcting-code (ECC) and registered SDRAM and DDR RAM

  • Support for PCI-X expansion slots (a faster, wider version of PCI that supports 64-bit operation and speeds up to 133MHz); PCI-X is backward compatible with PCI

  • Support for two-way and higher processor counts (when feasible); the ability to run multiple-processor configurations is affected by the processor used. For example, the Pentium 4, Pentium D, and AMD Opteron 1xx series support single-processor configurations only, and the Xeon DP and Opteron 2xx series support dual-processor configurations. The Xeon MP and Opteron 8xx support higher numbers of processors.

Major chipset vendors for servers include Intel, ServerWorks, AMD, and nVidia. See Chapter 3, "Server Chipsets," for more information.

The third major chip on many server motherboards is called the Super I/O chip. This is a chip that integrates legacy devices such as floppy controllers and serial, parallel, PS/2 mouse, and keyboard ports. Increasingly, the South Bridge chip incorporates the Super I/O chip's functions.

Memory: SIMM/DIMM/RIMM/SDRAM/DDR

You should consider the type and speed of memory used by a particular server or server-class motherboard when you select or build a server. Server memory differs from desktop PC memory in several ways:

  • Many servers still use PC100 or PC133 SDRAM rather than the newer DDR memory. Because its use is no longer widespread and less of it is produced, SDRAM memory is actually now more expensive than DDR memory. If you are upgrading multiple servers, the additional cost could be significant.

  • Most servers use registered memory rather than the unbuffered memory used by desktop PCs. Registered memory contains a small buffer chip to improve signal strength, which is very important for memory reliability in large modules.

  • Virtually all servers include ECC features in the chipset and BIOS. ECC uses extra check bits stored alongside each memory word to correct single-bit memory errors and to detect and report multibit errors. Many high-end servers include additional memory-reliability features such as hot-swap memory and memory scrubbing.

Registered ECC memory modules are more expensive than the normal unbuffered, non-parity memory used by desktop PCs, but the extra reliability provided by these features makes the extra cost worthwhile.
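In a server, ECC is implemented in chipset hardware, typically as a SECDED code over each 64-bit memory word with 8 check bits. As a rough software illustration of the underlying idea only (not any particular chipset's implementation), here is a minimal Hamming(7,4) encoder/corrector that fixes any single flipped bit:

```python
def hamming74_encode(data):
    """Encode 4 data bits into a 7-bit Hamming codeword.

    Parity bits sit at positions 1, 2, and 4 (1-based); data bits
    at positions 3, 5, 6, and 7 -- the classic Hamming layout.
    """
    b = [0] * 8                    # index 0 unused; positions 1..7
    b[3], b[5], b[6], b[7] = data
    b[1] = b[3] ^ b[5] ^ b[7]      # parity over positions with bit 0 set
    b[2] = b[3] ^ b[6] ^ b[7]      # parity over positions with bit 1 set
    b[4] = b[5] ^ b[6] ^ b[7]      # parity over positions with bit 2 set
    return b[1:]

def hamming74_correct(codeword):
    """Return (data_bits, error_position); position 0 means no error."""
    b = [0] + list(codeword)
    syndrome = 0
    for p in (1, 2, 4):            # recompute each parity check
        parity = 0
        for i in range(1, 8):
            if i & p:
                parity ^= b[i]
        if parity:
            syndrome += p
    if syndrome:                   # the syndrome points at the bad bit
        b[syndrome] ^= 1
    return [b[3], b[5], b[6], b[7]], syndrome

data = [1, 0, 1, 1]
code = hamming74_encode(data)
code[4] ^= 1                       # simulate a single-bit memory error
fixed, pos = hamming74_correct(code)
print(fixed == data, pos)          # True 5
```

A real ECC DIMM does the equivalent of this for every memory access, transparently, which is why the single-bit errors that would silently corrupt data on a desktop PC are corrected and logged on a server.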

The most common types of memory used by servers include the following:

  • PC100 or PC133 registered SDRAM with ECC

  • Various speeds of registered DDR SDRAM with ECC

  • Various speeds of registered DDR2 SDRAM with ECC

Older servers might use one of these types of memory:

  • PC66 registered SDRAM with ECC

  • Rambus RDRAM with ECC

  • EDO DRAM with ECC

For more information about memory types, see Chapter 5, "Memory."

Expansion Slots: ISA, PCI, AGP, and Others

Although today's servers have more integrated devices than ever before, the number and type(s) of expansion slots available is still an important factor to consider when building or buying a server.

The most common expansion slot types found on recent servers include the following:

  • PCI Although desktop computers normally use only the 32-bit/33MHz version of PCI, servers often use 64-bit/66MHz slots (which are backward compatible with 32-bit/33MHz versions).

  • PCI-X PCI-X runs at much faster speeds than PCI, making it an excellent choice for high-performance network adapters or RAID host adapters. PCI-X slots are backward compatible with PCI cards.

If you support or find yourself working on older servers, particularly those used in industrial applications, you might also encounter systems that still support ISA and EISA slots. On the other end of the spectrum, PCI-Express is an emerging technology found in some of the latest servers. Eventually you can expect it to replace PCI and PCI-X.

Finally, there's AGP. Although AGP is a very important bus type for desktop PCs, it is seldom found in servers except for low-end systems also suitable for workstation use.

The following sections discuss these slot designs in greater detail.

ISA Slots

Industry Standard Architecture (ISA) is the bus architecture that was introduced as an 8-bit bus with the original IBM PC in 1981; it was later expanded to 16 bits with the IBM PC/AT in 1984. ISA is the basis of the modern PC and was the primary architecture used in the vast majority of PC systems until the late 1990s. It might seem amazing that such a presumably antiquated architecture was used for so long, but it provided reliability, affordability, and compatibility, plus this old bus is still faster than many of the peripherals connected to it.

Note

The ISA bus hasn't been seen in either standard servers or desktop PCs for several years. However, it continues to be used in industrial computer (PICMG) designs. That said, it is expected to eventually fade away from those systems as well.


Two versions of the ISA bus exist, based on the number of data bits that can be transferred on the bus at a time. The older version is an 8-bit bus; the newer version is a 16-bit bus. The original 8-bit version ran at 4.77MHz in the PC and XT, and the 16-bit version used in the AT ran at 6MHz and then 8MHz. Later, the industry as a whole agreed on an 8.33MHz maximum standard speed for 8-/16-bit versions of the ISA bus for backward compatibility. Some systems have the capability to run the ISA bus faster than this, but some adapter cards do not function properly at higher speeds. ISA data transfers require anywhere from two to eight cycles. Therefore, the theoretical maximum data rate of the ISA bus is about 8MBps, as the following formula shows:

8.33MHz x 2 bytes (16 bits) ÷ 2 cycles per transfer = 8.33MBps

The bandwidth of the 8-bit bus would be half this figure (4.17MBps). Remember, however, that these figures are theoretical maximums. Because of I/O bus protocols, the effective bandwidth is much lower, typically by almost half. Even so, at about 8MBps, the ISA bus is still faster than many of the peripherals connected to it, such as serial ports, parallel ports, floppy controllers, keyboard controllers, and so on.
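The ISA figures follow directly from clock rate times transfer width divided by cycles per transfer; a quick sketch:

```python
def bus_bandwidth_mbps(clock_mhz, width_bits, cycles_per_transfer=1):
    """Theoretical peak bandwidth: MHz x bytes per transfer / cycles."""
    return clock_mhz * (width_bits / 8) / cycles_per_transfer

# 16-bit ISA: 8.33MHz clock, 2 bytes per transfer, 2 cycles per transfer
print(round(bus_bandwidth_mbps(8.33, 16, 2), 2))  # 8.33
# 8-bit ISA: half the width, so half the bandwidth
print(round(bus_bandwidth_mbps(8.33, 8, 2), 2))   # 4.17
```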

Figure 4.24 describes the pinouts for the full 16-bit ISA expansion slot (8-bit ISA cards plug in to the top portion of the slot only), and Figure 4.25 shows how the additional pins are oriented in the expansion slot.

The dimensions of a typical AT expansion board are as follows:

  • 4.8 inches (121.92mm) high

  • 13.13 inches (333.5mm) long

  • 0.5 inches (12.7mm) wide

Figure 4.24. Pinouts for the 16-bit ISA bus.


Figure 4.25. The ISA 16-bit bus connector.


The EISA Bus

The EISA standard was developed primarily by Compaq in 1988. EISA was the company's attempt at taking over future development of the PC bus from IBM. Compaq formed the EISA committee, a nonprofit organization designed specifically to control development of the EISA bus, and provided the bus designs freely to other vendors. Unfortunately, very few EISA adapters were ever developed; those that were centered mainly on disk array controllers and server-type network cards.

The EISA bus was essentially a 32-bit version of ISA that provided full backward compatibility with 8-bit or 16-bit ISA cards. EISA cards use automatic configuration via software.

The EISA bus added 90 new connections (55 new signals plus grounds) without increasing the physical connector size of the 16-bit ISA bus. At first glance, the 32-bit EISA slot looks a lot like the 16-bit ISA slot. However, the EISA adapter has two rows of stacked contacts. The first row is the same type used in 16-bit ISA cards; the other, thinner row extends from the 16-bit connectors. Therefore, ISA cards can still be used in EISA bus slots. Although this compatibility was not enough to ensure the popularity of EISA buses, it is a feature that was carried over into the desktop VL-Bus standard that followed. The physical specifications of an EISA card are as follows:

  • 5 inches (127mm) high

  • 13.13 inches (333.5mm) long

  • 0.5 inches (12.7mm) wide

The EISA bus can handle up to 32 bits of data at an 8.33MHz cycle rate. Most data transfers require a minimum of two cycles, although faster cycle rates are possible if an adapter card provides tight timing specifications. The maximum bandwidth on the bus is 33MBps, as the following formula shows:

8.33MHz x 4 bytes (32 bits) = 33MBps
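Plugging the numbers in as a one-line check:

```python
# EISA peak: 8.33MHz clock, 4 bytes (32 bits) per transfer, best case
# of one transfer per cycle with tight timing
eisa_mbps = 8.33 * (32 / 8)
print(round(eisa_mbps))  # 33
```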

Figure 4.26 describes the pinouts for the EISA bus. Figure 4.27 shows the locations of the pins; note that some pins are offset to allow the EISA slot to accept ISA cards. Figure 4.28 shows the card connector for the EISA expansion slot.

Figure 4.26. Pinouts for the EISA bus.


Figure 4.27. Pin locations inside the EISA bus connector.


Figure 4.28. The EISA bus connector.


PCI, PCI-X, and PCI-Express

In early 1992, recognizing the need to overcome weaknesses in the ISA and EISA buses, Intel spearheaded the creation of another industry group: the PCI Special Interest Group (PCI-SIG). This group was formed with the same goals as the VESA group in relation to the PC bus.

The PCI bus specification was released in June 1992 as version 1.0 and since then, it has undergone several upgrades. Table 4.10 shows the various releases of PCI.

Table 4.10. PCI Specifications

PCI Specification  Release Date  Major Change
-----------------  ------------  ------------
PCI 1.0            June 1992     Original 32/64-bit specification
PCI 2.0            April 1993    Defined connectors and expansion boards
PCI 2.1            June 1995     66MHz operation, transaction ordering, latency changes
PCI 2.2            Jan. 1999     Power management, mechanical clarifications
PCI-X 1.0          Sept. 1999    133MHz operation, addendum to 2.2
Mini-PCI           Nov. 1999     Small form factor boards, addendum to 2.2
PCI 2.3            March 2002    3.3V signaling, low-profile add-in cards
PCI-X 2.0          July 2002     266MHz and 533MHz operation, supports subdivision of 64-bit data bus into 32-bit or 16-bit segments for use by multiple devices, 3.3V/1.5V signaling
PCI-Express 1.0    July 2002     2.5Gbps per lane per direction, using 0.8V signaling, resulting in 250MBps per lane; designed to eventually replace PCI 2.x in PC and server systems

Servers typically offer a mixture of PCI and PCI-X or PCI-X and PCI-Express slots. The specifications for different types of PCI slots are listed in Table 4.11.

Table 4.11. PCI Bus Types

PCI Bus Type         Bus Width (Bits)  Bus Speed (MHz)  Data Cycles per Clock  Bandwidth (MBps)
-------------------  ----------------  ---------------  ---------------------  ----------------
PCI                  32                33               1                      133
PCI 66MHz            32                66               1                      266
PCI 64-bit           64                33               1                      266
PCI 66MHz/64-bit     64                66               1                      533
PCI-X 64             64[1]             66               1                      533
PCI-X 100            64[1]             100              1                      800
PCI-X 133            64[1]             133              1                      1,066
PCI-X 266            64[1]             133              2                      2,132
PCI-X 533            64[1]             133              4                      4,266
PCI-Express x1[2]    1                 2,500            0.8                    250
PCI-Express x16[2]   16                2,500            0.8                    4,000
PCI-Express x32[2]   32                2,500            0.8                    8,000


[1] Bus width on PCI-X devices can be shared by multiple 32-bit or 16-bit devices.

[2] PCI-Express uses 8b/10b encoding, which transfers 8 bits for every 10 bits sent and can transfer 1-32 bits at a time, depending on how many lanes are in the implementation.
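The bandwidth column of Table 4.11 is simply width × clock × data cycles per clock (the quoted clocks of 33, 66, and 133MHz are actually 33.33, 66.66, and 133.33MHz, which is why the published figures read 133, 533, and so on rather than the slightly lower values below). A sketch that reproduces a few rows, using the 0.8 factor for PCI-Express 8b/10b overhead:

```python
def pci_bandwidth_mbps(width_bits, clock_mhz, cycles_per_clock=1):
    """Peak bandwidth in MBps: bytes per clock x clock rate x transfers per clock."""
    return width_bits / 8 * clock_mhz * cycles_per_clock

print(round(pci_bandwidth_mbps(32, 33)))        # 132 (quoted as 133)
print(round(pci_bandwidth_mbps(64, 66)))        # 528 (quoted as 533)
print(round(pci_bandwidth_mbps(64, 133, 4)))    # PCI-X 533: 4256 (quoted as 4,266)

# PCI-Express x1: 2,500MHz serial clock, 1-bit lane, 0.8 efficiency (8b/10b)
print(round(pci_bandwidth_mbps(1, 2500, 0.8)))  # 250
```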

Aiding performance is the fact that the PCI bus can operate concurrently with the processor bus; it does not supplant it. The CPU can be processing data in an external cache while the PCI bus is busy transferring information between other parts of the system; this is a major design benefit of the PCI bus.

The PCI specification identifies three board configurations, each designed for a specific type of system with specific power requirements; each specification has a 32-bit version and a longer 64-bit version. The 5V specification is for stationary computer systems (using PCI 2.2 or earlier versions), the 3.3V specification is for portable systems (also supported by PCI 2.3), and the universal specification is for motherboards and cards that work in either type of system. 64-bit versions of the 5V and universal PCI slots are found primarily on server motherboards. The PCI-X 2.0 specifications for 266 and 533 versions support 3.3V and 1.5V signaling; this corresponds to PCI version 2.3, which supports 3.3V signaling. PCI-X slots also support PCI cards.

Unlike older card designs, such as ISA and EISA, PCI does not use jumper blocks or DIP switches for configuration. Instead, software or a Plug and Play (PnP) BIOS does the configuration. This was the model for the Intel PnP specification. True PnP systems are capable of automatically configuring the adapters.

PCI-SIG developed PCI-Express during 2001 and 2002, based on the 3GIO draft high-speed bus specification originally developed by the Arapahoe Work Group (a work group led primarily by Intel). The initial PCI-Express 1.0 specification was released in 2002. However, the first systems to use PCI-Express slots did not appear until 2004.

The key features of PCI-Express are as follows:

  • Compatibility with existing PCI enumeration and software device drivers

  • Physical connection over copper, optical, or other physical media to allow for future encoding schemes

  • Maximum bandwidth per pin, which allows small form factors, reduced cost, simpler board designs and routing, and reduced signal integrity issues

  • An embedded clocking scheme, which enables easy frequency (speed) changes compared to synchronous clocking

  • Bandwidth (throughput) that can increase easily with frequency and width (lane) increases

  • Low latency, suitable for applications that require isochronous (time-sensitive) data delivery, such as streaming video

  • Hot-plugging and hot-swapping capabilities

  • Power management capabilities

PCI-Express, like other high-speed interfaces, such as SATA, USB 2.0, and IEEE 1394 (FireWire or i.LINK), uses a serial bus for signaling. A serial bus sends 1 bit at a time over a single wire at very high speeds. Serial bus signaling avoids the problems caused by parallel buses such as PCI and PCI-X, which must synchronize multiple bits sent simultaneously and may have problems with jitter or propagation delays.

PCI-Express is a very fast serial bus design that is backward compatible with current PCI parallel bus software drivers and controls. In PCI-Express, data is sent full-duplex (that is, via simultaneously operating one-way paths) over two pairs of differentially signaled wires called a lane. Each lane allows for about 250MBps throughput in each direction initially, and the design allows for scaling from 1 to 2, 4, 8, 16, or 32 lanes. The most common configurations in PCs and servers are x1 (one lane), x4, and x16.
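The per-lane figure follows directly from the signaling rate and the 8b/10b encoding described later in this section: 2.5Gbps per direction, of which 8 of every 10 bits are payload. A quick sketch of how that scales with lane count:

```python
def pcie_lane_mbps(gbps_per_lane=2.5, encoding_efficiency=0.8):
    """Payload throughput of one PCI-Express lane, one direction, in MBps."""
    return gbps_per_lane * 1000 / 8 * encoding_efficiency

for lanes in (1, 4, 8, 16, 32):
    print(f"x{lanes}: {lanes * pcie_lane_mbps():.0f}MBps each way")
# x1 gives 250MBps, x8 gives 2000MBps, x32 gives 8000MBps
```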

For example, a high-bandwidth configuration with eight lanes allowing 8 bits to be sent in each direction simultaneously would allow up to 2000MBps bandwidth (each way) and use a total of only 40 pins (32 for the differential data pairs and 8 for control). Future increases in signaling speed could increase that to 8000MBps each way over the same 40 pins. This compares to PCI, which has only 133MBps bandwidth (one way at a time) and requires more than 100 pins to carry the signals. For expansion cards, PCI-Express takes on the physical format of a smaller connector that appears adjacent to any existing PCI slots on the motherboard. Figure 4.29 shows how PCI-Express x1 and x16 slots compare to 33MHz PCI and 133MHz PCI-X expansion slots.

Figure 4.29. PCI-X, PCI-Express x16, PCI, and PCI-Express x1 slots compared to each other.


PCI-Express uses an IBM-designed 8-bit-to-10-bit (8b/10b) encoding scheme, which allows for self-clocked signals that easily allow future increases in frequency. The starting frequency is 2.5GHz, and the specification allows increasing up to 10GHz in the future, which is about the limit of copper connections. By combining frequency increases with the capability to use up to 32 lanes, PCI-Express will be capable of supporting future bandwidths up to 32GBps.

PCI-Express is designed to augment and eventually replace many of the buses currently used in PCs and servers. In addition, it will replace video interfaces such as AGP and act as a mezzanine bus to attach other interfaces, such as SATA, USB 2.0, IEEE 1394b, Gigabit Ethernet, and more. Currently, PCI-Express is used alongside PCI-X and PCI slots, as Figure 4.29 suggests.

Because PCI-Express can be implemented over cables as well as onboard, it can be used to create systems constructed with remote "bricks" that contain the bulk of the computing power. Imagine the motherboard, processor, and RAM in one small box, hidden under a table, with the video, disk drives, and I/O ports in another box, sitting out on a table within easy reach. This will enable a variety of flexible PC form factors to be developed in the future without compromising performance.

For more information on PCI-Express, you can consult the PCI-SIG website (www.pcisig.org).

The AGP Bus

The AGP bus was created in 1996 by Intel specifically for high-performance graphics and video support. Although the PnP BIOS treats AGP like a PCI slot in terms of IRQ and other hardware resources, it uses different connectors and is otherwise separate from PCI.

Table 4.12 lists the various versions of AGP that have been developed for PCs. Although you might find AGP slots in some servers, systems with AGP slots are primarily workstation or PC systems. Currently, servers don't need anything other than basic 2D GUI graphics for system management, and therefore most servers use PCI-based graphics, such as the ATI Rage XL, on the motherboard.

Table 4.12. AGP Versions and Specifications

AGP Version  Speed   Voltage  Notes
-----------  ------  -------  -----
1.0          1x, 2x  3.3V     Not compatible with AGP 1.5V slots
2.0          4x      1.5V     Systems with universal AGP sockets can accept AGP 1x/2x cards
3.0          8x      1.5V     Systems with universal AGP sockets can accept AGP 1x/2x cards

We recommend using AGP video in a server only if you cannot use PCI video, for example, if you run out of PCI slots and your server does not incorporate onboard video. If you need to use AGP video, keep in mind the differences between AGP slots:

  • Most recent AGP video cards are designed to conform to the AGP 4X or AGP 8X specification, each of which runs on only 1.5 volts.

  • Most older motherboards with AGP 2X slots are designed to accept only 3.3V cards.

If you plug a 1.5V card in to a 3.3V slot, both the card and motherboard could be damaged, so special keys have been incorporated into the AGP specification to prevent such disasters. Normally, the slots and cards are keyed such that 1.5V cards fit only in 1.5V sockets, and 3.3V cards fit only in 3.3V sockets. However, universal sockets do exist that accept either 1.5V or 3.3V cards. The keying for the AGP cards and connectors is dictated by the AGP standard, as shown in Figure 4.30.

Figure 4.30. AGP 4X/8X (1.5V) card and AGP 3.3V, universal, and 1.5V slots.


As you can see from Figure 4.30, AGP 4X or 8X (1.5V) cards fit only in 1.5V or universal (3.3V or 1.5V) slots. Due to the design of the connector and card keys, a 1.5V card cannot be inserted into a 3.3V slot.

Caution

Some AGP 4x/8x-compatible motherboards require you to use 1.5V AGP 4x/8x cards only; be sure to check compatibility between the motherboard and the AGP card you want to buy to avoid problems. Some AGP 4x/8x-compatible slots use the card retention mechanism shown in Figure 4.31. Note that AGP 1x/2x slots have a visible divider not present on the newer AGP 4x/8x slot. AGP 4x slots can also accept AGP 8x cards and vice versa.


Figure 4.31. AGP standard (1x/2x), AGP 4x, and AGP Pro slots compared. AGP 4x and AGP Pro can accept AGP 1x, 2x, and 4x cards. AGP 4x and AGP Pro slots can also accept AGP 8x cards.


Server/workstation motherboards with AGP slots might use a variation known as AGP Pro, now in version 1.1a. AGP Pro defines a slightly longer slot with additional power pins at each end to drive bigger and faster AGP cards that consume more than 25 watts of power, up to a maximum of 110 watts. AGP Pro slots are backward compatible, meaning that a standard AGP card can plug in, and a number of motherboard vendors have used AGP Pro slots rather than AGP 4x slots in their products. Because AGP Pro slots are longer, an AGP 1x/2x card can be incorrectly inserted into the slot, which could damage it, so some vendors supply a cover or an insert for the AGP Pro extension at the rear of the slot. This protective cover or insert should be removed only if you want to install an AGP Pro card.

Figure 4.31 compares the standard AGP 1x/2x, AGP 4x, and AGP Pro slots.

CPU Voltage Regulators

Because virtually all x86 processors run on a fraction of 3.3V DC (the lowest power level available from a server's power supply), server and PC motherboards alike feature voltage regulators. Occasionally the voltage regulator is built in to a removable daughtercard, but in most cases, the voltage regulator is built in to the motherboard. The voltage regulator uses a series of capacitors and coils and is usually located near the processor socket(s) or slot(s). Figure 4.32 shows the location of a typical voltage regulator on a typical server motherboard.

Figure 4.32. Voltage regulator components on a typical server motherboard.


If you plan to use an aftermarket active or passive heatsink to cool your processor(s), you should be sure to check the clearance between the processor socket(s) and the voltage regulator's components. Some voltage regulators are located so close to the processor socket(s) that extra-large heatsinks cannot be used.

BIOS Chips

The BIOS chip in any given server might take one of the following forms:

  • A socketed chip

  • A surface-mounted chip

The BIOS chip provides the interface between the operating system and the motherboard's onboard hardware. In virtually any server built in the past decade, the BIOS chip's contents can be updated through software. BIOSs that support software updating are known as flash BIOS chips.

BIOS settings are stored in a separate chip known as the CMOS chip or as the nonvolatile RAM/real-time clock (NVRAM/RTC) chip.

For more information on configuring the system BIOS, see "ROM BIOS," p. 286.


CMOS Battery

A small battery on the motherboard, often called the CMOS battery, maintains the BIOS settings stored in the CMOS and maintains clock timing when the server is turned off.

If the CMOS battery's voltage falls below minimum amounts, the system clock loses time, and eventually CMOS settings are lost. When that happens, the BIOS returns to its default settings the next time power is restored. Most recent servers use the same CR2032 3V lithium watch battery used by most recent desktop PCs. However, some older servers use other battery types, including rechargeable Ni-Cad battery packs or chips that incorporate battery and CMOS RAM/RTC. See your server or motherboard manual for details.

Figure 4.33 shows a BIOS chip and CMOS battery on a typical server motherboard.

Figure 4.33. A BIOS chip and CMOS battery on a typical server motherboard.





Upgrading and Repairing Servers
ISBN: 078972815X
Year: 2006
Pages: 240