5.2. Bus Performance


Communication between the major subsystems occurs by reading and writing data across the buses in the system. Buses are built to well-defined standards throughout the system so that data moves between subsystems efficiently and bottlenecks are avoided or minimized.

Devices that require more than two clock cycles to respond to a processor request slow down the system for as long as the processor is accessing them.

5.2.1 Bus Speed

Bus speed, also called frequency, is the number of bus cycles that occur per second. It is measured in hertz (Hz).

Just as the bus width differs between buses, the bus speed can also be different between buses within the same system. Each subsystem in a server has its own operating frequency and communicates based on timing rules.

To communicate and transfer data between each component, the timing among components must be synchronized to a system heartbeat. The buses must also adhere to this timing rule and must be able to synchronize to the bus clock. The bus clock is controlled by the chipset and is a sub-multiple of the system clock frequency.

5.2.2 Bus Cycles

A bus cycle is the transfer process a component such as a processor, chipset, or bus master device uses to communicate or move data across a bus.

The bus activity required to transfer information consists of two steps: first, a sequence of control signals and addresses on the address bus; second, the movement of data on the data bus.

The basic types of bus cycles include memory reads, memory writes, I/O reads, and I/O writes.

All data transfers occur as a result of one or more bus cycles. A bus cycle is performed each time the processor or bus master needs code or data from memory or an I/O device.

In a standard bus cycle, the processor sends an address to memory on one clock tick and the memory returns the data from that address on a second clock tick. Each bus cycle takes a total of two clock ticks.

Memory can take longer than one clock tick to place the data on the data bus. When this happens, the processor must wait for the data; each extra clock tick spent waiting is known as a wait state.
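
The cost of a wait state can be seen with a short calculation. The following Python sketch is illustrative only; it assumes the two-tick bus cycle described above, assumes that each wait state adds one extra clock tick, and borrows the 400 MHz bus speed used in the example in the next section.

    # Clock ticks consumed by one standard bus cycle:
    # one tick for the address phase, one tick for the data phase,
    # plus one extra tick for every wait state inserted by slow memory.
    def bus_cycle_ticks(wait_states=0):
        address_ticks = 1
        data_ticks = 1
        return address_ticks + data_ticks + wait_states

    # On a 400 MHz bus, each clock tick lasts 2.5 nanoseconds.
    tick_ns = 1_000 / 400

    print(bus_cycle_ticks(0) * tick_ns)   # zero wait states: 5.0 ns per bus cycle
    print(bus_cycle_ticks(1) * tick_ns)   # one wait state:   7.5 ns per bus cycle

At 400 MHz, a single wait state stretches every bus cycle from 5 ns to 7.5 ns, which is the kind of slowdown the transfer rate calculations in the next section quantify.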

5.2.3 Maximum Transfer Rate

The amount of data that can flow across a bus during a period of time is called the maximum transfer rate. Maximum transfer rate is one way of measuring the performance of a server.

The transfer rate is given in bytes transferred per second; with the speed expressed in MHz, the formula yields megabytes per second. It is determined by this formula: maximum transfer rate = [speed (MHz) x width (bytes)] / number of clock cycles per transfer.

Figure 5-2 shows the maximum transfer rates for a 400 MHz bus with a width of 4 bytes (32 bits), given a single wait state and a zero wait state.

Figure 5-2. Maximum transfer rate calculations.
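
Working the formula for those parameters gives the figures directly. The sketch below is a minimal check in Python, assuming that a zero wait-state cycle takes two clock cycles per transfer and that a single wait state adds one more; the function name is illustrative.

    # Maximum transfer rate = [speed (MHz) x width (bytes)] / clock cycles per transfer
    def max_transfer_rate_mb_s(speed_mhz, width_bytes, cycles_per_transfer):
        return (speed_mhz * width_bytes) / cycles_per_transfer

    # 400 MHz bus with a 4-byte (32-bit) data path
    print(max_transfer_rate_mb_s(400, 4, 2))   # zero wait states: 800.0 MB/s
    print(max_transfer_rate_mb_s(400, 4, 3))   # one wait state:   ~533.3 MB/s

If Figure 5-2 uses the same two-clock and three-clock assumptions, these are the values it shows: the single wait state adds a third clock to every transfer and cuts the rate by a third.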


As indicated by the maximum transfer rate formula, you can increase performance either by increasing the numerator (speed x width) or by decreasing the denominator (clock cycles per transfer), as shown in Figure 5-3.

Figure 5-3. Ways to increase performance.
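
One way to read Figure 5-3 is to re-evaluate the transfer rate formula after changing one term at a time. The short sketch below is illustrative only; the alternative speed and width values are assumed, starting from the 400 MHz, 4-byte, two-cycle case used in Figure 5-2.

    # Baseline: 400 MHz x 4 bytes / 2 clock cycles per transfer
    def rate(speed_mhz, width_bytes, cycles_per_transfer):
        return (speed_mhz * width_bytes) / cycles_per_transfer

    print(rate(400, 4, 2))     # baseline:                       800.0 MB/s
    print(rate(533, 4, 2))     # increase the numerator (speed): 1066.0 MB/s
    print(rate(400, 8, 2))     # increase the numerator (width): 1600.0 MB/s

    # Decrease the denominator: a 2-1-1-1 burst spends 5 clocks (including the
    # address clock) on 4 transfers, or 1.25 clocks per transfer.
    print(rate(400, 4, 1.25))  # 1280.0 MB/s

Each change corresponds to one of the methods listed in the following subsections.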


5.2.3.1 INCREASING THE NUMERATOR

You can increase the numerator in the maximum transfer rate formula by any of the following methods:

  • Increase the speed at which bus cycles take place. This usually means increasing the clock speed of the processor.

  • Increase the speed at which devices, especially system memory, can communicate with the processor. This involves implementing high-speed memory or adding a cache.

  • Increase the width of the data bus to increase the amount of information passed in a single bus cycle.

  • Implement modified bus cycles, such as a burst cycle.

  • Add concurrent processes, such as dual independent buses or multiprocessing.

5.2.3.2 INCREASING DEVICE SPEED

As mentioned previously, one way to increase performance is to increase the speed of the devices in the system. These types of changes are common. As new processors are introduced, for example, they often feature support for higher bus speeds. It is important to remember that both the processor and the bus must be able to support the higher rate. Putting a faster processor on a slow bus will not increase performance.

Another way to increase performance is to widen the bus. A wider bus means there are more electrical traces on which data can travel.

5.2.3.3 IMPLEMENTING BURST CYCLES

Beginning with the 80486 processor family, all Intel processors support burst cycles for any data request that requires more than one data cycle.

Note

Address time is not usually included in this transfer rate.


In a standard zero wait-state bus cycle, the address is sent on one clock tick and the data is sent on the next. That means data is transferred on every other clock tick, as shown in Figure 5-4.

Figure 5-4. Zero wait state.


In a zero wait-state burst cycle, the first address is sent and then four data transfers occur one after another, as illustrated in Figure 5-5. A second address is sent at the same time as the fourth data transfer. On the very next clock, the data at the second address is transferred, followed by three more transfers. As a result, after the first address is sent, data is transferred on every clock cycle.

Figure 5-5. Burst transfer.


At a minimum, a burst transfer requires two clock cycles for the first data transfer (A1 + D1) and can be followed by up to three subsequent data transfers (D2 + D3 + D4). Throughput during a burst cycle might vary depending on many factors.
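
The saving can be counted clock by clock. The sketch below is illustrative, assuming zero wait states, the 2-1-1-1 pattern described above, and the 400 MHz, 4-byte bus from Figure 5-2.

    # Standard cycles: each 4-byte transfer costs 2 clocks (address + data).
    standard_clocks = 4 * 2          # 8 clocks to move 16 bytes

    # Burst cycle: 2 clocks for A1 + D1, then D2, D3, D4 on consecutive clocks.
    burst_clocks = 2 + 1 + 1 + 1     # 5 clocks to move 16 bytes

    # On a 400 MHz bus, each clock lasts 2.5 ns.
    tick_ns = 1_000 / 400
    print(standard_clocks * tick_ns)   # 20.0 ns for 16 bytes
    print(burst_clocks * tick_ns)      # 12.5 ns for 16 bytes

The burst moves the same 16 bytes in five clocks instead of eight, because data flows on every clock once the first address has been sent.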

5.2.3.3.1 Burst Transfer Rate

Calculate the burst transfer rate by dividing the total amount of data transferred in the burst by the total data time, as shown in Figure 5-6.

Figure 5-6. Calculating the burst transfer rate.


Note

The address time is not included in the transfer rate calculated by this formula.
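
Applied to the zero wait-state burst above on the Figure 5-2 bus (400 MHz, 4 bytes wide), the calculation works out as follows. The sketch is illustrative, the variable names are assumed, and the address clock is excluded, as the note states.

    # Burst transfer rate = total bytes moved / total data time
    # (address time excluded, per the note above).
    speed_mhz = 400
    width_bytes = 4
    data_clocks = 4                          # D1..D4, one clock each with zero wait states

    total_bytes = 4 * width_bytes            # 16 bytes per burst
    data_time_us = data_clocks / speed_mhz   # clocks / MHz = microseconds

    print(total_bytes / data_time_us)        # 1600.0 bytes per microsecond = 1,600 MB/s

Under these assumptions the burst rate is double the 800 MB/s of the standard zero wait-state case, because the address time is not counted and data moves on every clock.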


5.2.3.4 IMPLEMENTING BUS MASTERING

A bus master is a device connected to the bus that communicates directly with other devices on the bus without going through the processor.

Bus mastering is the protocol used when the processor gives control of the bus to an I/O device. Bus mastering allows the I/O device to transfer data directly to memory. The processor does not have to act as a mediator on the bus, which frees the processor to perform other tasks and increases the speed of the data transfers.

The components that can serve as bus masters are the processor, the DMA controller, the memory refresh logic, and EISA/PCI bus master cards.

5.2.3.5 IMPLEMENTING BUS ARBITRATION

If multiple bus master devices request the bus simultaneously, a bus controller acts as the arbitrator. The bus controller determines which component gets control of the bus at any given time.

Each bus has its own bus controller. For system buses, such as the memory bus and local I/O buses, the controllers are located in the chipset. For expansion devices, the controller can be located in another application-specific integrated circuit (ASIC) on the system board or located on an expansion card.
