A.7 IO Bus Architecture and IO Devices


PA-RISC supports a memory-mapped IO architecture. This simplified architecture allows connected devices to be controlled via LOAD and STORE instructions, as if we were accessing locations in memory (a short sketch of what this looks like to software follows the list below). There are two forms of IO supported: Direct IO and Direct Memory Access (DMA) IO. Direct IO is the simplest and least costly to the system because the device is controlled directly with LOAD and STORE instructions and generates no memory addresses of its own. DMA IO devices, on the other hand, control the transfer of data to or from a range of contiguous memory addresses and are more prevalent than Direct IO devices. PA-RISC organizes its devices by having them connected to one of a number of IO interfaces (known as Device Adapters), which in turn connect to the main memory bus via a bus adapter or bus converter. The main difference between the two is that a bus adapter is required where we have a non-native PA-RISC bus, while a bus converter is required where we are changing speed from one bus to another. To support a large and varied range of device adapters, PA-RISC supports an equally large and varied range of bus adapters and bus converters. Overall, IO throughput is governed by the throughput of all these interfaces. If our main memory bus does not have sufficient bandwidth, then no matter how fast our IO devices are, we won't be able to get the data into memory and, hence, to the CPU. The moral of the story is this:

  1. Have a good understanding of the IO requirements for each of your systems; if you are using 2Gb Fibre Channel interface cards, will you need the full performance capacity of all those cards all of the time? If so, will your IO bus be able to sustain that amount of throughput?

  2. Ensure that your hardware vendor supplies realistic performance figures for all devices involved in the IO path, all the way from the main memory bus to the IO bus. This normally means the sustained bandwidth as opposed to the peak bandwidth.

  3. Test, test, and test again.
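
Before moving on to a concrete server, it may help to see what memory-mapped Direct IO looks like from the software side. The following C fragment is a minimal sketch only; the register offsets, the STATUS_READY bit, and the caller-supplied mapped base address are hypothetical illustrations, not real PA-RISC or HP-UX definitions (on a real system, the operating system maps the device adapter's register space and the driver performs these accesses).

    /*
     * Minimal sketch of memory-mapped Direct IO: device registers are read
     * and written with ordinary LOADs and STOREs, just like memory.
     * The offsets and status bit below are hypothetical, not real PA-RISC
     * or HP-UX definitions.
     */
    #include <stdint.h>

    #define STATUS_REG   0x00          /* hypothetical register offsets   */
    #define DATA_REG     0x04
    #define STATUS_READY 0x01          /* hypothetical "data ready" bit   */

    /* 'base' is assumed to point at the device's mapped register block. */
    uint32_t read_device_word(volatile uint32_t *base)
    {
        /* LOAD from the status register until the device signals ready. */
        while ((base[STATUS_REG / 4] & STATUS_READY) == 0)
            ;
        /* A LOAD from the data register completes the transfer. */
        return base[DATA_REG / 4];
    }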

We don't have the space or time to go through every server in the HP family and discuss the merits of the IO architecture of each of them, so we generalize to an extent and look at one example. Figure A-19 shows a simplified diagram of an HP rp7400.

Figure A-19. Simplified block diagram of HP rp7400 server.

As mentioned previously, there is nothing we can do regarding the bandwidth of the CPU/system and memory bus. In the case of the rp7400, we can choose the CPU and the amount of memory we install. We can also choose which interface cards (Device Adapters) to install. In an rp7400, as is the case with all new HP servers, the underlying IO architecture is PCI. HP currently supports PCI Turbo, PCI Twin-Turbo, and PCI-X.

PCI Turbo (PCI 2X): This is a 5V, 33MHz interface card. With a 64-bit data path, a PCI Turbo card is able to supply approximately 250MB/s transfer rates. These cards can be inserted only in a PCI 2X slot due to the keying on the bottom rail of the card itself.

PCI Twin-Turbo (PCI 4X): This is a 3.5V, 66MHz interface card. With a 64-bit data path, a PCI Twin-Turbo card is able to supply approximately 533MB/s transfer rates. These cards can be inserted only in a PCI 4X slot due to the keying on the bottom rail of the card itself.

PCI-X: These cards run at either 66MHz (PCIX-66) or 133MHz (PCIX-133), and all cards use 3.5V keying; 5V cards are not supported. The throughput for these cards is 533MB/second and 1.06GB/second, respectively (the short sketch following these card descriptions shows where these figures come from).

Universal Card: These cards can be inserted in either a PCI 4X or a PCI 2X slot because the keying on the bottom rail of the card supports both speeds and voltages. The card will adopt the speed of the slot into which it is inserted.
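
The transfer rates quoted above fall straight out of the bus geometry: a 64-bit data path moves 8 bytes per clock, and the PCI clocks are nominally 33.33MHz, 66.67MHz, and 133.33MHz. The short C program below is just that arithmetic; note that the ~250MB/s figure quoted for PCI Turbo is a little more conservative than the raw 267MB/s product, and that all of these are theoretical peaks rather than the sustained rates we should plan around.

    /*
     * Peak PCI bandwidth = data-path width x clock rate.
     * Clock values are the nominal PCI rates; real sustained throughput
     * will be lower once bus protocol overhead is taken into account.
     */
    #include <stdio.h>

    int main(void)
    {
        const double width_bytes  = 8.0;     /* 64-bit data path */
        const double clocks_mhz[] = { 33.33, 66.67, 133.33 };
        const char  *buses[]      = { "PCI 2X (Turbo)",
                                      "PCI 4X (Twin-Turbo) / PCIX-66",
                                      "PCIX-133" };

        for (int i = 0; i < 3; i++)
            printf("%-32s ~%.0f MB/s peak\n",
                   buses[i], width_bytes * clocks_mhz[i]);
        return 0;
    }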

The PCI cards themselves communicate via a Lower Bus Adapter (LBA) ASIC (Application-Specific Integrated Circuit), which deals with requests for individual cards. This LBA communicates over an interface known as a rope: PCI-2X and PCIX-66 require one rope, while PCI-4X and PCIX-133 need two ropes to communicate with the IO controller, known as a System Bus Adapter (SBA).

We would then have to consider the performance of each of the interface cards (device adapters) themselves. For instance, a 1Gb/s Fibre Channel card is going to perform at a maximum of approximately 128MB/s. Should we use this in a PCI-2X slot or a PCI-4X slot? In this case, it may make more sense to use a PCI-2X slot, leaving the faster slots for higher-performing cards. If we look at the design of the SBA IO controller, it can sustain 2.2GB/s overall throughput. We have four PCI-4X cards plus two PCI-2X cards. To really push an rp7400 SBA, we would have to load all slots with cards, which could push the SBA to its maximum throughput. The designers have obviously realized that without this capacity the IO subsystem would be a significant bottleneck for this server. As we see more and more high-performance interface cards, we will see a progression to the faster PCI-X interface on HP servers. This will require us to rethink the performance of our IO controllers themselves.
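
To make the slot-loading argument concrete, the sketch below simply adds up the theoretical peaks of the four PCI-4X and two PCI-2X cards mentioned above and compares the total against the 2.2GB/s sustained figure for the SBA. This is a worst-case check using the peak per-slot rates, not a prediction of real workloads; sustained card throughput will be lower.

    /*
     * Worst-case check: can a fully loaded rp7400 IO complex out-run the SBA?
     * Figures are the peak per-slot rates and the sustained SBA throughput
     * quoted in the text above.
     */
    #include <stdio.h>

    int main(void)
    {
        const int    pci4x_cards = 4,     pci2x_cards = 2;
        const double pci4x_mbs   = 533.0, pci2x_mbs   = 250.0;
        const double sba_mbs     = 2200.0;   /* sustained SBA throughput */

        double demand = pci4x_cards * pci4x_mbs + pci2x_cards * pci2x_mbs;

        printf("Aggregate card peak : %.0f MB/s\n", demand);
        printf("SBA sustained       : %.0f MB/s\n", sba_mbs);
        printf("A fully loaded IO complex %s exceed the SBA's sustained rate.\n",
               demand > sba_mbs ? "can" : "cannot");
        return 0;
    }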

We have mentioned Fibre Channel, and we talk about it again in our discussions regarding solutions for High Availability Clusters. Fibre Channel appears to be the preferred method for attaching disk drives to our servers, although all of the various SCSI standards are supported.


