Complex Clusters


If higher cluster reliability is a concern—and this is where cluster technology can be extremely beneficial—several options are available to reduce system downtime to mere seconds:

  • To reduce the risk of disk failures, shadow the disks.

  • To reduce the risk of disk and tape controller failures, use double-ported disk controllers.

  • To reduce the risk of server CPU-to-disk path and CPU-to-tape path failures, replicate the path.

  • To reduce the risk of server CPU failures, include additional CPUs using the replicated device paths.

  • To reduce the risk of CPU-to-CPU communication failure, replicate the paths.

  • To reduce the risk of data center failures, replicate the data center.

A wide variety of supported hardware options is available to implement any or all of the items in the previous list. These options span the cost-performance spectrum and are described briefly below. No matter which solution is selected, recovery from any of these failures takes only seconds: if any of the listed failures occurs, the cluster reconfigures itself in at most seconds, and often faster.
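The payoff of replication is easiest to see with a little availability arithmetic: if a single component is available with probability a, then two independently failing replicas are both down only with probability (1 − a)². A minimal sketch (the figures and function name are illustrative, not OpenVMS measurements):

```python
def availability_of_replicated(a: float, copies: int = 2) -> float:
    """Availability of a component replicated `copies` times,
    assuming independent failures: the service survives as long
    as at least one copy does."""
    return 1.0 - (1.0 - a) ** copies

# A single CPU-to-disk path that is 99% available becomes
# 99.99% available once the path is replicated.
single = 0.99
pair = availability_of_replicated(single, copies=2)
print(f"single path: {single:.2%}, replicated: {pair:.4%}")
```

The same arithmetic motivates every bullet in the list: each layer of replication multiplies the unavailability of that layer by another small factor.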

The following hardware implementations can be used to connect cluster nodes together:

  • Asynchronous Transfer Mode (ATM). 155 megabits per second with 2-kilometer separation or 622 megabits per second and 300-meter separation; 96 node maximum

  • Ethernet. 10 megabits per second with 100-meter separation, 100 megabits per second with 100-meter separation, or 1,000 megabits per second and 550-meter separation; 96 node maximum

  • Fiber Distributed Data Interface (FDDI). 100 megabits per second with 40-kilometer separation; 96 node maximum

  • Computer Interconnect (CI). 140 megabits per second (70 megabits per second on each of its two paths) and 45-meter separation; 32 node maximum. This interconnect requires a star coupler.

  • DIGITAL Storage Systems Interconnect (DSSI). 32 megabits per second and 8-meter separation; four node maximum. Four integrated storage elements (ISEs)—that is, disk controllers—can also be attached.

  • Memory Channel (MC). 800 megabits per second and 2-meter separation; four node maximum

  • Small Computer System Interface (SCSI). 160 megabits per second and 25-meter separation; 16 node maximum

  • Fibre Channel (FC). 1,000 megabits per second and 100-kilometer separation; 96 node maximum
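The trade-offs in the list above are easier to compare side by side. The sketch below restates the same figures as a small lookup table and shortlists the interconnects meeting given bandwidth, distance, and node-count requirements; the data structure and function are illustrative, not part of OpenVMS:

```python
# (name, megabits/s, max separation in meters, max nodes),
# copied from the interconnect list above.
INTERCONNECTS = [
    ("ATM (155 Mb/s)",    155,    2_000, 96),
    ("ATM (622 Mb/s)",    622,      300, 96),
    ("Ethernet",           10,      100, 96),
    ("Fast Ethernet",     100,      100, 96),
    ("Gigabit Ethernet", 1000,      550, 96),
    ("FDDI",              100,   40_000, 96),
    ("CI",                140,       45, 32),
    ("DSSI",               32,        8,  4),
    ("Memory Channel",    800,        2,  4),
    ("SCSI",              160,       25, 16),
    ("Fibre Channel",    1000,  100_000, 96),
]

def candidates(min_mbps: int, min_meters: int, min_nodes: int) -> list[str]:
    """Interconnects that satisfy all three minimum requirements."""
    return [name for name, mbps, meters, nodes in INTERCONNECTS
            if mbps >= min_mbps and meters >= min_meters and nodes >= min_nodes]

# At least 100 Mb/s across a 10-kilometer campus, for 8 or more nodes:
print(candidates(100, 10_000, 8))  # only FDDI and Fibre Channel qualify
```

The query makes the pattern obvious: the short-haul interconnects (Memory Channel, DSSI, SCSI, CI) trade distance and node count for low latency inside a machine room, while FDDI and Fibre Channel are the only options for multi-kilometer, disaster-tolerant configurations.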

In addition to interconnecting CPUs, some of these hardware options can also connect to disk and tape controllers. Hierarchical storage controllers (HSCs) are intelligent controllers designed to support RAID operations (level 0, striping; level 1, shadowing; level 5, striping with parity) as well as cluster operations. If HSCs are not used, OpenVMS itself supports these RAID operations. Thus, HSCs provide multiple-path disks and shadowed and/or striped disks.
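The RAID level 5 operation mentioned above rests on a simple mechanism: the parity block is the bytewise XOR of the data blocks, so any single lost block can be reconstructed by XORing the survivors with the parity. A minimal sketch of that idea (not the HSC or OpenVMS implementation):

```python
from functools import reduce

def parity(blocks: list[bytes]) -> bytes:
    """RAID-5 style parity: bytewise XOR of equal-sized blocks."""
    return bytes(reduce(lambda x, y: x ^ y, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]  # data blocks on three disks
p = parity(data)                    # parity block on a fourth disk

# If the disk holding data[1] fails, XORing the surviving data
# blocks with the parity block recovers its contents exactly:
recovered = parity([data[0], data[2], p])
assert recovered == data[1]
```

Striping with parity therefore survives any single-disk failure at the cost of one disk's worth of capacity, whereas shadowing (level 1) keeps a full second copy and striping alone (level 0) tolerates no failures at all.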




Getting Started with OpenVMS System Management (HP Technologies)
ISBN: 1555582818
Year: 2004
Pages: 130
Authors: David Miller