10.6 Chapter Summary


Long-Distance Storage Networking Applications

  • Natural and unnatural disasters can have devastating effects on data centers, including the loss of critical data.

  • Most companies have remote replication technologies that are costly and cumbersome to manage.

  • Remote data replication consists primarily of archiving and mirroring.

  • Archiving typically uses removable media and is transported offsite.

  • Mirroring operates over metropolitan- or wide-area distances and allows a second site to take over if the first site becomes unavailable.

  • Most mirroring technologies have been based on Fibre Channel, which, due to performance limitations, could be implemented only over relatively short distances.

10.1 IP Storage Networking Expands Remote Data Replication

  • The limitations of Fibre Channel switches are as much practical as technological.

  • Virtually all private and public networks use IP as their communications protocol, so any other protocol must be converted to access these networks.

  • New multiprotocol storage networking switches can convert Fibre Channel to IP, eliminating previous limitations and restrictions.

  • Multiprotocol switches have large buffers that augment Fibre Channel switch buffers to allow for long-distance transmissions.

  • Storage networking applications, including replication, can now traverse greater distances than before.

10.2 The Technology Behind IP Storage Data Replication

  • While the speed of light is 186,000 miles per second (300,000 km/s), the refractive index of a typical single-mode fiber-optic cable reduces that speed to approximately 100,000 miles per second, or 100 miles per millisecond.

  • A typical 50-mile trip, 100 miles round trip, incurs roughly one millisecond of "speed-of-light" or propagation delay (see the sketch after this list).

  • Native Fibre Channel operates well within one millisecond of delay (100 miles), but many companies want the remote replication site located farther away.

  • Switches and routers that sit within the long-distance networking chain typically add anywhere from microseconds to milliseconds of delay, called switching or node delay.

  • Congestion delay occurs when network traffic must wait for a clear path during peak traffic times.

  • In LANs and MANs, capacity is easier and cheaper to install than in WANs, resulting in far less congestion delay.

  • Well-architected WANs can move data at speeds approaching the speed of light in fiber, with very low packet loss.

  • IP networks can also handle a variety of network conditions, including higher-latency and congested networks.
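
The following short Python sketch (not from the chapter) works through the propagation-delay arithmetic above, using the chapter's rounded figure of roughly 100,000 miles per second for light in single-mode fiber; the 50-mile distance is the example cited in the list.

    # Propagation ("speed-of-light") delay over single-mode fiber,
    # using the chapter's rounded figure of ~100,000 miles/second.
    FIBER_SPEED_MILES_PER_SEC = 100_000

    def propagation_delay_ms(one_way_miles: float) -> float:
        """Round-trip propagation delay in milliseconds."""
        round_trip_miles = 2 * one_way_miles
        return round_trip_miles / FIBER_SPEED_MILES_PER_SEC * 1000.0

    # A 50-mile one-way link (100 miles round trip) incurs roughly 1 ms.
    print(f"{propagation_delay_ms(50):.1f} ms")   # -> 1.0 ms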

10.3 Sizing the Link Bandwidth for Long-Distance Storage Networking Applications

  • Synchronous mirroring requires that each write be completed on both the primary and secondary arrays before the application continues.

  • Asynchronous mirroring separates the two arrays by a specified number of writes, allowing the application to continue without waiting for the second write.

  • Synchronous mirroring typically requires more bandwidth and lower latency.

  • To estimate link requirements, the peak-hour changes in the data set must be quantified.

  • Approximately 10 percent is added for network protocol framing overhead.

  • Converting this estimate to megabits per second yields the simplest estimate of the network bandwidth required (see the sketch after this list).

  • Compression reduces the overall bandwidth requirement, allowing a smaller link to be used safely.

  • Statistical analysis of traffic patterns allows IT managers to assess cost and risk tradeoffs regarding the size of the link required.
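
A minimal Python sketch of the link-sizing estimate above. The 10 percent framing overhead comes from the chapter; the example figures (20 GB of peak-hour change, 2:1 compression) are illustrative assumptions only.

    # Rough link sizing: peak-hour change set + ~10% framing overhead,
    # optionally reduced by compression, converted to megabits per second.
    def required_link_mbps(peak_hour_change_gb: float,
                           framing_overhead: float = 0.10,
                           compression_ratio: float = 1.0) -> float:
        gigabytes = peak_hour_change_gb * (1 + framing_overhead) / compression_ratio
        megabits = gigabytes * 1000 * 8      # GB -> MB -> megabits
        return megabits / 3600               # spread over one hour of seconds

    # Illustrative values only: 20 GB changed in the peak hour.
    print(f"{required_link_mbps(20):.1f} Mbps")                         # ~48.9 Mbps
    print(f"{required_link_mbps(20, compression_ratio=2.0):.1f} Mbps")  # ~24.4 Mbps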

10.4 Estimating the Effects of Network Latency on Applications Performance

  • Network latency is a combination of propagation delay, node delay, and congestion delay.

  • End systems can also add processing delay depending on the amount of time required to process the operation.

  • Good network design can minimize congestion and node delay, but the laws of physics still govern propagation delay.

  • Credit spoofing and large buffers can offset latency effects by placing more data in the pipeline at any given moment.

  • Multiple parallel sessions offset latency effects by reducing the overall time for the transaction.

  • Propagation delay can be estimated at one millisecond per hundred miles. The mileage used should be the round-trip distance.

  • Node delay can be estimated based on the number of nodes and a conservative estimate of two milliseconds per node.

  • Congestion delay can be roughly estimated by dividing total node delay by the portion of the link available for storage traffic.
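
The three rules of thumb above can be combined into a single rough estimate. The Python sketch below is a literal rendering of them (1 ms per 100 round-trip miles, 2 ms per node, congestion approximated as node delay divided by the share of the link available for storage traffic); the example inputs are assumptions, not figures from the chapter.

    # Rough end-to-end latency estimate from the chapter's rules of thumb.
    def estimated_latency_ms(round_trip_miles: float,
                             node_count: int,
                             storage_share_of_link: float) -> float:
        propagation = round_trip_miles / 100.0      # 1 ms per 100 round-trip miles
        node = node_count * 2.0                     # conservative 2 ms per node
        congestion = node / storage_share_of_link   # node delay / share of link
        return propagation + node + congestion

    # Illustrative values: 300 round-trip miles, 4 nodes, half the link for storage.
    print(f"{estimated_latency_ms(300, 4, 0.5):.1f} ms")   # 3 + 8 + 16 = 27 ms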

10.5 The Effects of TCP on Application Performance

  • IP is a layer-three protocol used for network transmissions.

  • TCP is used in conjunction with IP to guarantee end-to-end data integrity and automatic flow control.

  • TCP offload solutions remove previous limitations caused by TCP CPU consumption on end systems such as servers, arrays, and tape libraries.

  • TCP was designed for congested networks and therefore has conservative flow-control algorithms.

  • TCP flow control grants each user a modest initial rate that increases as acknowledgments are received from the destination system.

  • If a packet is dropped or contains errors, the sending system readjusts to a lower rate and starts again to modestly increase the flow.

  • In well-designed networks, TCP has little effect on performance.

  • In congested networks, TCP can cause overall performance to decrease by more than half.

  • The fast-start TCP specification aims to develop a more aggressive version of TCP.
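
The window-sizing arithmetic below is not from the chapter; it is a standard bandwidth-delay-product calculation that illustrates why large buffers (and techniques such as credit spoofing) matter on long-distance links: the sender must keep at least bandwidth times round-trip time of data in flight, or throughput falls below the link rate.

    # Bandwidth-delay product: minimum data "in flight" to keep a link full.
    def window_needed_bytes(link_mbps: float, rtt_ms: float) -> float:
        bits_in_flight = link_mbps * 1_000_000 * (rtt_ms / 1000.0)
        return bits_in_flight / 8

    # Illustrative values: a 100 Mbps link with a 20 ms round trip
    # needs roughly 244 KB in flight.
    print(f"{window_needed_bytes(100, 20) / 1024:.0f} KB")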


