6.6 Wide Area Storage Networking

Traditionally, storage applications have assumed the close proximity of servers and disks. The transition from direct-attached parallel SCSI connectivity to networked Fibre Channel SANs extended the potential distance between servers and storage to 500 meters, and the introduction of IP storage potentially separates servers and storage by thousands of miles.

The physical connectivity between storage resources is invisible to the upper-layer storage application. The effects of the physical connection are felt only if the SCSI operation is not completed within an expected interval or if the response time of the upper-layer application slows. A frame error on a Fibre Channel transaction, for example, may trigger a retransmission of a sequence of frames, but if the recovery is quick enough, no error is reported to the upper-layer SCSI interface. Similarly, TCP may be invoked to recover a dropped iSCSI PDU, but timely recovery will not disrupt a SCSI read or write operation.

Assuming an error-free link between two devices, two variables may adversely affect the transport: bandwidth and latency. The bandwidth must be sufficient to carry storage data at the performance level expected by the upper-layer application. High-definition video, for example, may require approximately 130MBps of bandwidth for nondisruptive display. If sufficient bandwidth is not provided, the application may falter or fail.

The other major variable is latency. Latency in a link may be induced by network equipment or simply by speed-of-light propagation over distance. Manufacturers of network switch equipment have greatly reduced the amount of latency caused by buffering and routing frames from one port to another. Aside perhaps from quantum tunneling, however, no one has been able to overcome the physical laws of speed-of-light latency. The longer the distance between source and destination, the greater the propagation delay as signaling makes its way toward its target.

Bandwidth typically provided by carrier services is a fraction of the gigabit and multigigabit rates common to SANs. As shown in Table 6-2, available transmission rates range from T1 (1.544Mbps) to multigigabit speeds, with the most commonly deployed services for enterprise networks being T3 (45Mbps) to OC-12c (622Mbps).

Companies that own their own wide area links have a clear advantage in allocating bandwidth to suit specific applications. Companies that must rely on carriers pay substantially for higher bandwidth in the OC-12 (622Mbps) and OC-48 (2.5Gbps) range. In addition, "last mile" runs from the provider to the customer premises can pose additional expense and delays in deploying a WAN solution.

Fortunately, although SANs may run at gigabit or multigigabit speeds, most storage applications do not really require gigabit bandwidth. A tape backup stream, for example, may require only 12MBps to 25MBps, although the SAN pipe provides approximately 200MBps. Similarly, on-line transaction processing applications may have very modest bandwidth requirements and are not accelerated by virtue of traversing very fast links. The vendor adage that "speed sells" is true, although speed is often sold into environments that do not fully utilize it.

Table 6-2. Typical Wide Area Network Transmission Rates (Source: R. Fardal)

Service (GigE switch or router on each end)                        Link Speed (bits)     Payload Rate (bytes)

Packet over DS3 and SONET (North America)
  DS1/T1 (not recommended for most storage applications)          1.544Mbps             0.2MBps
  DS3/T3                                                          45Mbps                4.9MBps
  OC-3c ("c" indicates concatenated clear channel)                155Mbps               17MBps
  OC-12c                                                          622Mbps               67MBps
  OC-48c                                                          2,488Mbps (2.5Gbps)   270MBps
  OC-192c                                                         9,953Mbps (10Gbps)    1,078MBps

Packet over E3 and STM (Rest of World)
  E1 (not recommended for most storage applications)              2.048Mbps             0.2MBps
  E3                                                              34Mbps                3.7MBps
  STM-1                                                           155Mbps               17MBps
  STM-4                                                           622Mbps               67MBps
  STM-16                                                          2,488Mbps (2.5Gbps)   270MBps
  STM-64                                                          9,953Mbps (10Gbps)    1,078MBps

Gigabit Ethernet MAN and WAN Services
  Fractional GigE over GigE ports (max rate is user-configurable) Up to 1Gbps           Up to 108MBps
  Gigabit Ethernet                                                1Gbps                 108MBps
  10 Gigabit Ethernet (MAN/WAN interswitch links)                 Up to 10Gbps          1,078MBps
This fact is borne out by storage over wide area connections. A T3, for example, supplies 45Mbps of bandwidth, or a payload of approximately 5MBps. That is probably the lower limit for storage applications over the wide area, but it is still sufficient for incremental tape backups over distance. Low-volume asynchronous data replication may also survive at T3 speeds or at OC-3c's delivery of approximately 17MBps.

The calculation of throughput of data volume versus bandwidth is fairly straightforward. As shown in Table 6-3, transferring 70 gigabytes of storage data each hour would require an OC-3c link. A terabyte of storage data would therefore take approximately 15 hours over an OC-3c link. Similarly, it would take a little more than 2 days (over, say, a weekend) to back up a terabyte of storage over a T3 link.
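The arithmetic behind Table 6-3 is easy to script. The following Python sketch reproduces the estimates above; like the table, it simply divides the nominal bit rate by 8 and ignores the additional framing overhead reflected in the lower payload figures of Table 6-2:

```python
# Back-of-the-envelope transfer-time calculator based on the Table 6-3
# figures (nominal rate / 8, no framing overhead).

LINK_MBPS = {  # nominal link speeds in megabits per second
    "T1/DS1": 1.544,
    "T3/DS3": 45,
    "OC-3c": 155,
    "OC-12c": 620,
    "OC-48c": 2480,
    "Fast Ethernet": 100,
    "Gigabit Ethernet": 1000,
}

def gb_per_hour(mbps: float) -> float:
    """Megabits/second -> gigabytes/hour: divide by 8, x3,600, /1,000."""
    return mbps / 8 * 3600 / 1000

def hours_to_transfer(gigabytes: float, link: str) -> float:
    return gigabytes / gb_per_hour(LINK_MBPS[link])

print(f"{hours_to_transfer(1000, 'OC-3c'):.1f} hours")   # 14.3 (~15 hours)
print(f"{hours_to_transfer(1000, 'T3/DS3'):.1f} hours")  # 49.4 (~2 days)
```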

These calculations are predicated on the ability of the storage equipment and IP SAN interconnection to perform at the available bandwidth rate, including wire-speed Gigabit Ethernet. The iFCP or iSCSI protocol conversion therefore needs to scale from slower T3 links to full gigabit speeds if switch latency is to be kept out of the equation. In addition, for T3 and T1 links it is highly desirable to have rate-limiting options that pace Fibre Channel-originated traffic so that it does not flood a slower WAN link.
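Rate limiting is a function of the IP storage switch or WAN gateway itself, but the underlying mechanism is ordinary token-bucket pacing. The following is a minimal sketch of the idea, assuming a hypothetical T3-class payload target of 5MBps and a 64KB burst allowance:

```python
import time

class TokenBucket:
    """Minimal token-bucket pacer: tokens accrue at the WAN payload rate,
    and each frame must spend its size in tokens before transmission."""

    def __init__(self, rate_bytes_per_sec: float, burst_bytes: float):
        self.rate = rate_bytes_per_sec
        self.capacity = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def admit(self, frame_bytes: int) -> None:
        while True:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= frame_bytes:
                self.tokens -= frame_bytes
                return
            # Sleep just long enough to accumulate the shortfall.
            time.sleep((frame_bytes - self.tokens) / self.rate)

# Pace gigabit-speed Fibre Channel traffic down to a T3 payload:
pacer = TokenBucket(rate_bytes_per_sec=5_000_000, burst_bytes=65_536)
# for frame in fc_frames: pacer.admit(len(frame)); wan_send(frame)
```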

Table 6-3. Calculating Data Throughput Based on Available Bandwidth (Source: R. Fardal)

Link               Mbps     MBps     MB/minute   MB/hour     GB/hour
T1/DS1             1.544    0.193    11.58       694.8       0.6948
T3/DS3             45       5.625    337.5       20,250      20.25
OC-3c              155      19.375   1,162.5     69,750      69.75
OC-12c             620      77.5     4,650       279,000     279
OC-48c             2,480    310      18,600      1,116,000   1,116
Fast Ethernet      100      12.5     750         45,000      45
Gigabit Ethernet   1,000    125      7,500       450,000     450

Megabits per second divided by 8 yields megabytes per second.

Megabytes per second times 60 yields megabytes per minute.

Megabytes per minute times 60 yields megabytes per hour.

Megabytes per hour divided by 1,000 yields gigabytes per hour.

Bandwidth is only part of the SAN/WAN relationship. Speed-of-light latency may have a more profound effect, especially over intercontinental distances, such as backing up corporate data from Los Angeles to Tokyo (more than 5,000 miles in each direction). Although the speed of light in a vacuum is approximately 186,000 miles per second, transmission through optical cabling reduces speed-of-light propagation to about 100,000 miles per second, or 100 miles per millisecond. Transmission between two points 100 miles apart would therefore incur a millisecond of delay in each direction, or 2 milliseconds round trip. As shown in Table 6-4, latency increases directly in proportion to distance.
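Because the rule of thumb is simply 100 miles per millisecond, the latency budget for a proposed link is a one-line calculation, as in this sketch (the Los Angeles-Tokyo mileage is an approximation):

```python
MILES_PER_MS = 100  # approximate speed-of-light propagation in optical fiber

def one_way_latency_ms(miles: float) -> float:
    return miles / MILES_PER_MS

def round_trip_latency_ms(miles: float) -> float:
    return 2 * one_way_latency_ms(miles)

# Los Angeles to Tokyo, roughly 5,500 miles each way:
print(round_trip_latency_ms(5500))  # 110.0 ms, before any equipment latency
```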

These latency figures assume minimal contribution by optical switches and IP routers. State-of-the-art wide area network equipment typically contributes less than 200 microseconds of latency and so has minimal impact on latency calculations. Substandard equipment, however, may affect performance, so it is advisable to monitor round-trip latency across the WAN service in order to properly calculate the impact of total latency on a proposed storage application.

Latency cannot be resolved by throwing more bandwidth at the link. Whether the transmission speed is T3 or OC-48, speed-of-light latency over a given distance is the same. The implication for storage applications is that delay-sensitive applications, such as synchronous disk mirroring, may not be able to tolerate extremely long links. Streaming applications such as tape backup or content distribution, however, are very tolerant of latency.

Table 6-4. Speed-of-Light Latency Increases at Roughly 1 Millisecond per 100 Miles

Point-to-point distance (km)       893    1,786   2,679   3,572   4,465   5,357   6,250   7,143
Point-to-point distance (miles)    555    1,110   1,664   2,219   2,774   3,329   3,884   4,439
Latency each way (ms)                5       10      15      20      25      30      35      40
Round-trip latency (ms)             10       20      30      40      50      60      70      80

In the Promontory Project organized by Nishan Systems, storage applications were successfully run at gigabit speeds between Sunnyvale, California, and Newark, New Jersey. Qwest provided OC-48c links for this transcontinental IP SAN, using Cisco IP routers and Ciena optical switches. As testimony to the efficiency of current wide area network equipment, packet loss through this configuration was zero for the several months that storage data was run. A combination of iSCSI and iFCP traffic, along with conventional data-pumping tools such as Intel IOmeter, was used to support tape backup and asynchronous data replication. For this technology showcase, sustained transmission of more than 100MBps in each direction was enabled by enormous buffering on the IP storage switches (256MB per long-haul port).

Port buffering cannot overturn the laws of physics, but it does compensate for the effects of latency over long distances. The greater the port buffering, the greater the number of credits that can be issued for any transaction. Instead of issuing a handful of credits and then waiting tens of milliseconds for additional credits to be issued from the receiving side, an IP storage switch can issue hundreds of credits at once. The sending IP storage switch can thus stream frames across the wide area link, filling the WAN pipe at the same time that new credits make their way back from the destination.
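The buffering needed to keep a long-haul pipe full is simply the bandwidth-delay product: payload rate multiplied by round-trip time. A sketch with illustrative numbers (2KB approximates a full Fibre Channel data frame; the OC-48c payload and 80ms round trip are assumptions for the example):

```python
def credits_to_fill_pipe(payload_bytes_per_sec: float,
                         rtt_sec: float,
                         frame_bytes: int = 2048) -> int:
    """Frames that must be in flight to keep the link busy for one RTT."""
    bdp_bytes = payload_bytes_per_sec * rtt_sec  # bandwidth-delay product
    return int(bdp_bytes / frame_bytes) + 1

# OC-48c payload (~270MBps) over a 4,000-mile link (~80ms round trip):
print(credits_to_fill_pipe(270_000_000, 0.080))  # ~10,547 frames in flight
```

At roughly 21.6MB of data in flight, the 256MB of per-port buffering cited above leaves ample headroom.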

Utilization of the WAN link is also enhanced by increasing the number of outstanding I/Os that an initiator can manage. Concurrent I/Os minimize the bottleneck at the host processing side of the link, allowing more data to be sent in less time.
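The same bandwidth-delay reasoning applies at the host. With too few outstanding I/Os, throughput is bounded by round-trip time rather than by link speed, as this sketch with illustrative values shows:

```python
def latency_bound_mbytes_per_sec(outstanding_ios: int,
                                 io_bytes: int,
                                 rtt_sec: float) -> float:
    """Upper bound when each I/O waits a full round trip for completion."""
    return outstanding_ios * io_bytes / rtt_sec / 1_000_000

# One 64KB I/O at a time over an 80ms round trip barely moves data:
print(latency_bound_mbytes_per_sec(1, 65_536, 0.080))    # ~0.8MBps
# 256 concurrent I/Os fill the same pipe far more effectively:
print(latency_bound_mbytes_per_sec(256, 65_536, 0.080))  # ~210MBps
```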


