Gigabit Ethernet

- Gigabit Ethernet is a data-link transport that borrows from both Fibre Channel and conventional 802.3 Ethernet.
- Ethernet framing is used to transport TCP/IP data over Gigabit Ethernet networks.
- 802.1Q VLAN tagging allows segregation of devices on the SAN.
- 802.1p/Q frame prioritization enables mission-critical traffic to be assigned one of eight levels of priority for SAN transport.
- 802.3x flow control reduces frame loss at the link layer, which is essential when storage data rides on connectionless protocols such as UDP/IP that cannot recover lost data themselves.
- 802.3ad link aggregation allows IP-based SANs to scale with no loss in performance.
- Gigabit Ethernet cabling includes Category 5 UTP as well as standard multimode and single-mode fiber-optic cabling.

TCP/IP

- IP is a layer 3 network protocol that sits on top of the data-link and physical layers.
- IP routing enables communication between different networks, between different segments of a single network, or both.
- IP addressing is based on a 32-bit address field, commonly represented in dotted decimal notation.
- An IP address has a network component and a host component.
- Subnet masking is used to divide a single IP address range into smaller IP segments (subnets).
- Classless Inter-Domain Routing (CIDR) was created to overcome IP address starvation and revises the traditional IP class system.
- Address Resolution Protocol (ARP) is used to discover the MAC address associated with a particular IP address.
- Open Shortest Path First (OSPF) is a link-state protocol that calculates optimum routes based on availability, bandwidth, traffic load, and other factors relating to links between routers.
- Convergence is the time required to achieve network stability after a network change has occurred.
- TCP initializes a connection between two hosts before data is transferred.
- TCP provides error recovery and reordering of out-of-sequence segments.
- TCP/IP stack processing is CPU-intensive for the host system.
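The subnet-masking and CIDR points above can be illustrated with Python's standard `ipaddress` module. The addresses and prefix lengths below are arbitrary examples, not values from the text:

```python
import ipaddress

# A /26 mask carves a traditional Class C range into four 64-address subnets.
net = ipaddress.ip_network("192.168.10.0/26")
print(net.netmask)          # 255.255.255.192
print(net.num_addresses)    # 64 addresses per subnet

# The network component of any host address is found by ANDing with the mask.
host = ipaddress.ip_address("192.168.10.77")
print(ipaddress.ip_address(int(host) & int(net.netmask)))  # 192.168.10.64

# CIDR ignores the class boundaries: one /24 summarizes four contiguous /26s.
subnets = list(ipaddress.ip_network("192.168.10.0/24").subnets(new_prefix=26))
print([str(s) for s in subnets])
```

ANDing the host address with the mask is exactly the operation a router performs to decide whether a destination is local or must be forwarded.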
- TCP sliding window refers to the transmission of multiple TCP segments before acknowledgments are received.
- TCP slow start prevents a sender from flooding segments onto a congested or unreliable network by ramping the transmission rate up gradually.
- TCP recovery mechanisms provide for retransmission of lost segments, handling of out-of-order delivery, and discard of duplicate segments by the receiver.
- A stable network infrastructure reduces the latency incurred by TCP recovery processes.
- For IP storage applications, network conditions that force TCP recovery mechanisms should be minimized.

Native IP Storage Protocols

- iFCP is a gateway-to-gateway protocol for integrating Fibre Channel end devices into an IP storage network.
- iFCP storage switches may replace Fibre Channel switches and provide direct connection of Fibre Channel end devices.
- iFCP provides fabric service emulation to translate between Fibre Channel and IP domains.
- iFCP supports multiple TCP connections for concurrent storage transactions.
- Security for iFCP implementations can be provided by zoning, public/private keys, and IPSec.
- iSCSI follows the SCSI client/server model.
- iSCSI assumes that both initiators and targets have native iSCSI interfaces.
- iSCSI commands and data are issued via protocol data units (PDUs).
- PDUs are used to encapsulate SCSI command descriptor blocks (CDBs).
- iSCSI error handling includes recovery of individual PDUs, reestablishment of TCP connections, and reestablishment of iSCSI sessions.
- Security can be provided by Kerberos, public key, IPSec, or other methods.
- To provide adequate performance for SANs, iSCSI adapters require TCP offload engines.

Discovery in IP SANs

- The Service Location Protocol (SLP) defines user agents (UAs), service agents (SAs), and directory agents (DAs).
- SLP service agents advertise resources by IP address, URL, or both.
- An SLP service agent can reside in an iSCSI storage array, switch, or server.
- SLP is intended for small and medium IP SAN configurations.
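The interaction of slow start, congestion avoidance, and recovery can be seen in a toy model. This is an illustrative simulation only, not a TCP implementation; the function name and simplified doubling/halving rules are assumptions, though `cwnd` and `ssthresh` follow standard TCP terminology:

```python
def simulate_cwnd(rounds, ssthresh=16, loss_rounds=()):
    """Toy model: the congestion window (cwnd) doubles each round trip during
    slow start, grows by one segment per round trip in congestion avoidance,
    and collapses back to 1 segment after a loss (with ssthresh halved)."""
    cwnd, history = 1, []
    for rtt in range(rounds):
        history.append(cwnd)
        if rtt in loss_rounds:            # simulated timeout on this round trip
            ssthresh = max(cwnd // 2, 2)
            cwnd = 1                      # re-enter slow start
        elif cwnd < ssthresh:             # slow start: exponential growth
            cwnd *= 2
        else:                             # congestion avoidance: linear growth
            cwnd += 1
    return history

print(simulate_cwnd(8))                   # [1, 2, 4, 8, 16, 17, 18, 19]
print(simulate_cwnd(8, loss_rounds={4}))  # [1, 2, 4, 8, 16, 1, 2, 4]
```

The second run shows why the summary stresses a stable network: a single loss event throws away most of the window, and storage throughput stalls while TCP rebuilds it.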
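The encapsulation of a SCSI CDB inside an iSCSI PDU can be sketched as below. The 48-byte Basic Header Segment layout follows RFC 3720's SCSI Command PDU; the function name and the sample field values are illustrative assumptions, and real initiators add digests, additional header segments, and data segments:

```python
import struct

def scsi_command_pdu_bhs(lun, task_tag, cdb, xfer_len, cmd_sn, exp_stat_sn, read=True):
    """Sketch of the 48-byte Basic Header Segment of an iSCSI SCSI Command
    PDU (layout per RFC 3720); the CDB occupies bytes 32-47."""
    opcode = 0x01                             # SCSI Command opcode
    flags = 0x80 | (0x40 if read else 0x20)   # Final bit + Read/Write bit
    return struct.pack(">BB2xB3s8sIIII16s",
                       opcode, flags,
                       0,                        # TotalAHSLength
                       (0).to_bytes(3, "big"),   # DataSegmentLength (none here)
                       lun, task_tag, xfer_len, cmd_sn, exp_stat_sn,
                       cdb.ljust(16, b"\x00"))   # CDB padded to 16 bytes

# READ(10) CDB (opcode 0x28): read 8 blocks starting at logical block 0.
cdb = struct.pack(">BBIBHB", 0x28, 0, 0, 0, 8, 0)
pdu = scsi_command_pdu_bhs(b"\x00" * 8, 0x1234, cdb, 8 * 512, 1, 1)
print(len(pdu))      # 48 -- the fixed BHS size
```

The point of the sketch is the layering: the CDB is an unmodified SCSI structure, simply carried in a fixed slot of the PDU, which in turn rides in a TCP byte stream.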
- iSNS combines Fibre Channel and DNS functions to provide device discovery for iFCP and iSCSI devices.
- iSNS can be centralized in servers or distributed in IP storage switches.
- iSNS provides device registration, zoning, and state-change management for IP SANs.
- iSNS zones are called discovery domains (DDs).
- Discovery domains can be organized into optional groupings known as discovery domain sets (DDSs).
- An iSNS server is managed by an external management workstation.
- iSNS can scale to large enterprise-class storage networks.
- The iSNS server can host security services such as public key distribution.
- Authentication keys can be distributed during iSNS login.
- Public key security is scalable to large IP SANs.

Quality of Service for IP SANs

- Quality of service includes traffic prioritization, bandwidth allocation, and timely delivery guarantees.
- Quality of service mechanisms can be leveraged for IP storage networking.
- The IEEE 802.1p standard is a link-layer method for assigning priority to a frame.
- 802.1p is enforced through buffer queuing in IP switches.
- Differentiated Services (DiffServ) enables policy-based traffic prioritization.
- Resource Reservation Protocol (RSVP) establishes guaranteed bandwidth for data flows.
- Multiprotocol Label Switching (MPLS) uses frame labels to expedite traffic through the network.
- A label can be a header inserted in front of a standard IP datagram.

Security for IP SANs

- IEEE 802.1Q VLAN tagging enables separation of storage traffic through a switched IP network.
- VLAN tagging is a standard feature of most Gigabit Ethernet switches.
- IP Security (IPSec) includes authentication of end devices and encryption of user data.
- Authentication and encryption rely on keys. A key is a variable value applied in an encryption algorithm that processes blocks of data into encrypted output.
- The Data Encryption Standard (DES) provides a 56-bit key.
- Triple DES uses three separate keys to encrypt data blocks.
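The discovery-domain behavior described above can be modeled in a few lines. This is an illustrative toy only, not the iSNS protocol: the domain names, device names, and the `target` naming convention are all assumptions, but the filtering rule matches the concept, since an initiator discovers only devices that share at least one DD with it:

```python
# Toy model of iSNS discovery domains (DDs) as named membership sets.
discovery_domains = {
    "DD_payroll": {"initiator-a", "target-1"},
    "DD_backup":  {"initiator-a", "initiator-b", "target-2"},
    "DD_dev":     {"initiator-b", "target-3"},
}

def discoverable_targets(initiator, dds):
    """Return the targets visible to `initiator` through shared DDs."""
    visible = set()
    for members in dds.values():
        if initiator in members:
            visible |= members          # everything in a shared domain is visible
    visible.discard(initiator)
    return {m for m in visible if m.startswith("target")}

print(sorted(discoverable_targets("initiator-a", discovery_domains)))
# ['target-1', 'target-2']  -- target-3 is hidden: no DD is shared with it
```

This is the IP SAN analogue of Fibre Channel soft zoning: devices outside a shared domain are simply never reported during discovery.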
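The 802.1Q tag that carries both the VLAN ID and the 802.1p priority can be built directly. A minimal sketch, with the function name, VLAN number, and priority value chosen for illustration; the field layout (TPID 0x8100, then a 16-bit TCI of 3-bit priority, 1-bit DEI, and 12-bit VLAN ID) follows the IEEE 802.1Q standard:

```python
import struct

def dot1q_tag(pcp, vlan_id, dei=0):
    """Build a 4-byte IEEE 802.1Q tag: the TPID (0x8100) followed by the TCI,
    which packs the 3-bit 802.1p priority (PCP), the drop-eligible bit (DEI),
    and the 12-bit VLAN identifier."""
    assert 0 <= pcp <= 7 and 0 <= vlan_id <= 0xFFF
    tci = (pcp << 13) | (dei << 12) | vlan_id
    return struct.pack(">HH", 0x8100, tci)

# Tag storage traffic on VLAN 100 with priority 5 (one of the eight 802.1p levels).
tag = dot1q_tag(pcp=5, vlan_id=100)
print(tag.hex())   # 8100a064
```

Because both fields live in the same tag, a single switch feature delivers the two benefits the summary lists separately: traffic segregation (VLAN ID) and priority queuing (PCP).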
- Public Key Infrastructure (PKI) provides public key/private key pairing to facilitate key distribution over untrusted networks.
- IPSec can be implemented for IP SANs via firewalls, within storage switches or gateways, or on individual storage devices.
- Shared network segments require some form of IPSec solution to ensure data security.

Wide Area SANs

- IP storage enables storage data to be transported over long distances.
- Bandwidth and latency are the main variables for wide area SAN design.
- Wide area carrier services range from T1 (1.544 Mbps) to OC-192c (10 Gbps).
- T3 (45 Mbps) is the recommended minimum bandwidth for storage applications.
- Latency is the result of speed-of-light propagation and network equipment processing.
- Current network equipment typically incurs less than 200 microseconds of latency.
- Speed-of-light latency is approximately 1 millisecond for every 100 miles of transit.
- Port buffering helps to offset the effect of speed-of-light latency.
- Concurrent I/Os increase bandwidth utilization over distance.
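The latency rules of thumb above translate directly into sizing numbers. A back-of-the-envelope sketch, where the function name, the 1000-mile distance, and the 4-hop assumption are illustrative; the 1 ms per 100 miles and 200 microseconds per device figures come from the summary:

```python
def wan_link_estimates(distance_miles, bandwidth_bps, hops=4, hop_latency_s=200e-6):
    """Estimate round-trip time from ~1 ms of propagation per 100 miles plus
    per-hop equipment latency, then compute the bandwidth-delay product: the
    amount of in-flight data (port buffering) needed to keep the link full."""
    propagation_s = distance_miles / 100 * 1e-3
    one_way_s = propagation_s + hops * hop_latency_s
    rtt_s = 2 * one_way_s
    bdp_bytes = bandwidth_bps * rtt_s / 8
    return rtt_s, bdp_bytes

# A hypothetical 1000-mile link at T3 speed (45 Mbps), crossing 4 devices each way.
rtt, bdp = wan_link_estimates(1000, 45e6)
print(f"round trip: {rtt * 1e3:.1f} ms")     # round trip: 21.6 ms
print(f"buffer needed: {bdp / 1024:.0f} KiB")
```

The bandwidth-delay product is why the summary pairs port buffering with concurrent I/Os: unless roughly that many bytes are in flight at once, the distance, not the carrier service, caps effective throughput.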