Storage


IT organizations have been wrestling with whether the advantages of implementing a SAN solution justify the associated costs. Other organizations are exploring newer storage options and asking whether a SAN really has advantages over traditional approaches, such as Network Attached Storage (NAS). In this brief historical overview, you will be introduced to the basic purpose and function of a SAN and will examine its role in modern network environments. You will also see how SANs meet the network storage needs of today's organizations.

When the layers of even the most complex technologies are stripped back, you will likely find that they are rooted in common rudimentary principles. This is certainly true of storage-area networks (SANs). Behind the acronyms and fancy terminology lies a technology designed to deliver one of the oldest network services: providing data to the users who request it.

In very basic terms, a SAN can be anything from a pair of servers on a network that access a central pool of storage devices, as shown in Figure 3-3, to more than a thousand servers accessing many millions of megabytes of storage. Theoretically, a SAN can be thought of as a separate network of storage devices that are physically removed from, but still connected to, the network, as shown in Figure 3-4. SANs evolved from the concept of taking storage devices, and therefore storage traffic, off the local-area network (LAN) and creating a separate back-end network designed specifically for data.

Figure 3-3. Servers Accessing a Central Pool of Storage Devices


Figure 3-4. SAN: A Physically Separate Network Attached to a LAN


A Brief History of Storage

SANs represent the latest in a sequence of phases in the evolution of data storage technology. In this section, you will look at the evolution of Direct Attached Storage, NAS, and SAN. Just keep in mind that, regardless of the complexity, one basic phenomenon is occurring: clients acquiring data from a central repository. This evolution has been driven partly by the changing ways in which users use technology, and partly by the exponential increase in the volume of data that users need to store. It has also been driven by new technologies that enable users to store and manage data more effectively.

When mainframes were the dominant computing technology, data was stored physically separate from the processing unit but was still accessible only through it. As PC-based servers proliferated, storage devices moved inside the servers themselves or into external boxes connected directly to the system. Each of these approaches was valid in its time, but with users' growing need to store increasing volumes of data and make that data more accessible, other alternatives were needed. Enter network storage.

Network storage is a generic term for network-based data storage, but many different technologies fall under that umbrella. The next section covers the evolution of network storage.

Direct Attached Storage

Traditionally, on client/server systems, data has been stored on devices that are either inside or directly attached to the server. Simply stated, Direct Attached Storage (DAS) refers to storage devices connected to a server. All information coming into or going out of DAS must go through the server, so heavy access to DAS can cause servers to slow down, as shown in Figure 3-5.

Figure 3-5. Direct Attached Storage Example


In DAS, the server acts as a gateway to the stored data. Next in the evolutionary chain came NAS, which removed the storage devices from behind the server and connected them directly to the network.

Network Attached Storage

Network Attached Storage (NAS) is a data-storage mechanism that uses special devices connected directly to the network media. These devices are assigned an Internet Protocol (IP) address and can then be accessed by clients through a server that acts as a gateway to the data or, in some cases, directly, without an intermediary, as shown in Figure 3-6.

Figure 3-6. NAS


The benefit of the NAS structure is that, in an environment with many servers running different operating systems, storage of data can be centralized, as can the security, management, and backup of the data. An increasing number of businesses are already using NAS technology, if only with devices such as CD-ROM towers (standalone boxes that contain multiple CD-ROM drives) that are connected directly to the network.

Some of the advantages of NAS include scalability and fault tolerance. In a DAS environment, when a server goes down, the data that the server holds is no longer available. With NAS, the data is still available on the network and is accessible by clients.

A primary means of providing fault-tolerant technology is Redundant Array of Independent (or Inexpensive) Disks (RAID), which uses two or more drives working together. RAID disk drives are often used for servers; however, their use in personal computers (PCs) is limited. RAID can also be used to ensure that the NAS device does not become a single point of failure.
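
As a rough illustration of how RAID trades raw capacity for fault tolerance, the following Python sketch summarizes a few common RAID levels. The level behaviors are textbook definitions, and the helper itself is hypothetical, not tied to any particular product.

```python
# Rough usable capacity and drive-failure tolerance for common RAID levels,
# assuming equally sized drives. Illustrative only.

def raid_summary(level: str, drives: int, drive_gb: float) -> tuple[float, int]:
    if level == "RAID 0":        # striping only: no redundancy
        return drives * drive_gb, 0
    if level == "RAID 1":        # mirroring: capacity of a single drive
        return drive_gb, drives - 1
    if level == "RAID 5":        # striping with single parity
        return (drives - 1) * drive_gb, 1
    if level == "RAID 6":        # striping with double parity
        return (drives - 2) * drive_gb, 2
    raise ValueError(f"unhandled RAID level: {level}")

for level in ("RAID 0", "RAID 1", "RAID 5", "RAID 6"):
    usable, tolerated = raid_summary(level, drives=6, drive_gb=300)
    print(f"{level}: {usable:.0f} GB usable, survives {tolerated} drive failure(s)")
```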

Storage-Area Networking

Storage-area networking (SAN) takes the principle one step further by allowing storage devices to exist on their own separate network and communicate directly with each other over very fast media. Users can gain access to these storage devices through server systems, which are connected to both the local-area network (LAN) and the SAN, as shown in Figure 3-7.

Figure 3-7. A SAN with Interconnected Switches


This is in contrast to the use of a traditional LAN for providing a connection for server-based storage, a strategy that limits overall network bandwidth. SANs address the bandwidth bottlenecks associated with LAN-based server storage and the scalability limitations found with Small Computer Systems Interface (SCSI) bus-based implementations. SANs provide modular scalability, high availability, increased fault tolerance, and centralized storage management. These advantages have led to an increase in the popularity of SANs because they are better suited to address the data-storage needs of today's data-intensive network environments.

Business Drivers Creating a Demand for SAN

Several business drivers are behind the demand for, and growing popularity of, SANs:

  • Regulations: Recent national disasters have driven regulatory authorities to mandate new standards for disaster recovery and business continuance across many sectors, including financial and banking, insurance, health care, and government entities. As an example, the Federal Reserve and the Securities and Exchange Commission (SEC) recently released a document titled Interagency Paper on Sound Practices to Strengthen the Resilience of the U.S. Financial System, which outlines objectives for rapid recovery and timely resumption of critical operations after a disaster. Similar regulations addressing specific requirements for health care, life sciences, and government have been issued or are under consideration.

  • Cost: Factors include the cost of downtime (millions of dollars per hour for some institutions), more efficient use of storage resources, and reduced operational expenses.

  • Competition: With competitive pressures created by industry deregulation and globalization, many businesses are now being judged on their business continuance plans more closely than ever. Many customers being courted are requesting documentation detailing disaster-recovery plans before they select providers or even business partners. Being in a position to recover quickly from an unplanned outage or from data corruption can be a vital competitive differentiator in today's marketplace. This rapid recovery capability will also help maintain customer and partner relationships if such an event does occur.

The advantages of SANs are numerous, but perhaps one of the best examples is that of the serverless backup (also commonly referred to as third-party copying). This system allows a disk storage device to copy data directly to a backup device across the high-speed links of the SAN without any intervention from a server. Data is kept on the SAN, which means that the transfer does not pollute the LAN, and the server-processing resources are still available to client systems.

SANs are most commonly implemented using a technology called Fibre Channel (FC). FC is a set of communication standards developed by the American National Standards Institute (ANSI). These standards define a high-performance data-communications technology that supports very fast data rates of more than 2 Gbps. FC can be used in a point-to-point configuration between two devices, in a ring type of model known as an arbitrated loop, and in a fabric model.

Devices on the SAN are normally connected through a special kind of switch called an FC switch, which performs basically the same function as a switch on an Ethernet network: It acts as a connectivity point for the devices. Because FC is a switched technology, it is capable of providing a dedicated path between the devices in the fabric so that they can use the entire bandwidth for the duration of the communication.

Regardless of whether the network-storage mechanism is DAS, NAS, or SAN, certain technologies are common. Examples of these technologies include SCSI and RAID.

For years, SCSI has provided a high-speed, reliable method of data storage. Over the years, SCSI has evolved through many standards to the point that it is now the storage technology of choice. Related to, but not reliant on, SCSI is RAID. RAID is a series of standards that provide improved performance and fault tolerance in the event of disk failures. Such protection is necessary because disks account for about 50 percent of all hardware device failures on server systems. As with SCSI, RAID and the other technologies used to implement data storage have evolved and matured over the years.

The storage devices are connected to the FC switch using either multimode or single-mode fiber-optic cable. Multimode cable is used for short distances (up to 2 km), and single-mode cable is used for longer distances. In the storage devices themselves, special FC interfaces provide the connectivity points. These interfaces can take the form of built-in adapters, which are commonly found in storage subsystems designed for SANs, or can be interface cards much like a network card, which are installed into server systems.

So how do you determine whether you should be moving toward a SAN? If you need to centralize or streamline your data storage, a SAN might be right for you. Of course, there is one barrier between you and storage heaven: money. SANs remain the domain of big business because the price tag of SAN equipment is likely to stay beyond the reach of small and even medium-size businesses. However, if prices fall significantly, SANs will find their way into smaller organizations.

Evolution of SAN

The evolution of SAN is best described in three phases: configuration, consolidation, and evolution into a storage utility. Each phase has its own features and benefits:

  • Phase I: Configures SANs into homogeneous islands, as shown in Figure 3-8. Each of the storage networks is segmented based on some given criteria, such as workgroup, geography, or product.

    Figure 3-8. Isolated Islands of Storage Whose Segmentation Is Based on Organization

  • Phase II: Consolidates these storage networks and virtualizes the storage so that it is shared, or pooled, among the various workgroups. Technologies such as virtual SANs (VSANs, similar to virtual LANs [VLANs]) are used to provide security and scalability while reducing total cost of capital. This is often called a multilayer SAN (see Figure 3-9).

    Figure 3-9. Multilayer SAN

  • Phase III: Involves adding features such as dynamic provisioning, LAN-free backup, and data mobility to the SAN. This avoids having to deploy a separate infrastructure per application environment or department, creating one physical infrastructure with many logical infrastructures and thus improving resource utilization. On-demand provisioning allows networking, storage, and server components to be allocated quickly and seamlessly. This also yields facilities improvements because of improved density and lower power and cabling requirements.

    Phase III is often referred to as a multilayer storage utility because the network is seamless and fully integrated, appearing as a single entity, or utility, into which you simply plug. This is analogous to receiving power in your home. Many components are involved in getting power to your home, including transformers, generators, automatic transfer switches, and so on. From the enterprise's perspective, however, the utility handles the power; everything behind the outlet is handled seamlessly by the utility provider. Be it water, cable television, power, or "storage," the enterprise sees each as a utility, even though many components are involved in delivering it. Figure 3-10 shows a multilayer storage utility.

    Figure 3-10. Multilayer Storage Utility: One Seamless, Integrated System

The three major SAN protocols are FC, ESCON, and FICON; they are covered in the following sections.

Fibre Channel

FC is a layered network protocol suite developed by ANSI and typically used for networking between host servers and storage devices, and between storage devices. Transfer speeds come in three rates: 1.0625 Gbps, 2.125 Gbps, and 4 Gbps. With single-mode fiber connections, FC has a maximum distance of about 10 km (6.2 miles).

The primary problem with transparently extending FC over long distances stems from its flow-control mechanism and that mechanism's potential effect on an application's effective input/output (I/O) performance. To ensure that input buffers do not get overrun and start dropping FC frames, a system of buffer-to-buffer credits throttles the transmitting storage or host devices to slow the flow of frames. The general principle is that one buffer-to-buffer credit is required for every 2 km (1.2 miles) between two interfaces on a link to sustain 1 Gbps of bandwidth, and one credit is required for every 1 km (0.6 miles) to sustain 2 Gbps. These numbers are derived using full-size FC frames (2148 bytes); with smaller frames, the number of buffer credits required increases significantly. Without SAN extension methods in place, a typical FC fabric cannot exceed 10 km (6.2 miles). To achieve greater distances with FC SAN extensions, SAN switches are used to provide additional inline buffer credits. These credits are required because most storage devices support very few credits (fewer than 10) of their own, thereby limiting the capability to directly extend a storage array.
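
The buffer-credit rule of thumb lends itself to a quick back-of-the-envelope calculation. The following Python sketch is illustrative only; the 5-microseconds-per-km propagation figure and the helper name are assumptions, not taken from the text.

```python
import math

# A minimal sketch of the buffer-to-buffer (BB) credit rule of thumb described
# above. Assumptions: ~5 microseconds of one-way propagation delay per km of
# fiber, and 10 bits on the wire per data byte (8B/10B encoding).

PROPAGATION_US_PER_KM = 5.0

def bb_credits_needed(distance_km: float, line_rate_gbps: float,
                      frame_bytes: int = 2148) -> int:
    """Estimate the BB credits needed to keep a link running at full rate."""
    frame_time_us = frame_bytes * 10 / (line_rate_gbps * 1000)  # serialization time
    round_trip_us = 2 * distance_km * PROPAGATION_US_PER_KM     # credit return delay
    return math.ceil(round_trip_us / frame_time_us)             # one credit per frame in flight

# Roughly reproduces the rules quoted above:
print(bb_credits_needed(10, 1.0625))  # ~5 credits for 10 km at 1-Gbps FC (1 per 2 km)
print(bb_credits_needed(10, 2.125))   # ~10 credits for 10 km at 2-Gbps FC (1 per 1 km)
# Smaller frames serialize faster, so more credits are needed for the same distance:
print(bb_credits_needed(10, 2.125, frame_bytes=512))
```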

Enterprise Systems Connection

Enterprise Systems Connection (ESCON) is a 200-Mbps unidirectional serial bit transmission protocol used to dynamically connect IBM or IBM-compatible mainframes with their various control units. ESCON provides nonblocking access through either point-to-point connections or high-speed switches called ESCON directors. ESCON performance is seriously affected if the distance spanned is greater than 8 km (5 miles).

Fiber Connection

Fiber Connection (FICON) is the next-generation bidirectional channel protocol used to connect mainframes directly with control units or ESCON aggregation switches, such as ESCON directors with a bridge card. FICON runs over FC at a data rate of 1.062 Gbps by using its multiplexing capabilities. One of the main advantages of FICON is its performance stability over distances. FICON can reach a distance of 100 km (62 miles) before experiencing any significant drop in data throughput.

When planning a SAN extension, several factors must be considered:

  • Present and future demand: The type and quantity (density) of SAN extension protocols to be transported, as well as specific traffic patterns and restoration techniques, need to be considered. The type and density requirements help determine the technology options and specific products that should be implemented. Growth should be factored into the initial design to ensure a cost-effective upgrade path.

  • Distances: Because of the strict latency requirements of SAN applications, especially those found in synchronous environments, performance could be severely affected by the type of SAN extension technology implemented. Table 3-2 provides some guidance on distance restrictions and other considerations for each technology option.

    Table 3-2. SAN Extension Options

    (SAN distances are listed for FC/FCIP only, for comparative purposes.)

    FC over Dark Fiber
      - SAN protocols supported: FC
      - SAN distances supported: 90 km (56 miles)[*]
      - SAN bandwidth options (per fiber pair): 1-Gbps FC (1.0625 Gbps), 2-Gbps FC (2.125 Gbps), 4-Gbps FC
      - Network-protection options: FSPF, PortChannel, isolation with VSANs

    FC over CWDM
      - SAN protocols supported: FC
      - SAN distances supported: 60 to 66 km (37 to 41 miles)[**]
      - SAN bandwidth options (per fiber pair): 1-Gbps FC (1.0625 Gbps), 2-Gbps FC (2.125 Gbps), up to 8 channels
      - Network-protection options: FSPF, PortChannel, isolation with VSANs
      - Other protocols supported: CWDM filters also support GigE

    DWDM
      - SAN protocols supported: FC, FICON, ESCON, IBM Sysplex Timer, IBM Coupling Facility
      - SAN distances supported: Up to 200 km (124 miles)[***]
      - SAN bandwidth options (per fiber pair): Up to 256 FC/FICON channels, up to 1280 ESCON channels, up to 32 channels at 10 Gbps
      - Network-protection options: Client, 1+1, y-cable, switch fabric protected, switch fabric protected trunk, protection switch module, unprotected
      - Other protocols supported: OC-3/12/48/192, STM-1/4/16/64, GigE, 10-Gigabit Ethernet, D1 Video

    SONET/SDH
      - SAN protocols supported: FC, FICON
      - SAN distances supported: 2800 km (1740 miles) with buffer credit support
      - SAN bandwidth options (per fiber pair): 1-Gbps FC (1.0625 Gbps), up to 32 channels with subrating; 2-Gbps FC (2.125 Gbps), up to 16 channels with subrating
      - Network-protection options: UPSR/SNCP, 2F and 4F BLSR/MS-SPR, PPMN, 1+1 APS/MSP, unprotected
      - Other protocols supported: DS-1, DS-3, OC-3/12/48/192, E-1, E-3, E-4, STM-1E, STM-1/4/16/64, 10/100-Mbps Ethernet, GigE

    FCIP
      - SAN protocols supported: FC over IP
      - SAN distances supported: Limited by the latency tolerance of the end application; longest tested distance is 5800 km (3604 miles)
      - SAN bandwidth options (per fiber pair): 1-Gbps FC (1.0625 Gbps)
      - Network-protection options: VRRP, redundant FCIP tunnels, FSPF, PortChannel, isolation with VSANs

    [*] Assumes the use of CWDM SFPs, no filters.
    [**] Assumes point-to-point configuration.
    [***] Actual distances depend on the characteristics of the fiber used.

  • Recovery objectives: A business-continuity strategy can be implemented to reduce an organization's annual downtime and to reduce the potential costs and intangible issues associated with downtime. Recovery with local or remote tape backup could require days, whereas geographically dispersed clusters with synchronous mirroring can result in recovery times measured in minutes. Ultimately, the business risks and costs of each solution have to be weighed to determine the appropriate recovery objective for each enterprise (a simple cost comparison is sketched after this list).

  • Original storage manufacturer certifications: Manufacturers such as IBM, EMC, HP, and Hitachi Data Systems require rigorous testing and associated certifications for SAN extension technologies and for specific vendor products. Implementing a network containing elements without the proper certification can result in limited support from the manufacturer in the event of network problems.
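
As a purely illustrative way to weigh the recovery options mentioned above, the short sketch below compares the downtime cost implied by a days-long tape restore against a minutes-scale failover. The dollar figure and recovery times are invented for the example, not taken from the text.

```python
# Toy comparison of downtime cost under two recovery strategies.
# All numbers are invented for illustration.

def downtime_cost(recovery_hours: float, cost_per_hour: float) -> float:
    return recovery_hours * cost_per_hour

COST_PER_HOUR = 1_000_000       # assumed cost of an outage, dollars per hour
TAPE_RESTORE_HOURS = 48         # days-long restore from remote tape
SYNC_MIRROR_HOURS = 0.25        # minutes-scale failover with synchronous mirroring

print(f"Tape restore:       ${downtime_cost(TAPE_RESTORE_HOURS, COST_PER_HOUR):,.0f}")
print(f"Synchronous mirror: ${downtime_cost(SYNC_MIRROR_HOURS, COST_PER_HOUR):,.0f}")
```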

Unlike ESCON, FICON supports full-duplex data transfers and sustains higher data rates over longer distances. FICON uses a layer based on technology developed for FC, along with multiplexing technology that allows small data transfers to be transmitted at the same time as larger ones. IBM first introduced the technology in 1998 on its G5 servers.

FICON can support multiple concurrent data transfers (up to 16 concurrent operations), as well as full-duplex channel operations (multiple simultaneous reads and writes), compared to the half-duplex operation of ESCON.

FICON is mapped over the FC-2 protocol layer (see Table 3-1) in the FC protocol stack, in both 1-Gbps and 2-Gbps implementations. The FC standard uses the term Level instead of Layer because there is no direct relationship between the Open Systems Interconnection (OSI) layers of a protocol stack and the levels in the FC standard.

Table 3-1. FC Levels

Level   Functionality
FC-4    Mapping (ATM, SCSI-3, IPI-3, HIPPI, SBCCS, FICON, and LE)
FC-3    Common services
FC-2    Framing protocol
FC-1    Encode/decode (8B/10B)
FC-0    Physical


Within the FC standard, FICON is defined as a Level 4 protocol called SB-2, which is the generic terminology for the IBM single-byte command architecture for attached I/O devices. FICON and SB-2 are interchangeable terms; FICON uses a connectionless point-to-point or switched point-to-point FC topology.

FCIP

Finally, before delving into SAN over MSPP, it is important to note that FC can be tunneled over an IP network, a technique known as FCIP, as shown in Figure 3-11. FC over IP (FCIP) is a protocol specification developed by the Internet Engineering Task Force (IETF) that allows a device to transparently tunnel FC frames over an IP network. An FCIP gateway or edge device attaches to an FC switch and provides an interface to the IP network. At the remote SAN island, another FCIP device receives the incoming FCIP traffic and places the FC frames back onto the SAN. FCIP devices provide FC expansion port connectivity, creating a single FC fabric.

Figure 3-11. FCIP: FC Tunneled over IP


FCIP moves encapsulated FC data through a "dumb" tunnel, essentially creating an extended routing system of FC switches. This protocol is best used in point-to-point connections between SANs because it cannot take advantage of routing or other IP management features. And because FCIP creates a single fabric, traffic flows could be disrupted if a storage switch goes down.

One of the primary advantages of FCIP for remote connectivity is its capability to extend distances using the Transmission Control Protocol/Internet Protocol (TCP/IP). However, distance achieved at the expense of performance is an unacceptable trade-off for IT organizations that demand full utilization of expensive wide-area network (WAN) bandwidth. IETF RFC 1323 adds Transmission Control Protocol (TCP) options for performance, including the capability to scale the standard TCP window size up to 1 GB. As the TCP window size widens, the sustained bandwidth rate across a long-haul (higher-latency) TCP connection increases. Early field trials showed that distances spanning more than 5806 km (3600 miles) were feasible for disk replication in asynchronous mode. Even greater transport distances are achievable: theoretically, a 32-MB TCP window with 1 Gbps of bandwidth can be extended over 50,000 km (31,069 miles) with 256 ms of latency.
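
The 32-MB window and 50,000-km figures follow from a simple bandwidth-delay calculation. The Python sketch below reproduces that arithmetic under two stated assumptions: roughly 5 microseconds of propagation delay per km of fiber, and the 256-ms figure read as the delay the window must cover.

```python
# A minimal sketch of the bandwidth-delay reasoning above. Assumptions for
# illustration only: ~5 microseconds of propagation delay per km of fiber, and
# the 256-ms figure taken as the delay the TCP window has to cover.

LINE_RATE_BPS = 1_000_000_000   # 1 Gbps
DELAY_S = 0.256                 # 256 ms
US_PER_KM = 5.0                 # fiber propagation delay per km

# TCP window needed to keep the pipe full: bandwidth x delay.
window_bytes = LINE_RATE_BPS * DELAY_S / 8
print(f"Window needed: {window_bytes / 1e6:.0f} MB")   # 32 MB

# Distance that 256 ms of propagation delay corresponds to.
distance_km = DELAY_S * 1_000_000 / US_PER_KM
print(f"Distance: {distance_km:,.0f} km")               # ~51,000 km
```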

Another advantage of FCIP is the capability to use existing infrastructures that provide IP services. For IT organizations that are deploying routers for IP transport between their primary data centers and their disaster-recovery sites, and with quality of service (QoS) enabled, FCIP can be used for SAN extension applications. For larger IT organizations that have already invested in or are leasing SONET/Synchronous Digital Hierarchy (SDH) infrastructures, FCIP can provide the most flexibility in adding SAN extension services because no additional hardware is required.

For enterprises that are required to deploy SAN extensions between various remote offices and the central office (CO), a hub-and-spoke configuration of FCIP connections is also possible. In this arrangement, applications such as disk replication can be used between the disk arrays of each individual office and the CO's disk array, but not necessarily between the individual offices' disk arrays themselves. In this scenario, the most cost-effective method of deployment is to use FCIP with routers.

SAN over MSPP

FC technology has become the protocol of choice for the SAN environment. It has also become common as a service interface in metro DWDM networks and is considered one of the primary drivers in the DWDM market segment. However, the lack of dark fiber available for lease in the access portion of the network has left SAN managers searching for an affordable and realizable solution to their storage transport needs. Thus, service providers have an opportunity to generate revenue by efficiently connecting and transporting the user's data traffic via FC handoffs. Service providers must deploy metro transport equipment that enables them to deliver these services cost-effectively and with the reliability required by their service-level agreements (SLAs). This growth mirrors the growth in Ethernet-based services and is expected to follow a similar path to adoption; that is, a transport evolution in which TDM, Ethernet, and now FC move across the same infrastructure, meeting the needs of the enterprise end user without requiring a complete hardware upgrade of a service provider's existing infrastructure.

Consider a couple of the traditional FCIP over SONET configurations. Figure 3-12 shows a basic configuration, in which the Gigabit Ethernet (GigE) port of the IP Storage Services Module is connected directly to the GigE port of an MSPP. This scenario assumes that a dedicated GigE port is available on the MSPP. Another possible configuration is to include routers between the IP Storage Services Module and the MSPP, as shown in Figure 3-13. In this case, the MSPP might not necessarily have a GigE card, so a router is required to connect the GigE connection of the IP Storage Services Module to the MSPP.

Figure 3-12. IP Storage Services Module Connected Directly to an MSPP


Figure 3-13. IP Storage Services Module Connected to Routers Interfaced to an MSPP


MSPP with Integrated Storage Card

A storage card, such as the one found in the Cisco ONS 15454 MSPP, is a single-slot card with multiple client ports, each supporting 1.0625- or 2.125-Gbps FC/FICON. It uses pluggable gigabit interface converter (GBIC) optical modules for the client interfaces, enabling greater user flexibility. The payload from a client interface is mapped directly into the SONET/SDH payload through transparent generic framing procedure (GFP-T) encapsulation. This payload is then cross-connected to the system's optical trunk interfaces (up to OC-192) for transport, along with other services, to other network elements.
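
To give a feel for how GFP-T-mapped FC consumes SONET/SDH trunk bandwidth, the sketch below uses container sizes commonly cited for transparent FC mapping (STS-24c for 1-Gbps FC, STS-48c for 2-Gbps FC); treat these mappings as illustrative assumptions rather than a statement of this specific card's behavior.

```python
# Rough trunk-bandwidth sizing for GFP-T-mapped FC clients over SONET/SDH.
# The container mappings are commonly cited values and are assumptions here;
# verify against the platform documentation.

OC192_STS1_EQUIVALENTS = 192   # an OC-192 trunk carries 192 STS-1 equivalents

# client rate -> (container label, STS-1 equivalents consumed)
FC_GFP_T_MAPPING = {
    "1G FC (1.0625 Gbps)": ("STS-24c / VC4-8c", 24),
    "2G FC (2.125 Gbps)":  ("STS-48c / VC4-16c", 48),
}

for client, (container, sts1) in FC_GFP_T_MAPPING.items():
    per_trunk = OC192_STS1_EQUIVALENTS // sts1
    print(f"{client}: {container} "
          f"({sts1} STS-1s; up to {per_trunk} such clients fit in one OC-192 trunk)")
```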

The new card fills the FC-over-SONET gap in the transport portion of the application. It allows MSPP manufacturers to address the full FC transport requirement while also providing end-to-end coverage of data-center and enterprise storage networking solutions across metropolitan, regional, and wide-area networks, as shown in Figure 3-14.

Figure 3-14. Integrated Storage Card within an MSPP


The storage interface card plugs into the existing MSPP chassis and is managed through the existing management system. Its introduction does not require a major investment in capital expenditures (CapEx) or operational expenditures (OpEx); rather, it is an evolutionary extension of services. For the service provider, this creates an opportunity to further capture market share and revenue from existing, often extensive, MSPP installations. For the enterprise, it means access to new storage-over-SONET/SDH services, enabling it to deploy needed SAN extensions and meet business-continuance objectives.

Storage Card Highlights

Consider the storage features of the Cisco ONS 15454 MSPP:

  • It supports 1-Gbps and also 2-Gbps FC with low-latency GFP-T mapping, allowing customers to grow beyond 1-Gbps FC.

  • It supports FC over protected SONET/SDH transport networks in a single network element: 16 line-rate FC on a single shelf over a fully protected transport network, such as 4F bidirectional line-switched ring (BLSR) OC-192 and dual 2F-BLSR/unidirectional path-switched ring (UPSR) OC-192.

  • It lowers CapEx and OpEx costs by using existing infrastructure and management tools.

  • It increases the service-offering capabilities.

  • It does not require upgrade of costly components of the MSPP, such as the switch matrix of the network element.

SAN Management

Storage networking over the MSPP continues the simple, fast approach used to implement traditional services on the MSPP. The GUI applications greatly speed up provisioning, testing, turn-up, and even troubleshooting of storage over the MSPP, and they reduce the need for an additional operations support system (OSS) to implement this service.



