3.2 Storage Network Management


Because optimized storage deployments require networked storage infrastructures, the corresponding storage network management, or SAN management, takes on critical importance. Today, customers can choose SAN management software that comes directly from SAN switch vendors, array vendors, server vendors, or third-party software companies that integrate directly with SAN hardware.

Storage networking implementers will benefit from careful attention to this decision. While SAN management software from equipment vendors typically offers greater functionality and features, a third-party SAN management solution can provide configuration tools and diagnostics across multivendor environments.

Most SAN management applications operate with an independent set of interfaces that drive control through that application. Efforts are underway in industry organizations such as the Storage Networking Industry Association (SNIA) to standardize common methods for storage network management and thereby facilitate greater interoperability between applications. Ultimately, this will give users the flexibility to employ multiple applications for SAN management and to shift more easily between them.

These standardization efforts fall under the overall framework of Web-Based Enterprise Management, or WBEM. Within this framework, the Common Information Model (CIM) establishes a standardized means to organize storage network-related information, such as the product characteristics of a SAN switch or disk array. Within these overarching framework models are specific means to exchange information between devices and applications, such as Extensible Markup Language (XML), Hypertext Transfer Protocol (HTTP), and Simple Network Management Protocol (SNMP).
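
As a rough illustration of a CIM query over HTTP, the sketch below uses the open-source pywbem Python library to enumerate standardized product information from a WBEM-enabled device. The address, credentials, and namespace shown are illustrative assumptions, not values from this chapter.

```python
# A minimal sketch of a CIM/WBEM query, assuming a WBEM-enabled array or
# switch at the placeholder address below. Requires the pywbem package.
import pywbem

# Connect to the device's CIM object manager over HTTP (port 5988 is the
# conventional unencrypted WBEM port).
conn = pywbem.WBEMConnection('http://192.0.2.10:5988',
                             creds=('admin', 'password'),
                             default_namespace='root/cimv2')

# Enumerate standardized product information; CIM_Product is a standard
# CIM class that models vendor, name, and version characteristics.
for product in conn.EnumerateInstances('CIM_Product'):
    print(product['Name'], product['Vendor'])
```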

3.2.1 Discovery

The first step towards effective SAN management begins with device discovery. The basic process involves the identification of storage network devices within a storage fabric. For end devices such as HBAs, disk arrays, and tape libraries, the initial connectivity and boot process establishes the login to the SAN fabric, typically through a SAN switch. The SAN switches become the initial repository of device information and can then share this data with SAN switch management applications or third-party applications.

Typically, all devices within a SAN have an Ethernet/IP interface dedicated to management. Each device has a specific IP address and communicates with other devices or with centralized management agents via SNMP, using Management Information Bases (MIBs). MIBs are basic frameworks that allow applications and devices to share device-specific information. There are standard MIBs, such as MIB-II, with information that pertains to all devices, such as the interface status of a specific port. Hardware vendors also typically provide vendor-specific MIBs that cover unique product features.
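
To make the mechanics concrete, here is a minimal sketch of polling a standard interface-status variable with the open-source pysnmp Python library. The device address and community string are placeholders; a real deployment would poll many such variables across the fabric.

```python
# A minimal sketch of polling a standard MIB variable over SNMP, assuming
# a SAN switch at the placeholder address below with the usual read-only
# community string. Requires the pysnmp package.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

# Query the operational status of interface 1 (ifOperStatus from IF-MIB,
# the successor to the original MIB-II interfaces group).
error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData('public'),                    # read-only community string
    UdpTransportTarget(('192.0.2.20', 161)),    # device management IP
    ContextData(),
    ObjectType(ObjectIdentity('IF-MIB', 'ifOperStatus', 1))))

if error_indication:
    print(error_indication)
else:
    for name, value in var_binds:
        print(f'{name} = {value}')   # e.g. ...ifOperStatus.1 = 1 (up)
```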

Using a combination of information from SAN switches, which have device information through the login process, and direct access to devices through the Ethernet and IP interfaces using SNMP, management applications have a wide array of information to provide to administrators on the general status of devices within the storage network. From this initial discovery process, more sophisticated management, such as manipulation of the storage network, can occur.

3.2.2 Zoning and Configuration

Storage networks create connections between multiple servers and storage devices using a variety of interconnect mechanisms from a single switch or hub to a complex mesh of switches that provide redundancy for high availability. The universal connectivity of storage devices to a common playing field provides tremendous flexibility to architect storage solutions. However, having that connectivity doesn't necessarily mean that one would want every storage device to be able to see every other storage device.

Managed communication between storage devices helps administrators balance the agility of universal accessibility with the business needs of resource allocation, segmentation, security, and controlled access. This managed communication begins with a process of zoning and configuration.

Let's use a simple example of a Windows server and a UNIX server on the same storage network, with two separate disk units (JBOD-A and JBOD-B, each with two individual disks) and one tape library. A typical zoning and configuration application is shown in Figure 3-4. On the right side of the diagram is a list of devices, including the individual disks, tape library, and servers (identified by their HBA interfaces). On the left side of the diagram is a list of zones. Placing devices in a particular zone ensures that only other devices within that zone can "see" each other. SAN switches enforce the zoning through different mechanisms based on the worldwide name (WWN) of the storage device or on the port of the switch to which it is attached. For more detail on zoning, see Section 3.2.3, "SAN Management Guidance: Hard and Soft Zoning."

Figure 3-4. Typical storage networking zone configuration.


In this example, the administrator has separated the disk units for the Windows and UNIX hosts. This avoids conflicts in which one operating system tries to initialize all of the visible storage. However, the tape library has been placed in both zones. To avoid potential conflicts, this configuration would need to be accompanied by a backup software application that manages access to the tape library, ensuring access by only one host at a time. Sharing the tape library allows its cost to be distributed across a greater number of servers, thereby servicing more direct backups.
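
The visibility rules in this example can be modeled in a few lines. The sketch below uses invented placeholder WWNs to show how zone membership determines which devices can "see" each other:

```python
# A toy model of WWN-based zone membership, mirroring the example above:
# the Windows and UNIX hosts each get their own disk zone, and the tape
# library appears in both. The WWN strings are illustrative placeholders.
zones = {
    'zone_windows': {'wwn:win-hba', 'wwn:jbod-a-disk1', 'wwn:jbod-a-disk2',
                     'wwn:tape-library'},
    'zone_unix':    {'wwn:unix-hba', 'wwn:jbod-b-disk1', 'wwn:jbod-b-disk2',
                     'wwn:tape-library'},
}

def can_see(device_a: str, device_b: str) -> bool:
    """Two devices can communicate only if they share at least one zone."""
    return any(device_a in members and device_b in members
               for members in zones.values())

print(can_see('wwn:win-hba', 'wwn:jbod-a-disk1'))   # True
print(can_see('wwn:win-hba', 'wwn:jbod-b-disk1'))   # False: separate zones
print(can_see('wwn:unix-hba', 'wwn:tape-library'))  # True: shared library
```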

3.2.3 SAN Management Guidance: Hard and Soft Zoning

Zoning applications typically operate in one of two modes: hard zoning and soft zoning. Hard zoning, also referred to as port-based zoning, means that all of the devices connected to a single port remain mapped to that port. For example, three drives on a Fibre Channel arbitrated loop may be allocated via a specific port to Zone A. If another drive is assigned to that loop, attaching through the same port, it automatically appears in Zone A.

Another zoning mechanism, soft zoning, or WWN zoning, uses the unique Fibre Channel address of the device. In IP or iSCSI terms, the device has a Worldwide Unique Identifier (WWUI). With soft zoning, devices may be moved and interconnected through different SAN switches but will remain in the same zone.

Hard zoning offers more physically oriented security, while soft zoning offers more flexibility through software-enforced zoning. For very large configurations, customers will likely benefit from the intelligence of soft zoning coupled with the appropriate security and authentication mechanisms.
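
A short sketch can make the distinction concrete: hard zoning keys membership to a switch port, while soft zoning keys it to the device's WWN. All names below are illustrative.

```python
# Contrasting the two enforcement keys. With hard (port-based) zoning,
# membership follows the switch port; with soft (WWN-based) zoning,
# membership follows the device's worldwide name.
hard_zones = {'zone_a': {('switch1', 'port3')}}   # keyed by switch port
soft_zones = {'zone_a': {'wwn:drive-42'}}         # keyed by device WWN

def zones_of_port(switch: str, port: str):
    """Any device plugged into a zoned port inherits that port's zones."""
    return [z for z, ports in hard_zones.items() if (switch, port) in ports]

def zones_of_wwn(wwn: str):
    """The device keeps its zones wherever it attaches in the fabric."""
    return [z for z, wwns in soft_zones.items() if wwn in wwns]

print(zones_of_port('switch1', 'port3'))  # ['zone_a'] for whatever attaches
print(zones_of_wwn('wwn:drive-42'))       # ['zone_a'] regardless of port
```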

3.2.4 SAN Topologies

In addition to zoning devices in a SAN, some applications offer the ability to visualize the SAN in a topology view. A sample SAN topology is shown in Figure 3-5.

Figure 3-5. Topology view of a storage area network.


SAN topology views can serve as effective management utilities, allowing administrators to quickly see the entire storage network and to drill down to device levels. In some cases, it may be easier for administrators to assign and allocate storage capacity using topology managers.

SAN topology views also provide more visibility to the network connectivity of SANs. While a zoning and configuration tool helps clarify the communication relationship between storage devices, topology managers help clarify the communication means between storage devices. Specifically, topology managers can show redundant connections between switches, redundant switches, and the available paths that link one storage device to another.
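
The function below sketches, over an invented dual-switch fabric layout, the kind of computation a topology manager performs when it displays the redundant paths between a host and an array:

```python
# A small sketch of what a topology manager computes: the distinct paths
# between two devices across a redundant pair of switches. The fabric
# layout is an invented example, expressed as an adjacency list.
fabric = {
    'host':    ['switch1', 'switch2'],   # dual-attached HBA
    'switch1': ['host', 'array'],
    'switch2': ['host', 'array'],
    'array':   ['switch1', 'switch2'],   # dual-attached storage
}

def all_paths(graph, start, end, path=None):
    """Enumerate loop-free paths with a depth-first search."""
    path = (path or []) + [start]
    if start == end:
        return [path]
    return [p for node in graph[start] if node not in path
            for p in all_paths(graph, node, end, path)]

for p in all_paths(fabric, 'host', 'array'):
    print(' -> '.join(p))   # two redundant paths, one per switch
```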

3.2.5 Monitoring

Monitoring allows storage administrators to keep the pulse of the storage network. This includes the general health of the SAN hardware, data activity levels, and configuration changes.

Event and error tracking is used to keep logs of the activity within a storage network. SAN management applications track events such as device additions, zone changes, and network connectivity changes. These logs keep detailed records of SAN activity and can serve as useful tools if problems occur. With hundreds or thousands of SAN configuration operations taking place on any given day, the ability to go back in time to analyze events is invaluable. Similarly, error logs help diagnose the root cause of potential failures within a SAN. Minor errors, such as an unsuccessful first login to a SAN switch, may not mean much as stand-alone events, but a pattern of such errors can help administrators rapidly analyze and repair potential problems in the SAN.
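
The sketch below illustrates this kind of pattern analysis on a handful of invented log entries: any single failed fabric login is ignorable, but a burst of them from one device flags a likely cabling or transceiver problem.

```python
# Pattern analysis over an event log: individual failed-login events are
# noise, but a burst from one device is a signal. Entries are invented.
from collections import Counter

event_log = [
    ('2003-05-01T10:00:02', 'wwn:unix-hba', 'FABRIC_LOGIN_FAILED'),
    ('2003-05-01T10:00:05', 'wwn:unix-hba', 'FABRIC_LOGIN_FAILED'),
    ('2003-05-01T10:00:09', 'wwn:unix-hba', 'FABRIC_LOGIN_FAILED'),
    ('2003-05-01T10:01:00', 'wwn:win-hba',  'ZONE_CHANGE'),
]

failures = Counter(dev for _, dev, event in event_log
                   if event == 'FABRIC_LOGIN_FAILED')
for device, count in failures.items():
    if count >= 3:   # an arbitrary threshold for this sketch
        print(f'{device}: {count} failed logins -- check cabling or GBIC')
```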

Storage administrators can use alarms and traps to monitor the storage network effectively. A trap is a notification triggered when a monitored variable crosses a set threshold. For example, a trap can be set for a SAN switch to send an alarm when the temperature of the box reaches a "red zone." Such traps are helpful because a problem may not be directly related to failures within the equipment. For example, a broken fan would automatically send an alarm, but if someone placed a large box next to a data center rack, blocking airflow, a temperature gauge would be the only mechanism providing preemptive awareness of overheating.
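
A minimal sketch of threshold-based alarming follows; read_switch_temperature() is a hypothetical stand-in for an SNMP poll of a vendor temperature variable, and the threshold is an invented figure.

```python
# Threshold-based alarming: poll a temperature reading and raise an
# alert once it crosses a "red zone" threshold. All values are invented.
RED_ZONE_CELSIUS = 55

def read_switch_temperature() -> float:
    """Hypothetical stand-in for an SNMP poll of a vendor temperature MIB."""
    return 57.5   # placeholder reading

def check_temperature():
    temp = read_switch_temperature()
    if temp >= RED_ZONE_CELSIUS:
        # In practice this alarm would be forwarded as an SNMP trap to the
        # enterprise management system, as described below.
        print(f'ALARM: switch temperature {temp} C exceeds red zone')

check_temperature()
```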

SAN management traps typically integrate with large enterprise management systems via SNMP. By tying into these larger software systems, SAN alarms can be directed to the appropriate support staff through existing email, paging, or telephone tracking systems.

Perhaps the most important monitoring component for progressive deployment of storage networking infrastructures is performance. Performance monitoring can be tricky. Ultimately, organizations measure performance of applications, not storage throughput. However, the underlying performance of the SAN helps enable optimized application performance.

Since storage networks carry the storage traffic without much interpretation of the data, SAN performance metrics focus on link utilization, or more specifically, how much available bandwidth is being used for any given connection. An overengineered SAN with low utilization means excess infrastructure and high costs. An overworked SAN with high utilization means more potential for congestion and service outages for storage devices.
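
The underlying arithmetic is simple, as the sketch below shows for an invented example: a nominal 2 Gb/s Fibre Channel link carrying 45 MB/s of traffic is running at roughly 18 percent utilization (real monitors also account for encoding and framing overhead).

```python
# Basic utilization arithmetic for a single link. Figures are invented.
LINK_GBPS = 2.0                  # nominal 2 Gb/s Fibre Channel link
observed_mb_per_s = 45.0         # measured payload throughput

link_mb_per_s = LINK_GBPS * 1000 / 8     # ~250 MB/s of nominal capacity
utilization = observed_mb_per_s / link_mb_per_s
print(f'link utilization: {utilization:.0%}')   # -> 18%
```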

3.2.6 SAN GUIDANCE: Protocol Conversion

As outlined in Chapter 2, three transport protocols exist for IP storage: iSCSI, iFCP, and FCIP. For iSCSI-to-Fibre Channel conversion, a key part of storage network management, IT professionals can use the framework in Figure 3-6 to evaluate product capabilities. Not all IP storage switches and gateways provide full conversion capabilities. For example, some products may support iSCSI servers connecting to Fibre Channel storage, but not vice versa.

Additionally, storage-specific features for applications like mirroring or advanced zoning capabilities between IP and FC SANs may affect protocol conversion capabilities. This type of connectivity, items 2 and 3 in Figure 3-6, should be specifically examined in multiprotocol environments.

Figure 3-6. Types of IP and FCP conversion.


For Fibre Channel-to-Fibre Channel interconnect across an IP network, as item 4 illustrates, the iFCP and FCIP protocols are more suitable because of their ability to retain more of the FCP layer.

3.2.7 Hard Management

Physical documentation of SANs earns the title "hard management" from how infrequently it is implemented, not from any implementation challenge. No matter how much software is deployed or how detailed the visibility of the applications, nothing protects a business better than clear, easy-to-follow policies and procedures. Obviously, this is more easily said than done, and often the primary challenge in disaster recovery scenarios lies in understanding the design guidelines used by the SAN architects. If those architects leave the company or are unavailable, the inherent recovery challenges increase dramatically.

Documenting SAN deployments, including a set of how-to instructions for amateur users, serves as both a self-check on implementation and an invaluable resource for those who may need to troubleshoot an installation sight unseen. At a minimum, items for this documentation would include applications used, vendor support contacts, passwords, authorized company personnel and contact information, backup and recovery procedures, and storage allocation mechanisms.

In conjunction with documenting storage administration policies and procedures, proper cabling and labeling of storage configurations can dramatically save time and effort in personnel costs, one of the largest components of the operational IT budget.


