4.8 Fibre Channel Management Concepts

The preceding sections have described the hardware elements that constitute SAN building blocks. A fair amount of software is also involved in a SAN, particularly for management, data security, and SAN applications such as backup and restore. Sections 4.8.1 and 4.8.2 introduce some concepts essential to SAN management and data security. Indeed, these concepts are at the very heart of making SANs usable at all.

Now that single networks have multiple computers and multiple storage units, it is desirable to restrict each computer's view (computers are called hosts in Fibre Channel terms) to particular storage subsystems and particular units within those subsystems. This need is especially strong when a host is running Windows NT, because Windows insists on mounting any device that it discovers. UNIX systems, on the other hand, have a mount table and will mount only the devices explicitly listed in that table. Even with UNIX hosts, it is desirable to restrict access for security reasons and to avoid possible data corruption. Access can be restricted by three different types of mapping or zoning functionality:

  1. Basic functionality implemented within the host, perhaps within the software driver for the HBA

  2. Functionality at the switch

  3. Functionality at the storage subsystem level

4.8.1 Zoning

The term zoning is associated intimately with switches. Zoning allows certain ports on a switch to have connections with only certain other ports. In some instances, zoning may also be used to screen certain FC control frames from propagating; for example, when a new storage device enters a loop, the LIP (loop initialization primitive) may optionally be screened from other storage devices.

Functionally, zoning allows a particular computer to have a direct connection with a particular storage subsystem. The drawback is that the computer is given those SAN resources exclusively for that connection and will typically underutilize them. In particular, zoning does not allow sharing of the bandwidth or of the storage subsystem resources.

Think of zoning as being analogous to IP port configuration on a firewall router. Another way to think of zoning is as the equivalent of setting up virtual LANs (VLANs) within an existing LAN environment. In a VLAN, only certain devices can see each other, even though additional devices may be present on the same physical LAN. Similarly, zoning restricts elements on a SAN (especially initiators) to knowledge (of) and access (to) only certain storage units, even though additional servers and storage units may exist on the same physical SAN.

Figure 4.9 shows a simple view of zoning. The SAN has three servers and three storage units. The different shading indicates different zones.

Figure 4.9. Zoning

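The zone model described above can be sketched as a simple membership check: each zone is a set of switch ports, and two ports may communicate only when some zone contains both. This is an illustrative sketch, not a real switch API; the class and method names are assumptions.

```python
# Hypothetical sketch of switch zoning: each zone is a set of port IDs,
# and two ports may communicate only if they share at least one zone.
# All names here are illustrative, not taken from any real switch API.

class ZoningSwitch:
    def __init__(self):
        self.zones = {}  # zone name -> set of port IDs

    def add_zone(self, name, ports):
        self.zones[name] = set(ports)

    def can_communicate(self, port_a, port_b):
        # Ports may talk only when some zone contains both of them.
        return any(port_a in z and port_b in z
                   for z in self.zones.values())

switch = ZoningSwitch()
switch.add_zone("zone_A", {1, 4})   # server on port 1, storage on port 4
switch.add_zone("zone_B", {2, 5})
print(switch.can_communicate(1, 4))  # True: same zone
print(switch.can_communicate(1, 5))  # False: no shared zone
```

Overlapping zones, mentioned later in this section, fall out naturally: a port listed in two zones can reach the members of both.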

LUNs can be shared via SAN file system sharing software. With this software, one or more servers act as metadata servers. Software is installed on the client computer (which wants to access a file on the SAN) and on the metadata server. The metadata server provides the client computer with the information that maps a logical offset in a file to a physical block number on a specified device. This knowledge then allows the client computer to access the file directly over the SAN without moving data through the metadata server. If this is done cleverly enough, the regular file permissions on the client computer still apply, and the administrator need not do anything different in terms of file-sharing permissions and security.
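The offset-to-block translation a metadata server performs can be sketched as an extent lookup. The block size, extent layout, and names below are assumptions for illustration only, not any particular SAN file system's format.

```python
# Illustrative sketch of a metadata server's extent map, assuming 4 KiB
# blocks and a simple extent list per file; all names are hypothetical.

BLOCK_SIZE = 4096

class MetadataServer:
    def __init__(self):
        # file -> list of (device, starting physical block, length in blocks)
        self.extents = {}

    def add_extents(self, path, extents):
        self.extents[path] = extents

    def resolve(self, path, logical_offset):
        """Map a logical byte offset to (device, physical block number)."""
        block = logical_offset // BLOCK_SIZE
        for device, start, length in self.extents[path]:
            if block < length:
                return device, start + block
            block -= length
        raise ValueError("offset beyond end of file")

mds = MetadataServer()
mds.add_extents("/data/file1", [("lun3", 1000, 8), ("lun3", 5000, 8)])
print(mds.resolve("/data/file1", 0))         # ('lun3', 1000)
print(mds.resolve("/data/file1", 9 * 4096))  # second extent: ('lun3', 5001)
```

Once the client holds the (device, block) answer, it issues the read or write over the SAN itself; only this small lookup travels through the metadata server.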

One may define multiple zones and also have a single node participate in multiple zones; that is, some zones may overlap with others. Zoning may be accomplished in multiple ways:

  • Zoning by port number. The advantage here is flexibility: if the device at a port is replaced by another device, no reconfiguration is needed.

  • Zoning by World Wide Name. We can accomplish zoning by specifying which WWNs are considered to be in the same zone; some WWNs may be defined to be in multiple zones. The advantage here is security, but at the cost of flexibility: reconfiguration may require a server reboot.

  • Soft zoning. Soft zoning is accomplished by means of a name server (software) running within the switch. Soft zoning may be by port number, by World Wide Name, or by a combination of both. The name server has a database that stores WWNs, port numbers, and zone IDs.

  • Hard zoning. Hard zoning is enforced by a routing table stored within the switch hardware. Because the routing table operates on port addresses, hard zoning is typically based on port numbers rather than WWNs.
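The soft-zoning name-server database described above can be sketched as follows. Each registration records a WWN, a port number, and the zone IDs it belongs to; a query returns only the devices that share at least one zone with the requester. The structure and names are assumptions for illustration.

```python
# Sketch of a soft-zoning name-server database: each entry records a
# WWN, a port number, and the set of zone IDs it belongs to. A query
# from an initiator reveals only devices sharing at least one zone.
# All WWNs and names here are made up for illustration.

class NameServer:
    def __init__(self):
        self.entries = []  # list of (wwn, port, set of zone IDs)

    def register(self, wwn, port, zone_ids):
        self.entries.append((wwn, port, set(zone_ids)))

    def query(self, wwn):
        """Return the WWNs visible to `wwn`: those sharing a zone."""
        my_zones = next(z for w, _, z in self.entries if w == wwn)
        return [w for w, _, z in self.entries
                if w != wwn and z & my_zones]

ns = NameServer()
ns.register("10:00:00:00:c9:aa:bb:01", 1, {"zone1"})           # host A
ns.register("50:06:0e:80:00:00:00:10", 4, {"zone1", "zone2"})  # storage
ns.register("10:00:00:00:c9:aa:bb:02", 2, {"zone2"})           # host B
print(ns.query("10:00:00:00:c9:aa:bb:01"))  # only the storage port
```

Note the "soft" quality: the name server merely withholds information. A host that already knows a hidden device's address could still attempt to talk to it, which is why hard zoning enforces the restriction in the routing hardware as well.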

4.8.2 LUN Masking

Storage resources may be "partitioned" into multiple subunits identified by logical unit numbers (LUNs). SCSI-2 supports up to 8 LUNs per target; later SCSI standards allow many more.

Functionally, LUN masking allows a particular computer to access a specific storage subunit on a specific storage system. More importantly, it is a way of ensuring that certain computers or servers do not have access to a particular LUN. LUN masking allows storage resources and, implicitly, network bandwidth to be shared, but the LUN itself is not shared. To allow true sharing of a single LUN by multiple computer systems, one needs an enhanced file system, as described in Chapter 6. LUN masking is essential to guaranteeing data integrity in a SAN environment. Note that LUN masking provides only disk-level security, not necessarily file-level security; additional software is needed for the latter.
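Conceptually, a LUN mask is just a table mapping each host's HBA WWN to the set of LUNs it may see, with everything else denied by default. The sketch below is illustrative; the class name and WWNs are invented.

```python
# Minimal sketch of a LUN-masking table as a storage controller might
# hold it: HBA WWN -> set of LUNs that host may access. The WWNs and
# LUN numbers here are made up for illustration.

class LunMask:
    def __init__(self):
        self.allowed = {}  # HBA WWN -> set of permitted LUNs

    def permit(self, wwn, luns):
        self.allowed.setdefault(wwn, set()).update(luns)

    def may_access(self, wwn, lun):
        # Unknown hosts see nothing: deny by default.
        return lun in self.allowed.get(wwn, set())

mask = LunMask()
mask.permit("10:00:00:00:c9:12:34:56", {0, 1})  # database server
mask.permit("10:00:00:00:c9:65:43:21", {2})     # backup server
print(mask.may_access("10:00:00:00:c9:12:34:56", 1))  # True
print(mask.may_access("10:00:00:00:c9:65:43:21", 1))  # False
```

Sections 4.8.2.1 through 4.8.2.4 differ mainly in where such a table lives and who enforces it, which determines whether the masking is mandatory or merely voluntary.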

LUN masking does allow flexibility. LUNs can be easily reassigned to different computer systems. There are various ways of achieving LUN masking, each with its advantages and disadvantages. In general, the masking may be done

  • In hardware at the HBA

  • In hardware at the FC switch

  • In hardware at the FC storage device

  • In software at the host computer

These options are described in Sections 4.8.2.1 through 4.8.2.4.

4.8.2.1 LUN Masking in the HBA BIOS

In the HBA BIOS, LUN masking is accomplished by masking away all LUNs that are not mapped in an HBA BIOS table. Thus the host (in which the HBA is installed) simply does not learn about the existence of the LUNs that it is not expected to see.

The drawback with this method is that it is voluntary and depends on correct configuration. Any system whose HBA is incorrectly configured, or that does not implement this functionality, can access LUNs that it is not supposed to access. Another problem with this approach is that dynamically managing and reconfiguring such a system can be problematic.

4.8.2.2 LUN Masking in Fibre Channel Switches

It is fairly easy for Fibre Channel switches to implement zoning: an incoming frame is either forwarded or dropped on the basis of its source and destination port addresses. LUN masking puts more overhead on a Fibre Channel switch, requiring it to examine the first 64 bytes of each frame. This additional work is seen as a performance issue for most FC switches and therefore usually is not implemented.
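The extra per-frame work can be sketched roughly as follows. In an FCP command frame, the 24-byte frame header carries the destination ID (bytes 1-3) and source ID (bytes 5-7), and the FCP_CMND payload begins with an 8-byte FCP_LUN field, so the LUN falls within the first 64 bytes. Decoding the LUN from a single byte, as below, holds only for small LUNs under simple addressing; the whole sketch is a deliberate simplification, not switch firmware.

```python
# Rough sketch of the per-frame inspection that switch-based LUN
# masking would require. Offsets follow the FC frame layout (24-byte
# header, D_ID in bytes 1-3, S_ID in bytes 5-7, FCP_LUN in the first
# 8 payload bytes); the one-byte LUN decode is a simplification that
# holds only for small LUNs.

FRAME_HEADER_LEN = 24

def inspect_frame(frame, allowed):
    """Return True to forward the frame, False to drop it.

    `allowed` maps (source ID, destination ID) -> set of permitted LUNs.
    """
    d_id = frame[1:4].hex()
    s_id = frame[5:8].hex()
    # Second byte of FCP_LUN carries the LUN under simple addressing.
    lun = frame[FRAME_HEADER_LEN + 1]
    return lun in allowed.get((s_id, d_id), set())

# Build a fake 64-byte frame: header plus the start of an FCP_CMND payload.
frame = bytearray(64)
frame[1:4] = bytes.fromhex("010200")   # D_ID: storage port
frame[5:8] = bytes.fromhex("010100")   # S_ID: host port
frame[25] = 3                          # LUN 3
allowed = {("010100", "010200"): {0, 3}}
print(inspect_frame(bytes(frame), allowed))  # True: LUN 3 permitted
```

Plain zoning needs only the two ID fields in the header; reaching 25 bytes into every frame for the LUN is the extra cost that makes most switch vendors decline to implement this.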

4.8.2.3 LUN Masking in Fibre Channel Storage Controllers and Routers

This method of LUN masking is neither voluntary for the attached hosts nor subject to partial host participation. The LUN masking is implemented in the storage controller or router firmware. Essentially, the storage controller or router is configured with a table mapping HBA WWNs to the LUNs (defined within the controller) that each host is allowed to access. The big advantage here is that the configuration is independent of the configuration of intervening hubs or switches.

The drawback with this method is that the implementations are all vendor proprietary with no easy way for a single management console to reconfigure or even just query the settings, even though most vendors provide interfaces for managing the mappings.

Crossroads Systems, EMC, Dot Hill, and HP (with its StorageWorks offering) are some examples of vendors with such functionality. The vendors call this functionality by proprietary names; for example, Crossroads calls it Access Controls, and HP's StorageWorks calls it Selective Storage Presentation.

4.8.2.4 LUN Masking via Host Software

LUN masking is accomplished in host software, typically by means of code in a device driver. The code must be in kernel mode because the whole idea is to prevent the operating system from claiming ownership of a LUN, and the operating system would do that before a user mode application got a chance to run.

This masking can be accomplished either as part of the base operating system or outside the base operating system. In the absence of a solution from Microsoft, some vendors have added code to their HBA driver to provide LUN-masking functionality. Typically the driver issues a Report LUNs command to each device on the bus, and before returning the list of LUNs to the Windows NT operating system, the driver culls LUNs from the list on the basis of some other data that it queries (such as registry information in Windows NT), thus "hiding" some LUNs from the Windows NT operating system.
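The driver-side culling described above amounts to filtering the Report LUNs result against a per-host configuration before the operating system ever sees it. The sketch below stands in for that logic; the function name and the stand-in for the registry data are hypothetical.

```python
# Hypothetical sketch of driver-side LUN culling: the driver obtains
# the full LUN list from a Report LUNs command, then filters it against
# per-host configuration data (standing in for Windows NT registry
# information) before handing the list to the operating system.

def cull_luns(reported_luns, visible_luns):
    """Hide every LUN not named in the host's configuration."""
    return [lun for lun in reported_luns if lun in visible_luns]

reported = [0, 1, 2, 3, 4]    # what Report LUNs returned
registry_visible = {0, 2}     # per-host config, e.g. from the registry
print(cull_luns(reported, registry_visible))  # [0, 2]
```

Because the filtering happens inside one host's driver, a host running a stock driver simply skips it; this is exactly the "voluntary participation" weakness the next paragraph describes.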

The main problem with this method is that it is voluntary and hence subject to partial participation. This means that computers that do not have this custom HBA driver will not participate in the LUN masking. The solution also runs into problems with scaling for extremely large SANs because it is difficult to configure the many servers and their HBA drivers. The advantage is that this method makes it easy for LUNs to be shared by multiple servers.

Emulex, Dell, and JNI are some examples of vendors that offer such functionality.

4.8.2.5 LUN Masking and the Future of Windows NT

At press time, Microsoft was believed to be working on implementing LUN-masking capability in the port driver; this functionality is not present in Windows Server 2003. The advantage of placing the functionality in the port driver is that the port driver is always loaded, so the window of opportunity for nonparticipation in the LUN masking is considerably reduced: the chances of having the wrong port driver loaded are considerably smaller than the chances of having the wrong miniport driver loaded. Some preliminary indications are that any such implementation, were it indeed to happen, would allow an administrator to set the LUNs visible to a server. The administrator would be able to modify this list, including modifying it in a nonpersistent way; in that case the change would take effect but would no longer apply after the next reboot of the server.


   


Inside Windows Storage: Server Storage Technologies for Windows 2000, Windows Server 2003 and Beyond
ISBN: 032112698X
Year: 2003
Pages: 111
Author: Dilip C. Naik
