13.1 Standardization

Standardization is the foundation of the open systems environments that have displaced the monolithic, proprietary computing technologies of the 1960s and 1970s. In that former state of single-vendor monopoly, everything from networking protocols to cabling schemes was a closed system, forcing customers to depend on a single supplier that set its own, often exorbitant, prices. This gave vendors tremendous power over customers, to the extent that IT administrators often risked their careers by seeking alternative solutions or suppliers. The monopoly over computer systems and networking was eventually broken by technologists who produced architectures and protocols that could be implemented by multiple vendors, fostering competition to deliver similar solutions more economically. By the mid-1980s, the standardization of Ethernet, TCP/IP, SCSI, and other technologies, along with the decentralization of computing resources through the introduction of networked computing, had validated the open systems initiative and made standards compliance a prerequisite for technology acquisition. Today, a customer request for quotation (RFQ) is typically accompanied by a laundry list of compliance checkboxes for IETF, ANSI, and IEEE standards, all of which must be satisfied before a vendor's products will be considered.

The scope of standards-based technologies is currently limited to the lower levels of computing systems. Cable plants, interconnections, transport protocols, and management protocols may be defined by standards, but processors, operating systems, and upper-layer applications are not. Consequently, although customers can impose rigorous requirements for standards-compliant infrastructures, the criteria for application selection are more subjective. De facto or in-house standards vary from customer to customer, with one customer preferring, say, Windows and another Solaris. In storage networking, upper-layer storage applications such as server clustering, backup/restore, volume management, and storage virtualization are predicated on a standards-compliant infrastructure, but the applications themselves are largely free from the constraints of standardization. Data backed up with one application (such as Veritas), for example, cannot be restored with another (such as Legato). This situation creates an inherent barrier to uniform operation of storage networking systems as well as to "single pane of glass" management solutions that span both heterogeneous storage infrastructures and applications.

The most interesting chess moves in the game of market dominance are played out at the top, by vendors of storage applications and operating systems, who leverage the benefits of underlying open systems architectures to build their own proprietary solutions. At this level, the marketplace, and not standards organizations, determines what is viable and compliant. Because customers are not aggressively driving open systems criteria to the upper application layers and seem content to accept de facto instead of industry standards, they should be prepared to accept the consequences: vendor lock-in, monopoly pricing, and incompatibilities due to proprietary implementations.

The standards process itself reveals a fundamental contradiction: vendors must demonstrate standards compliance to appease customers while differentiating their features and functionality from those of competitors. In standards organizations, this contradiction surfaces in the maneuvering by vendors to exert influence over the scope and content of works in progress. A vendor may, for example, submit a nearly finished technology to an open standards process, having already deployed it in products but needing the Good Housekeeping Seal of Approval of standards compliance. This tactic removes the stigma of proprietary ownership while securing a first-to-market opportunity. Similarly, features of a proposed protocol or architecture may play to the strengths of one vendor and deliberately against the weaknesses of others. The deliberate stalling on Fibre Channel switch routing protocols in ANSI T11 is a classic and notorious example of vendor manipulation of the standards process. These technical parries and counterthrusts are delivered through e-mail reflector threads, proposals, and heated discussions as a specification struggles from draft to full standard.

In addition, even in vendor-neutral standards organizations such as the IETF, which fosters individual rather than company contribution, vendors are eager to occupy key positions in the standards hierarchy, at a minimum to monitor, if not control, the standards process. This Machiavellian behavior may seem in stark contrast to the rational and impartial conduct required for technology development, but it is simply the natural consequence of technology evolution in a market economy. The missing counterweight to vendor influence is more active participation by customers, who, instead of being passive recipients of open systems technology, can ensure that the vendor community stays focused on customer requirements. Such participation would also shorten the validation cycle: today, technology must spend considerable time in the market before the standards wheat is separated from the standards chaff.

Standardization of storage networking technology is especially complex. The melding of storage with networking has generated new technical challenges, not all of which lend themselves to uniform requirements that can be codified into standards. Those that do lend themselves to uniform requirements have fallen under the purview of separate standards bodies, which may have very different strategic charters. SCSI, for example, falls under the ANSI T10 Committee; Fibre Channel under ANSI T11; the Fibre Channel management MIB under IETF; iSCSI, iFCP, and FCIP under IETF; Gigabit Ethernet under IEEE; the Common Information Model under DMTF (Distributed Management Task Force) and potentially the SNIA; storage security under ANSI T11 for some components and IETF for others; InfiniBand under the IBTA (InfiniBand Trade Association); and so on. The SNIA Technical Council is analyzing storage virtualization technologies to see whether standardization is feasible, although the initial challenge has been simply to define what storage virtualization is. In addition, the SNIA may become a standards body for storage networking-specific areas such as CIM/WBEM management, something that would require coordination with DMTF.
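To make concrete what standardization at the management layer buys, consider the Common Information Model in practice. The short sketch below, in Python, assumes the open source pywbem library and a hypothetical array address, credentials, and namespace; because the CIM_StorageVolume class is defined by the DMTF's CIM schema, the same enumeration should in principle work against any CIM/WBEM-compliant array, regardless of vendor.

    import pywbem

    # Connect to a hypothetical CIM object manager embedded in (or
    # proxying for) a storage array. The URL, credentials, and
    # namespace here are illustrative assumptions.
    conn = pywbem.WBEMConnection(
        'https://array.example.com:5989',
        creds=('admin', 'secret'),
        default_namespace='root/cimv2',
    )

    # CIM_StorageVolume is defined by the DMTF CIM schema, so any
    # compliant array should answer the same enumeration.
    for vol in conn.EnumerateInstances('CIM_StorageVolume'):
        print(vol['ElementName'], vol['BlockSize'], vol['NumberOfBlocks'])

A vendor-neutral management tool could run this same loop against every array in a heterogeneous SAN, which is precisely the "single pane of glass" outcome that standardization at this layer promises.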

The complex coordination required to sustain an open systems framework for storage networking may be hidden from customer view, but the practical outcome is not. The two main complaints still lodged against SANs are lack of interoperability and lack of comprehensive management tools. In addition to delays due to vendor competition and subterfuge, storage networking technology has been hampered by the lack of a single umbrella under which diverse standards initiatives can be coordinated and brought to fruition. As a volunteer, vendor-focused organization, the SNIA has contributed to this effort with technical workgroups that feed standards requirements to ANSI, the IETF, and other bodies. A much higher level of synchronization is needed, though, to close the gap between formulating requirements and delivering viable standards, embodied in real products, to customers. More active participation by the consumers of shared storage technology would help to achieve this goal.


