Virtualizing the Infrastructure


The second step in the building block process is the virtualize phase.

Virtualize: Providing an abstract view of a collection of discrete compute, network, and storage resources in the infrastructure layer to reduce management complexity and increase operational efficiency

Virtualization has been discussed in earlier chapters. Because the word is often overused in the IT industry, great care should be taken to ensure that any discussion of virtualization is given specific clarity and context. Virtualization is a term used to convey the abstraction of some entity; for example, an operating system virtualizes the server hardware to provide greater flexibility and utilization.

In Chapter 4, virtualization was discussed as part of the preparation for N1 Grid solutions. Preparing for virtualization requires that the design and delivery of services be thought of in new ways. The promise of greater flexibility and improved utilization through the application of virtualization alters traditional stovepipe system design methodologies. The whole idea of designing and preparing for the N1 Grid system is that it provides a new paradigm, and within that new paradigm, virtualization is one of the key components.

Given that virtualization adds value to the adoption of the N1 Grid system and is a key component that must be planned for to achieve that change, how is it specifically leveraged? Within the context of this chapter, which focuses on the architecture of infrastructure optimization, virtualization is leveraged to provide greater flexibility and utilization of the components in the hardware layer. This section examines how virtualization is implemented to optimize the compute, network, and storage hardware elements of the overall IT environment.

As the next building block of infrastructure optimization, virtualization builds on the foundation of common platforms, enforced standards, and modular deployments. The goal of virtualization is to provide an abstracted view that delivers greater flexibility and utilization. To achieve that view, the implementation of virtualization must still interface with each of the underlying individual components. The greater the diversity of those components, the greater the challenge in delivering virtualization, which highlights the hierarchy of efficiency that the N1 Grid architecture promotes.

Virtualization can be delivered without the support of an optimized build process; however, as stated, the increase in overall diversity makes the job very challenging. A limited, standardized, and modular infrastructure of compute, network, and storage hardware greatly facilitates the virtualization process. The concept of constructive virtualization was defined as using abstractions in an appropriate and ultimately productive manner. This concept is important to the discussion of infrastructure virtualization. As virtualization is pushed forward throughout the IT industry, the specific features required to deliver infrastructure virtualization cannot be overlooked.

Hardware infrastructure has its own unique issues of cost and complexity. Those issues can only be solved with specific infrastructure virtualization. This value should not be overlooked in a rush to deliver functionality further up the stack. It is fully recognized that constructive infrastructure virtualization is in the early phases of broad industry deployment. As greater features and functionality are introduced in this space, the adoption of infrastructure virtualization will increase and the value of its implementation will improve dramatically.

Constructive infrastructure virtualization can be delivered today. Compute, network, and storage virtualization are discussed individually in this section. The discussions focus on the specific cost and complexity problems that exist, the opportunities to implement virtualization, and the future for the individual components of the infrastructure.

Compute Components

The IT industry had never seen an explosion in the deployment of compute resources like the one it experienced in the late 1990s. The race to the Internet drove vast deployments of computing power across the enterprise. Unfortunately, that excessive growth came at a huge expense. With companies using typical multitiered architectures and too often bound by traditional business buying rules, server sprawl became prevalent. Because services and applications were not designed to share, each had to be deployed with its own independent and excessive compute capacity to meet peaks in demand. The typical compute environment had simply gotten out of control.

Virtualization of the compute infrastructure holds great promise to alleviate some of the problems associated with the server sprawl that plagues many data centers today. The vision of virtualized compute components in the N1 Grid system is to hide considerable physical complexity. N1 Grid software virtualizes servers into dynamic resource pools. This dramatically improves resource utilization and reduces management complexity.

The N1 Grid software's compute virtualization performs the actual coordination of the various platform facilities or components by converting high-level service requirements into lower-level constructs, such as:

  • The number of processors or nodes required to provide the appropriate performance, scalability, and availability of a service

  • How service components should recover in the event of failure

The resulting benefits include:

  • Lower visible complexity for administrators

  • Lower administrative overhead and reduced TCO

  • Lower risk through limited opportunities for wrong decisions

  • Increased availability

  • Increased predictability

  • Improved utilization through shared resources

Essential qualities like availability that were once engineered on a per-server or per-application basis become an inherent quality of the virtualized compute infrastructure and are applied across the entire data center. Furthermore, as the requirements on a particular business application change, resources can be automatically assigned or removed without manual intervention.
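The conversion from high-level service requirements into node counts and pool assignments can be pictured with a short sketch. This is not the N1 Grid software's actual interface; the requirement fields, pool structure, and sizing rule below are illustrative assumptions only.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceRequirement:
    """Hypothetical high-level description of what a service needs."""
    name: str
    peak_load: int          # for example, requests per second
    per_node_capacity: int  # load one node can absorb
    spare_nodes: int = 1    # headroom so the service can recover from a failure

@dataclass
class ComputePool:
    """Sketch of a shared pool from which virtualized nodes are drawn."""
    free_nodes: list = field(default_factory=lambda: [f"node{i}" for i in range(16)])
    assignments: dict = field(default_factory=dict)

    def provision(self, req: ServiceRequirement) -> list:
        # Convert the high-level requirement into a node count,
        # including spare capacity for recovery in the event of failure.
        needed = -(-req.peak_load // req.per_node_capacity) + req.spare_nodes
        nodes, self.free_nodes = self.free_nodes[:needed], self.free_nodes[needed:]
        self.assignments[req.name] = nodes
        return nodes

    def release(self, name: str) -> None:
        # Return nodes to the shared pool when demand drops.
        self.free_nodes.extend(self.assignments.pop(name, []))

pool = ComputePool()
print(pool.provision(ServiceRequirement("web-tier", peak_load=900, per_node_capacity=250)))
# ['node0', 'node1', 'node2', 'node3', 'node4']  -> 4 nodes for the load plus 1 spare
```

The point of the sketch is only that administrators state requirements while the pool logic decides node counts and placement, which is why visible complexity and the opportunity for wrong decisions both shrink.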

Compute virtualization will advance significantly over time. Greater emphasis on integrated data center solutions can enable seamless deployments for IT managers. Current virtualization technologies continue to expand their support for more heterogeneous environments, and the efficiency of the data center continues to improve. Finally, overall maturity leads to more robust tools that deliver higher levels of virtualization performance and ease of use for data center operators and administrators.

Network Components

Data center networks are an increasingly critical part of service delivery. No longer does the network sit in the background. Today, load balancers, firewalls, and other network components are integral parts of the system. Because of their increasingly important role within the data center, networks are growing more complex, and services will not work without a high-quality network infrastructure.

New demands are placed on the network through automation and virtualization of systems. Over the last two decades, changes in the network typically required changes in physical cabling. With N1 Grid solutions, the data center is "wired once" with the ability to soft-cable systems based on the services they provide and their requirements.

The N1 Grid software can use VLAN technology to change server network connectivity. For example, the N1 Grid PS software uses a wiring markup language (WML) to document the layout of the data center network. When a server needs to change to a new subnet, the provisioning server software can automatically change the network configuration on the switches.
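The following sketch illustrates the "soft-cabling" idea: a server moves to a new subnet by changing the VLAN membership of its switch port rather than by re-cabling. The wiring-map structure and the push_switch_config() call are illustrative assumptions, not the actual N1 Grid PS wiring markup language or switch interface.

```python
# Minimal sketch of soft-cabling a server onto a new subnet.

WIRING_MAP = {
    # server          (switch,     port)  -- documented once, when the cable is run
    "app-server-07": ("switch-a", 12),
}

VLAN_FOR_SUBNET = {
    "10.1.20.0/24": 120,
    "10.1.30.0/24": 130,
}

def push_switch_config(switch: str, port: int, vlan: int) -> None:
    # Stand-in for the provisioning software's call to the switch.
    print(f"{switch}: set port {port} to untagged VLAN {vlan}")

def move_server_to_subnet(server: str, subnet: str) -> None:
    switch, port = WIRING_MAP[server]              # the "wired once" record
    push_switch_config(switch, port, VLAN_FOR_SUBNET[subnet])

move_server_to_subnet("app-server-07", "10.1.30.0/24")
# switch-a: set port 12 to untagged VLAN 130
```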

When servers are installed, they are no longer tied to fixed IP addresses during the installation. Instead, IP addresses are provisioned dynamically with the server. Although IP addresses were traditionally tied to a physical port, network virtualization enables them to be assigned dynamically to networks.

VLANs have become commonplace in the data center. Soon, servers will use other network technology to enhance their flexibility and capabilities. VLAN tagging (IEEE Standard 802.1Q) enables network devices (servers, switches, and firewalls) to insert a tag in the frame header that identifies which network the traffic belongs to. This requires additional security measures to protect the communication, but it is possible. This increased flexibility enables servers to communicate on many different networks as needed or allowed, and reduces the number of network configuration changes.
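For concreteness, the 802.1Q tag itself is a 4-byte field inserted into the Ethernet frame header: a Tag Protocol Identifier (0x8100) followed by priority, drop-eligible, and VLAN ID bits. The short sketch below builds that tag; it is a standalone illustration, not part of any N1 Grid product.

```python
import struct

def vlan_tag(vlan_id: int, priority: int = 0, dei: int = 0) -> bytes:
    """Build the 4-byte IEEE 802.1Q tag inserted into an Ethernet frame
    header after the source MAC address."""
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID must fit in 12 bits")
    tpid = 0x8100                                    # Tag Protocol Identifier
    tci = (priority << 13) | (dei << 12) | vlan_id   # Tag Control Information
    return struct.pack("!HH", tpid, tci)

# A server sending on VLAN 20 carries this tag in every frame; switches
# use the VLAN ID to decide which logical network the frame belongs to.
print(vlan_tag(20).hex())   # '81000014'
```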

Storage Components

Storage components represent the most prevalent issues with the infrastructure: cost and complexity. Even before the proliferation of server and network components got out of control, storage administrators grappled with managing the explosion of storage components in the environment. Although the industry is moving to larger and larger individual storage devices, the growth and importance of online data has far outpaced any hardware trends. The increasing demand for more and more online data has driven the cost and complexity of the storage environment to incredible levels.

Because the storage environment grew overly complex earlier than the compute and network infrastructure did, more management advancements are available in the industry. The primary example is the widespread move to storage area networks (SANs). While the adoption of SANs is a step in the right direction, there is still significant opportunity for optimization of the storage infrastructure.

Infrastructure storage virtualization provides very specific implementation opportunities and is the next step in simplifying the I/O stack. It is the processing structure that connects applications to data on storage. Just as network attached storage (NAS) provided an abstraction layer between the application and the logical data, and SANs enabled the separation of physical from virtual storage resources, virtualization enables the separation of logical from virtual storage resources and further simplifies the management of storage.

Infrastructure storage virtualization enables the following:

  • Dynamic assignment of LUNs

  • Dynamic expansion of LUNs

  • Mapping of LUNs across heterogeneous storage devices

  • Snapshots

  • Heterogeneous remote copies

  • Storage firewalls

  • RAID

In addition, virtualization promises "virtual utilization" of storage beyond 100 percent by charging users for a "virtual" amount of storage but allocating only a portion of it as it is required. Some virtualization vendors insist that high-end RAID systems are no longer needed after virtualization engines are installed. This assertion is similar to the arguments made when RAID technology was introduced. RAID originally stood for redundant array of inexpensive disks. Because RAID used parity striping to protect data against a disk failure, it was argued that high-availability disks could be replaced with low-cost commodity disks because the data would be protected. The flaw in this argument was that performance degraded when a disk failed, and an outage was required to replace the failed disk and rebuild the parity group. RAID also failed to address other factors that would ensure high-availability and high-performance data access. It was not long before the acronym RAID came to mean redundant array of independent disks. Problems similar to those of RAID with commodity disks apply to this virtualization argument.
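The "virtual utilization" idea can be made concrete with a short sketch of thin provisioning: users are promised virtual capacity, but physical blocks are consumed only as data is written. The class, pool size, and numbers below are assumptions for illustration only.

```python
class ThinPool:
    """Minimal sketch of virtual utilization beyond 100 percent."""

    def __init__(self, physical_gb: int):
        self.physical_gb = physical_gb
        self.allocated_gb = 0          # physical space actually in use
        self.provisioned_gb = 0        # virtual space promised to users

    def provision_volume(self, virtual_gb: int) -> None:
        # Promising capacity costs nothing physically (yet).
        self.provisioned_gb += virtual_gb

    def write(self, gb: int) -> None:
        # Physical space is consumed only as data is actually written.
        if self.allocated_gb + gb > self.physical_gb:
            raise RuntimeError("physical pool exhausted; add storage")
        self.allocated_gb += gb

    def virtual_utilization(self) -> float:
        return self.provisioned_gb / self.physical_gb


pool = ThinPool(physical_gb=1000)
pool.provision_volume(800)
pool.provision_volume(600)          # 140 percent "virtual utilization"
pool.write(300)                     # only 300 GB physically consumed
print(pool.virtual_utilization())   # 1.4
```

The sketch also hints at the caveat in the paragraph above: if writes outpace physical capacity, the pool is exhausted and someone must add real storage, just as commodity RAID still needed spindles replaced and parity rebuilt.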

So, what does the future hold? Storage technology providers still seem to be at odds on how to realize the promises of virtualization. Some say that virtualization capabilities should reside at the server level. Some prefer the fabric level (such as SAN fabric switches or appliances), and some insist that virtualization should occur at the storage system level, built into storage arrays and devices. Others argue the benefits of "in-band" or symmetric virtualization, in which data and control information pass through the virtualization "engine," over "out-of-band" or asymmetric virtualization, in which only the control information passes through a virtualization "engine" that resides outside of the actual data path. Each of these approaches has its own set of associated disadvantages and limitations, including interoperability, management, and performance issues. In reality, virtualization must be a coordinated effort, shared among the server, SAN, and storage.

Virtualization must be addressed on two levels: an access level in which storage addresses are remapped and redirected to create a virtual pool of capacity, and a control or management level that can discover, provision, and maintain the data path between the application and the storage.
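The access level, remapping and redirecting storage addresses into a virtual pool, can be sketched as follows. The device names, extent sizes, and data structure are assumptions chosen for illustration, not any particular product's layout.

```python
# Sketch of access-level virtualization: a virtual LUN's block addresses
# are remapped onto extents that live on different physical devices.
from typing import List, Tuple

Extent = Tuple[str, int, int]   # (physical device, start block, length)

class VirtualLun:
    def __init__(self, extents: List[Extent]):
        self.extents = extents

    def remap(self, virtual_block: int) -> Tuple[str, int]:
        """Redirect a virtual block address to (device, physical block)."""
        offset = virtual_block
        for device, start, length in self.extents:
            if offset < length:
                return device, start + offset
            offset -= length
        raise ValueError("block beyond the end of the virtual LUN")

# A virtual LUN concatenated from space on two different arrays.
lun = VirtualLun([("array-a", 0, 1_000_000), ("array-b", 500_000, 2_000_000)])
print(lun.remap(1_200_000))   # ('array-b', 700000)
```

The control or management level is everything around this table: discovering the devices, building the extent list, and maintaining the data path as devices are added or retired.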

Virtualization Examples

The following virtualization examples show you how N1 Grid software products can be used to virtualize your environment.

N1 Grid Provisioning Server

Sun's N1 Grid PS solution radically transforms the dismal economics of computing. With the N1 Grid PS software, you can transform traditional computing resources to create a centralized pool of IT resources (for example, servers, storage, firewalls, and load balancers) that can be repurposed within minutes using a web browser. With minimal change to the existing technology within the enterprise, the N1 Grid PS connects all the disparate systems in a data center and does the following:

  • Automates data center tasks, reducing costs and capital expenditures

  • Enables rapid and efficient adaptation to changing requirements and enterprise pressures, increasing productivity and agility

  • Optimizes resource usage across the data center, increasing availability and reducing the costs of labor and equipment

Fully integrated with an organization's computing resources, the N1 Grid PS software creates an infrastructure fabric (i-fabric) that is a centrally managed, device-agnostic, flexible, and repurposable set of computing resources. The software provides control over this infrastructure, enabling the creation and management of logical and secure subsets of computing resources into server farms. With the N1 Grid PS software, the infrastructure is wired once, following a precise and repeatable design, while the resources are logically reconfigured as often as needed to satisfy changing enterprise needs.

The N1 Grid PS software provides a comprehensive automation solution that enables the design, configuration, deployment, and management of multiple independent and secure server farms from an intuitive HTML-based user interface (FIGURE 8-2). Using the interface, you can make data center operations more efficient by automating labor-intensive and error-prone tasks.

Figure 8-2. N1 Grid Provisioning Server User Interface


N1 Grid Data Services Platform

Storage virtualization technology traditionally has resided in the intelligent controllers of a storage array, such as the Sun StorEdge T3 array. These array controllers provide volume management functions like RAID functionality, disk striping, and disk partitioning. The controllers present the host with a virtualized logical unit number (LUN).

For example, two virtual LUNs can be created across the eight disks in a T3 array, and a host would see only two LUNs (virtual disks); a sketch of this mapping follows the list below. The benefits of the virtualization are:

  • Increased utilization by sharing large disk drives between multiple hosts

  • Decreased host processing by off-loading RAID functions from the host to the array controllers

  • Increased performance

  • Decreased administrative complexity
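The eight-disk example can be pictured with the following sketch of how a controller might resolve a virtual block address on one of the two LUNs. The stripe geometry and block sizes are generic striping assumptions, not the T3 controller's actual layout.

```python
# Sketch: an array controller presents two virtual LUNs, each striped
# across four of the array's eight physical disks.

DISKS_PER_LUN = 4
STRIPE_BLOCKS = 128   # blocks per stripe unit (illustrative value)

def resolve(lun: int, virtual_block: int) -> tuple:
    """Map (LUN, virtual block) to (physical disk, block on that disk)."""
    stripe, offset = divmod(virtual_block, STRIPE_BLOCKS)
    disk_in_lun = stripe % DISKS_PER_LUN
    disk = lun * DISKS_PER_LUN + disk_in_lun        # disks 0-3 -> LUN 0, 4-7 -> LUN 1
    block_on_disk = (stripe // DISKS_PER_LUN) * STRIPE_BLOCKS + offset
    return disk, block_on_disk

# The host simply reads block 1000 of "LUN 1"; the controller picks the disk.
print(resolve(1, 1000))   # (7, 232)
```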

Currently, the virtualization technology is moving from array controllers to storage switches. These virtualization switches enable users to create virtual volumes from many different types of storage devices, such as arrays and JBODs from different vendors. For example, you can make a virtual volume (LUN) that comprises disk space from a T3 array and an EMC array and present it to a host.

The following are the advantages of providing virtualization features in a switch:

  • Increased utilization by sharing different sizes of disk arrays from different vendors

  • Decreased administrative complexity at the SAN level

  • Increased availability and scalability

  • Ability to provide additional services like data migration and backup of virtual volumes

The PSX-1000 is the first Sun storage product that provides storage virtualization at the SAN (switch) level. The PSX-1000 can have a maximum of 32 ports that can be connected to as many as 256 storage devices and 128 servers. As many as 256 virtual volumes can be created from across the storage devices. After you create the virtual volume, you can export (map) it to one or more servers as a virtual LUN. Currently, only Sun StorEdge devices are supported. Multi-vendor storage devices will be supported in the future.

The PSX-1000 can be configured in a fully redundant chassis configuration. All hardware components are hot swappable. The PSX-1000 features include:

  • LUN mapping and masking

    The virtual volumes (LUNs) that are created across the storage devices need to be mapped to servers. The mapping makes a LUN visible to the mapped server; for other servers, the LUN is not visible and cannot be accessed. (A minimal sketch of this behavior follows the feature list.)

  • Secure virtual storage domains (SVSD)

    Secure virtual storage domains are logically partitioned domains that share the entire storage resource. You can assign storage resources (that is, Fibre Channel ports) to each SVSD so that servers can access the resources belonging to the same SVSD.

  • Volume management

    The virtual volumes that are created from across many storage devices can be managed online without disrupting I/O activity. For example, the volumes can be expanded or contracted without having to shut down access to the storage devices. You can create concatenated volumes (of many storage devices) and striped volumes (across multiple storage devices).

  • Volume snapshot

    Up to eight snapshots can be taken of the virtual volumes. The snapshots can be taken online without having to disrupt access to the volume itself.

  • Direct access volume

    With this feature, a disk array with existing data can be connected transparently to the PSX-1000, and the existing volumes from the disk array can be exported directly to the hosts. Thus, the data from older storage devices can be migrated into a virtualized environment.
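The LUN mapping and masking behavior described in the first feature above amounts to a visibility table: a LUN is presented only to the servers it has been mapped to. The sketch below illustrates that rule; the server and LUN names, and the table itself, are assumptions and not the PSX-1000's configuration interface.

```python
# Illustrative sketch of LUN mapping and masking.

LUN_MAP = {
    "vlun-001": {"db-server-1", "db-server-2"},   # servers this LUN is mapped to
    "vlun-002": {"web-server-1"},
}

def visible_luns(server: str) -> list:
    """Return the virtual LUNs this server is allowed to see."""
    return [lun for lun, servers in LUN_MAP.items() if server in servers]

def can_access(server: str, lun: str) -> bool:
    # Masking: a server not in the map simply does not see the LUN.
    return server in LUN_MAP.get(lun, set())

print(visible_luns("db-server-1"))              # ['vlun-001']
print(can_access("web-server-1", "vlun-001"))   # False
```

A secure virtual storage domain extends the same idea from individual LUNs to groups of Fibre Channel ports and the servers allowed to use them.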


