Infrastructure Virtualization Impact


What does virtualization mean for how you design and deliver your services, and how do these concepts currently function in your existing environment? Is there more value in focusing design efforts on providing the capability for 8000 simultaneous web connections per second, or in responding to a business unit that asks for a specified number of boxes that you might, or might not, have the capacity or experience to manage? As long as the servers are secure and reliable, do you really care where the web service runs? Shouldn't business economics and available capacity determine whether the HTTP service runs on blades, on four-CPU boxes, or on part of a Sun Fire 15K server?

One goal of the N1 Grid vision is to provide a controlled environment with the ease of provisioning that enables service delivery to occur. The environment provides the ability to provision a service, to grow and shrink capacity during use, and eventually, to decommission a service without having a single discussion about the servers, switches, or storage on which it has been running.

Part 1 of this book introduced the historical forces that created the current trends in network computing. Although virtualization has a long history, it is exactly that history that requires care and rigor when virtualization is discussed. People tend to view virtualization from their own perspective. TABLE 4-3 maps the data center layers against various community stakeholders; from each stakeholder's perspective of the IT world, certain layers of the system should or could be virtualized away.

Table 4-3. View of Community Stakeholders

SunTone Layer  | CIO | Ops. Mgr. | IT Director | App. Dev. | Sys. Admin. | OS Eng. | Network Eng. | Break Fix
---------------+-----+-----------+-------------+-----------+-------------+---------+--------------+----------
Application    |     |           |             |           |             |         |              |
Virtual        |     |           |             |           |             |         |              |
Upper          |     |           |             |           |             |         |              |
Lower          |     |           |             |           |             |         |              |
Hardware       |     |           |             |           |             |         |              |
Infrastructure |     |           |             |           |             |         |              |

Keeping these viewpoints in mind, this section covers several aspects of N1 Grid system virtualization. First, it introduces virtualization in a historical context. Next, it discusses the elements of virtualization (compute, storage, and networking), as well as various combinations of them. Finally, it places these virtualized elements in context by introducing options whose benefit you can measure when applying virtualization in your environment.

Historical Impact

The process of mapping actual (that is, physical) IT resources to simulated (that is, virtual) resources is an idea that has been around the computer industry since the beginning of computing. This approach to problem solving is often summarized by the saying that "all computer problems can be solved by adding a layer of abstraction." Although this is a sound methodology when used appropriately, experience shows it is in no way a panacea. Abstractions, also commonly referred to as virtualizations, can cause more problems than they solve if applied inappropriately. Many application performance issues result from poor implementations of virtualization. Constructive virtualization, using abstractions in an appropriate and ultimately productive manner, has enabled the computer industry to make great strides in overcoming complexity and integration issues. Constructive virtualization can and will continue to be a valuable methodology for solving computer problems.

Successful and widely understood virtualizations exist throughout the IT environment. Virtual memory and VLANs are two examples that actually use "virtual" to describe themselves. Many other abstractions are so prevalent that their virtualization characteristics are almost overlooked. The prime example of a highly productive, yet nearly invisible, virtualization is the modern operating system. Applications do not decide how to schedule time on a processor or how to access data from memory; those are the jobs of the operating system. Processor and memory access are highly important and non-trivial tasks in computing, yet almost no one worries about their intricacies. The operating system has so effectively abstracted these tasks that they have become almost invisible. Covert virtualization, an abstraction that is highly productive yet nearly invisible, is the ultimate success of the abstraction process. Covert virtualization is the sign that "the problem is truly solved."

As with the implementation of all computer abstractions, a transition in thinking must take place. These distributed, virtualized network computing ideas change how you think about architectures, service design, application development, and certainly application and service deployment. In the meantime, the daily work of IT operators and administrators centers on the compute, network, and storage elements of the present. These elements are not going to disappear any time soon, so they must remain the current focus of achieving virtualization. Therefore, the current methods of abstraction that enable compute, storage, and network virtualization strategies can be discussed without overemphasizing a future state in which virtualization of all of the layers of the stack is fully realized. It is important not to get lost in covert virtualization without first working through constructive virtualization.

Elements of Infrastructure Virtualization

The main purpose of infrastructure-level virtualization is to provide an abstracted view of a collection of discrete compute, network, and storage resources for the purpose of hiding complexity and improving flexibility and productivity. An important beginning to the virtualization process is to recognize that a series of components could be better managed if they are abstracted. As these abstractions are crafted in an appropriate and ultimately productive manner, the predominant interactions remain with the individual components. In this way, virtualization also provides both an opportunity and the means to abstract away complexity. One way this can be accomplished is by analyzing and deciding to expose only those interfaces or operational "knobs" (tuning) that are absolutely necessary. The abstraction is first thought of as something layered on top of the more familiar individually managed components.

Most IT operators and administration personnel perform complicated operating system installs and manual network and storage configurations. Their actions are focused on the compute, network, and storage elements of the present. As new levels of virtualization are introduced into the IT environment, the prevalent interaction remains with the non-abstracted components. The following sections discuss each of these elements. Design aspects, such as the secure partitioning of virtualized resources, and implementation aspects, such as a well-crafted, executed, and verified policy for when and how such partitions can be made, are discussed in other sections of this chapter.

Compute Element Virtualization

Compute-element virtualization turns a potentially heterogeneous pool of CPU resources into a provider of the horsepower to run an application. The canonical size or type of a unit of compute power is limited only by the ability of the virtualization component to access and control it. For example, racks of blades often contain their own virtualization suite that provides a single point of control over the blade CPU resources and a stored application image that can run on any available blade unit. Individual servers in a data center, however, usually vary in the number and type of CPUs, memory, internal disks, and network connectors they contain, so the virtualization software must be flexible enough to understand these differences and to accommodate, control, and provision each resource type.
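The following is a minimal Python sketch of how a virtualization layer might describe a heterogeneous compute pool and select a free unit that satisfies a capacity request. The class, field, and server names are hypothetical illustrations, not part of any N1 Grid interface.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ComputeResource:
    """Hypothetical descriptor for one unit of compute capacity."""
    name: str
    cpu_count: int
    cpu_type: str          # for example, "UltraSPARC III" or "x86"
    memory_gb: int
    internal_disks: int
    network_ports: int
    in_use: bool = False

def find_capacity(pool: List[ComputeResource],
                  min_cpus: int,
                  min_memory_gb: int,
                  cpu_type: Optional[str] = None) -> Optional[ComputeResource]:
    """Return the first free resource that satisfies the request, or None."""
    for resource in pool:
        if resource.in_use:
            continue
        if resource.cpu_count < min_cpus or resource.memory_gb < min_memory_gb:
            continue
        if cpu_type is not None and resource.cpu_type != cpu_type:
            continue
        return resource
    return None

# Example: a pool mixing a blade and a larger domain.
pool = [
    ComputeResource("blade-07", 2, "x86", 4, 1, 2),
    ComputeResource("sf15k-domain-a", 16, "UltraSPARC III", 64, 4, 8),
]
print(find_capacity(pool, min_cpus=4, min_memory_gb=32))
```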

Network Virtualization

Network virtualization provides the capability to physically wire elements into the network switches once so that those elements can be used many times, in different ways, without being rewired. For example, the switches can be configured so that a server acts at one time as a web server receiving Internet requests in the demilitarized zone (DMZ) VLAN and at another time as a database server in the well-protected back-end data layer VLAN. This "soft cabling" treats the network as a sharable resource pool that is allocated as needed to match dynamic business requests and configured as needed to create security domains containing the compute, operating environment, application, and storage layers that deliver the service.
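As an illustration only, the sketch below models "soft cabling" as a mapping from switch ports to VLANs that can be changed in software. The VLAN IDs, port names, and methods are assumptions for this example and do not correspond to any particular switch's interface.

```python
# Hypothetical model of "soft cabling": a server's switch port is wired once,
# and its role changes by reassigning the port to a different VLAN in software.

DMZ_VLAN = 100        # assumed VLAN ID for the DMZ web tier
DATA_VLAN = 300       # assumed VLAN ID for the protected data tier

class VirtualSwitch:
    def __init__(self):
        self.port_to_vlan = {}          # physical port -> VLAN ID

    def connect(self, port: str, vlan: int) -> None:
        """Wire once: record the port and its initial VLAN assignment."""
        self.port_to_vlan[port] = vlan

    def reassign(self, port: str, vlan: int) -> None:
        """'Soft cabling': move the port to another VLAN without rewiring."""
        if port not in self.port_to_vlan:
            raise KeyError(f"port {port} is not physically connected")
        self.port_to_vlan[port] = vlan

switch = VirtualSwitch()
switch.connect("ge-1/0/7", DMZ_VLAN)    # server first serves web requests in the DMZ
switch.reassign("ge-1/0/7", DATA_VLAN)  # later, the same cabling serves the data tier
```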

Storage Virtualization

Storage virtualization is designed to unify storage management and to provide the following features to the storage space:

  • Pooling of heterogeneous storage resources

  • Secure provisioning

  • Simplification of management

Storage virtualization enables you to rapidly create, expand, and reassign virtual storage resources without physically reconfiguring storage arrays. With role-based access, the storage management software can:

  • Pool, partition, concatenate, or stripe attached storage of any size or performance characteristic

  • Create data platform volumes that span multiple physical storage devices

  • Safely house multiple departments or applications on a single infrastructure

Storage devices are connected to the data platform and present their logical units (LUNs) to it; the data platform divides the LUNs into partitions. Administrators build virtual volumes from those partitions, and the volumes are presented to hosts. The option of secure partitioning enables an administrator to allocate ports or physical connections to a single storage domain, separating it from and preventing access by other hosts or domains. The servers mount the volume LUNs as Fibre Channel targets, or use a distributed storage lookup service, to access the completely virtualized storage resources.
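The Python sketch below models the layering just described: LUNs divided into partitions, and partitions combined into virtual volumes presented to hosts. The class names, array names, and sizes are hypothetical and are meant only to make the data model concrete.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LUN:
    """A logical unit presented by a storage device to the data platform."""
    device: str
    lun_id: int
    size_gb: int

@dataclass
class Partition:
    """A slice of a LUN carved out by the data platform."""
    lun: LUN
    size_gb: int

@dataclass
class VirtualVolume:
    """A volume built from partitions and presented to a host."""
    name: str
    partitions: List[Partition] = field(default_factory=list)

    def add(self, partition: Partition) -> None:
        self.partitions.append(partition)

    @property
    def size_gb(self) -> int:
        return sum(p.size_gb for p in self.partitions)

# Example: a volume that spans partitions on two physical arrays.
lun_a = LUN("array-1", 0, 200)
lun_b = LUN("array-2", 3, 200)
payroll_volume = VirtualVolume("prod_payroll_data_01")
payroll_volume.add(Partition(lun_a, 100))
payroll_volume.add(Partition(lun_b, 100))
print(payroll_volume.size_gb)   # 200
```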

Implementation Combinations

There are several possible implementation combinations. The first of them is doing no virtualization at all. After you read about other N1 Grid software capabilities (for example, application provisioning, observability, policy, and automation) in the subsequent sections of this chapter, you might find that some of those capabilities solve higher-priority issues in your data center. Among the reasons you might choose not to virtualize as your first N1 Grid solution activity are:

  • You might want to specify a particular realization of your service.

  • You might prefer to specify that some distributed service components be explicitly deployed together.

  • You lack security policies, processes, and procedures for defining, sharing, and deploying the decomposed SunTone AM layers.

  • Your storage, network, and server organizations are not ready to work together to create and manage this type of resource pool.

  • The rate of change in your application layer greatly exceeds that in your operating system layers, so more business value can be extracted from automating the provisioning and mobility of applications and services.

In these instances, virtualization provides little additional value to a service or a service deployment.

In contrast, there are areas in which combining the virtualized compute, network, and storage elements might facilitate very useful activity, for instance:

  • Connecting the server into a network fabric (hardware layer)

  • Enabling the server to boot over the network (lower layer)

  • Facilitating the reception of application code on top of the loaded operating system (lower and upper layers)

  • Isolating elements quickly and easily from the rest of the network in the event of a compromise

  • Facilitating network attached file system mounting (lower and upper layers)

  • Placing the network into or out of a load-balanced pool (lower layer)

  • Connecting the compute resource into a cluster group with other compute resources (lower and upper layers)

  • Providing application-level quality of service (application layer)

In addition to its usual role as facilitator of communication between tiers, the network is:

  • Facilitating the establishment of the stack on a formerly empty server

  • Acting as the central entity to connect a single device into the load balancing, cluster, and foundational (LDAP, DNS, and NTP) elements that the device might require

  • Enabling connectivity between the N1 Grid system and other available service grids, facilitating what can truly be a completely distributed system of interconnected services

The previous points demonstrate how the network is the physical and logical organizing principle of the N1 Grid vision. Most of the technology is in place today to enable network element virtualization. The hurdles to successful implementation are generally in the smooth blending of server and network operational activities and the data and security models (for instance, which IP address to pull from the pool or which VLAN to associate with that IP address) to centralize and simplify the compute, network, and storage virtualization tasks. If you choose virtualization as your first N1 Grid software task, the following section discusses how you can prepare to use some of these capabilities.

Preparation for Infrastructure Virtualization

This section outlines some of the implications of virtualization that you should prepare to address from a business, architecture, and operations perspective.

Common Namespaces

To effectively use a virtualized environment, you need to develop naming conventions for servers, applications, storage, and network resources that support the types of production operations currently performed in your data center. Create a convention that assigns unique and transportable names to file systems, applications, host IDs, and network ports: to leverage virtualization, naming conventions must not be tied to servers, domains, network or host bus adapters, or other fixed components. The names must be unique because there is no advance knowledge of where these elements might be deployed. The names must be transportable to avoid namespace conflicts during element mobility (for example, it would be unfortunate if you tried to deploy two file systems with the same name on a server).

Remember to consider all of the possibilities: the namespace might need to support production life cycle mobility, in which a service is initially instantiated and then promoted through the development, test, and production environments. Another type of mobility to support might be movement within a life cycle phase (for example, a component might move between several machines within the test environment). Your use cases and operational activities can guide you in thinking about this aspect of virtualization.

You must also consider the mobility of accompanying monitoring and management components so that they follow a particular deployable entity when provisioned or moved. The naming should also enable easy integration with other parts of the base infrastructure (for example, DNS or LDAP), both when a deployable entity is first provisioned and after a component has moved.

The SunTone AM layers and tiers provide a means to begin to decompose applications and to consider the naming and virtualization implications at a useful level of granularity. For example, naming conventions could include a finite number of defined elements, such as:

  • Life cycle phase (development, testing, quality assurance, and production)

  • Business application name (payroll, ldap, foo, and bar)

  • Tier name (client, presentation, and business), with a numbering scheme or range that is appropriate for repeated units of capability (for instance, ten web server units might be in a load-balanced pool)

  • Layer in that tier (hardware, operating system, load balancing, web, cluster, and applications)

You should create names that are appropriate for your environment. The following are example naming formats:

  • dev_ldap_presentation_02_os

  • prod_payroll_data_01_cluster
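A minimal sketch of how such a convention could be enforced in code follows. The fields mirror the elements listed above, but the separator, abbreviations, and vocabularies are assumptions for this example, not a prescribed N1 Grid format.

```python
# Hypothetical helper that builds and validates names of the form
# <lifecycle>_<application>_<tier>_<unit>_<layer>, as in dev_ldap_presentation_02_os.

LIFECYCLES = {"dev", "test", "qa", "prod"}
TIERS = {"client", "presentation", "business", "data"}
LAYERS = {"hardware", "os", "lb", "web", "cluster", "app"}

def build_name(lifecycle: str, application: str, tier: str,
               unit: int, layer: str) -> str:
    """Assemble a unique, transportable name from the agreed elements."""
    if lifecycle not in LIFECYCLES:
        raise ValueError(f"unknown life cycle phase: {lifecycle}")
    if tier not in TIERS:
        raise ValueError(f"unknown tier: {tier}")
    if layer not in LAYERS:
        raise ValueError(f"unknown layer: {layer}")
    return f"{lifecycle}_{application}_{tier}_{unit:02d}_{layer}"

print(build_name("dev", "ldap", "presentation", 2, "os"))
# dev_ldap_presentation_02_os
print(build_name("prod", "payroll", "data", 1, "cluster"))
# prod_payroll_data_01_cluster
```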

Additional information for users or roles (root, oracle, admin, and ftp), groups, IP and DNS naming, and provider-consumer dependencies (for example, requires the Solaris 9 OS) can be applied, appended, and stored for retrieval as needed. Limiting the number of elements and naming choices is key to gaining control of the many possible combinations in the existing IT environment namespace. Just as important as organizing the existing space, however, is establishing a naming convention foundation that stops future sprawl. Getting application developers to use your mutually agreed-upon naming conventions for future application development will simplify the process going forward, whether you initially choose to implement virtualization or just start to prepare for it.

Solaris Container Model

The Solaris Resource Manager software provides the framework for working with N1 Grid Containers. With the Solaris Resource Manager, originally introduced in the Solaris 9 Operating Environment, system administrators can establish resource boundaries (also known as resource pools) for a specific application, eliminating competition for resources with other applications. System administrators can establish resource boundaries for CPUs, physical memory, swap space, and network I/O bandwidth, and use the Solaris Resource Manager to manage those resources.
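To make the idea of resource boundaries concrete, the sketch below models per-application CPU, memory, swap, and bandwidth limits and a simple oversubscription check. It illustrates the concept only; the class, fields, and example values are hypothetical and do not represent the Solaris Resource Manager interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ResourceBoundary:
    """Hypothetical resource boundary for one application."""
    application: str
    cpu_shares: int          # relative share of CPU time
    memory_cap_gb: float     # physical memory cap
    swap_cap_gb: float       # swap space cap
    net_bandwidth_mbps: int  # network I/O bandwidth cap

def within_capacity(boundaries: List[ResourceBoundary],
                    total_memory_gb: float) -> bool:
    """Check that the memory caps handed out do not oversubscribe the server."""
    return sum(b.memory_cap_gb for b in boundaries) <= total_memory_gb

pools = [
    ResourceBoundary("prod_payroll_data_01_cluster", cpu_shares=40,
                     memory_cap_gb=16, swap_cap_gb=8, net_bandwidth_mbps=400),
    ResourceBoundary("dev_ldap_presentation_02_os", cpu_shares=10,
                     memory_cap_gb=4, swap_cap_gb=2, net_bandwidth_mbps=100),
]
print(within_capacity(pools, total_memory_gb=32))   # True
```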

N1 Grid Containers deliver the technology for server virtualization. N1 Grid Containers isolate software applications or services using flexible, software-defined boundaries. Containers are software-based shells for running applications with a high degree of isolation from other applications running on the Solaris OS. Containers uniquely deliver resource, security, and fault isolation while running within a single instance of the Solaris OS, and they provide both manageability and efficiency. For example, if a fault occurs in a user-level process, the container boundary prevents the failure from propagating to other containers running in that operating environment. Containers also provide:

  • Software partitioning between zones

  • High resource utilization with multiple resource manager-controlled zones within an operating system

  • Repository for many small applications in a single operating system

  • Entire service life cycle on a single domain

  • Development, testing, staging, and production environments

FIGURE 4-3 illustrates the sub-CPU granularity of zones:

Figure 4-3. Sub-CPU Granularity of Solaris™ 10 Zones


Integrating with the Solaris Resource Manager and assigning unique project names to provisionable components can begin to solidify the mobile naming conventions and the metadata organization and hierarchy. This activity also prepares you for density and mobility, where the security and required capacity of these deployable entities are maintained as they are moved around. The naming conventions for a provider of N1 Grid Containers, and the conventions that describe the capabilities of providers and consumers so that a match can be made, are key areas of the N1 Grid virtualization process.

Until you implement the Solaris 10 OS and its security zones inside the operating system, you can create policies that use an "allowed with" and "not allowed with" attribute for each application and service in your data center. The following list contains policy examples for a mobile component:

  • Requires a clustered environment in which to register

  • Will run or will not run on Linux

  • Needs a server with a gigabit Ethernet network interface card (NIC)

  • Will run or will not run on a server tuned for Oracle9i

Coupled with chroot(1M), where appropriate and correctly implemented, these policies enable you to begin to control performance and secure coexistence of applications by including security as a part of the change management and move operation use cases. This work serves as a foundation for the eventual use of features in Solaris™ 10. Those features will be available through the presence of separate local superuser (root) passwords for each zone.
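A minimal sketch of such an "allowed with"/"not allowed with" policy check follows. The policy table, attribute names, and applications are hypothetical; the point is only to show how coexistence rules can be evaluated before a component is placed on a shared server or security zone.

```python
# Hypothetical coexistence policy: each application lists what it may not share
# a server (or security zone) with, and what features it requires from a host.

POLICY = {
    "web_frontend": {"not_allowed_with": {"payroll_db"}, "requires": {"gigabit_nic"}},
    "payroll_db":   {"not_allowed_with": {"web_frontend"}, "requires": {"oracle9i_tuning"}},
    "ldap_replica": {"not_allowed_with": set(), "requires": set()},
}

def can_place(app: str, host_apps: set, host_features: set) -> bool:
    """Return True if app may be placed on a host already running host_apps."""
    rules = POLICY.get(app, {"not_allowed_with": set(), "requires": set()})
    if rules["not_allowed_with"] & host_apps:
        return False                       # violates "not allowed with"
    for other in host_apps:                # symmetric check for current residents
        if app in POLICY.get(other, {}).get("not_allowed_with", set()):
            return False
    return rules["requires"] <= host_features

print(can_place("web_frontend", {"ldap_replica"}, {"gigabit_nic"}))   # True
print(can_place("web_frontend", {"payroll_db"}, {"gigabit_nic"}))     # False
```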

Data Security

Mobility of applications between machines often requires that the storage and file systems follow applications to a new location. Mobility between machines and new or moving connection points must not violate existing security or QoS requirements.

Security policy combined with trust models and risk profiles must provide clear guidance as to when or if it is appropriate to move applications or data between machines, how such a move should be secured, as well as any rules for application or data co-existence on the same machine or security zone. Individual use cases (for example, moving dev_ldap_presentation_02_os from one server to another) can guide the considerations regarding the people, processes, and tools that need to support the virtualization and provisioning activities in your environment.

Foundation Services

Foundation services need to support virtualized environments in which the eventual location of service components is not defined in advance. For many services (for example, identity and web services), this is accomplished by publishing a virtual interface that fronts a load-balanced set of components delivering the service, or by advertising a lookup service that responds to and directs or redirects requests for a web service. For services with virtual interfaces, the location of the foundation service must be known or obtained when a deployable entity is moved, provisioned, or reprovisioned. Many distributed service types handle this mobility with their native methods of discovery.

Other foundation services require notification when an object is moved from a virtualized resource. Observability systems and DNS are examples of foundation services. New or updated DNS information might need to be distributed when an application moves to a different virtualized server that has a different IP address than the original server. Agent-based instrumentation tools or other functionality that might have license and host ID constraints require additional pre-move or post-move activities to enable foundation services to properly follow the addition or removal of services.

Service use cases that move, add, or remove service components should include substeps that test for the presence of these foundation services. For the data center environment to be completely virtualized, you must ensure that the following are in place:

  • Unique naming conventions for the virtualized resources used as foundation services, such as DNS, observability systems, and the Solaris Resource Manager

  • Licensing model to support mobile applications that could reside on different compute, storage, and network types, sizes, and operating environments

  • Means to catalog and store the information that represents the separation of applications from their compute, storage, and network resources

  • Rules for resource consumers to resolve contention for the same resource provider

  • Security policies for the provisioning dependencies and constraints of mobile services into these virtualized services (for example, no web server front-end service should be put into a cluster container)

  • Understanding of the virtualization, description requirements, and representation for a particular provisionable layer of the service stack

  • Measurements of the service capability of a particular hosting resource in the service stack

  • Means to match consumers to providers of the service they require
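The sketch below illustrates how a move use case might string such checks together as explicit substeps. Every function is a hypothetical placeholder for a site-specific procedure (naming validation, license verification, DNS update); none of them represents an N1 Grid interface.

```python
# Hypothetical skeleton of a "move component" use case with foundation-service
# substeps. Each helper stands in for a site-specific procedure.

def check_naming(component: str) -> bool:
    """Substep: the component name must follow the agreed convention."""
    return component.count("_") == 4

def check_license(component: str, target_host: str) -> bool:
    """Substep: licensing must permit the component to run on the target."""
    return True     # placeholder: consult the licensing model

def update_dns(component: str, new_ip: str) -> None:
    """Substep: publish the component's new address."""
    print(f"DNS updated: {component} -> {new_ip}")

def move_component(component: str, target_host: str, new_ip: str) -> None:
    if not check_naming(component):
        raise ValueError(f"{component} does not follow the naming convention")
    if not check_license(component, target_host):
        raise RuntimeError(f"license constraints block the move to {target_host}")
    # ... provision the component on target_host ...
    update_dns(component, new_ip)

move_component("dev_ldap_presentation_02_os", "blade-07", "10.1.2.7")
```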

This section demonstrated the need for a common namespace and discussed considerations for a container model, data security, and foundation services. You can implement the N1 Grid software and receive substantial business value without virtualization, but to fully implement a mobile, flexible N1 Grid solution, you should eventually implement virtualization. How much virtualization to implement, and when, is a choice you will need to make.


