Virtualization is the second step in the application optimization process.
Virtualize — Adding a level of abstraction above the provisioned elements of the infrastructure (for instance, the operating system) to provide a more flexible application deployment model
The previous chapter began its review of virtualization referencing the discussions from the introduction and preparing sections. This provided the appropriate context for optimization of the infrastructure through virtualization. The comment was made that the whole idea of designing and preparing for the N1 Grid architecture provides a new paradigm. And again, within that new paradigm, virtualization is one of the key components.
The virtualization focus in this chapter is specific to the application layer. As defined, application-level virtualization means adding a level of abstraction above the provisioned elements (that is, the operating system) of the infrastructure to provide the foundation of a more flexible application deployment model. It is important to review application-level virtualization as a separate and distinct part of the architecture. The functional and technical requirements are significantly different for infrastructure virtualization than they are for application-level virtualization.
Remember that application optimization, as the second component in the N1 Grid functional architecture, maps to the third and fourth layers (middleware and applications) of the SunTone AM. This mapping clearly differentiates the application level from the infrastructure level. The key differentiator is the presupposed existence of the operating system as a foundation. With the operating system (or comparable network or storage interface) being the foundational requirement, the application-level virtualization architecture has a multitude of new opportunities. An operating system-level dependency means there are practically no constraints driven by the underlying hardware infrastructure. Also, the upward focus on delivering applications as components of a service enables additional clarity of purpose. These thoughts, the "what and how" of application-level virtualization, involve adding a level of abstraction above the provisioned elements (that is, the operating system) of the infrastructure.
Before diving into the details of the what and how of application-level virtualization, it is also important to review the why. While the technology might be interesting, it is natural to question the value of trying to add yet another level of abstraction to an environment. Actually, answering the question of why is easy: application-level virtualization supports the key business drivers associated with reducing cost and complexity, as well as providing direct support for the concept of strategic flexibility. These two important components of enabling N1 Grid solutions are clear reasons why application-level virtualization is relevant.
Cost and complexity are key business drivers. In the previous chapter, the discussion of provisioning for infrastructure optimization described the significant cost savings from reducing the number of actively managed operating system instances. This is valuable because IT outsourcing contracts typically use the number of installed operating system instances as a major cost driver. Not every data center is outsourced, but the point generalizes: the number of operating system instances in any environment is a major cost driver. The IT industry commonly tracks the number of administrators per system, which makes administrators per operating system image a standard cost-efficiency metric. The desire is not only to reduce the number of managed operating system instances in the IT environment, but to reduce the number of operating system instances, period. Of course, this has to be done without sacrificing any level of service delivery to the business. The architecture of application-level virtualization can specifically address this challenge.
As discussed in the second step of the building-block architecture, the importance of application-level virtualization is its direct support of strategic flexibility. Strategic flexibility enables strategic business value to be achieved, based on the foundation of core IT efficiency. The goal of application-level virtualization is to provide the foundation for a more flexible application deployment model. That flexibility, coupled with core improved efficiencies, is the bridge to delivering strategic flexibility with application-level virtualization.
The functional architectural solution of application-level virtualization is covered in the following two sections. Product-specific examples are included. This functional solution is characterized by two common deployment methods: vertical scaling or horizontal scaling. The decision to deploy one method or the other is driven by the workload characteristics of the applications they will support. At the application level, these two methods of virtualization are quite similar functionally. However, the individual technical requirements that distinguish these methodologies are important to understand.
Vertical virtualization, as an example of application-level virtualization, optimizes vertically scaled environments. Vertical virtualization borrows its name from "vertical computing" (the common nomenclature for systems that typically have a large number of CPUs and scale by adding additional resources within the system). IDC also refers to this category of servers as "scale-up servers." In their IDC white paper, "Gaining Business Advantage with Scale-Up Servers: How Applications and Workloads Influence Cost-Effective Platform Decisions," Matthew Eastwood and Vernon Turner explain applications and workloads:
Vertical or scale-up servers and the workloads they support are very prevalent in the data center. Although they are a vital part of the overall infrastructure, they have been historically deployed in underutilized stovepipe configurations. One vertically scaled application is deployed on one server. To accommodate infrequent peak loads, the system is usually sized significantly larger than average loads would dictate. This oversized, one-application-per-server model creates a costly and generally inflexible environment. The functional architecture of achieving strategic flexibility dictates that the solution to this application-level problem is vertical virtualization.
Vertical virtualization is a shared environment model. Because vertically scaled servers can grow to support larger and larger applications, they can also be sized appropriately to support multiple instances of the same or even different applications. With the instance of the operating system being the main point of management for an individual server, and hence the main driver of cost, this model supports more applications with fewer operating system instances. Additionally, as various workloads are shared on a single operating system, the overhead required for peak loads can be balanced between applications, greatly reducing the need to oversize systems. A shared environment of multiple applications per instance of an operating system dramatically reduces the cost of ongoing operations, as noted by J. Phelps in "Workload Management for Server Consolidation" (Gartner, May 2002):
The other direct impact this shared environment has is its strategic business support. This model supports the rapid deployment of new applications because they can be installed on already existing servers. Supplemented by an IT acquisition model that stages modularly deployed, vertically scaled servers, this approach greatly reduces the time to market of new services. As business needs change, the applications that support them can be installed without waiting for the time it takes to acquire and build an independent server environment.
The optimal method of realizing vertical virtualization is through the concept of containers. Containers support a shared environment of many applications per server in a highly efficient manner. A container provides a completely isolated environment for an application, which is critical to ensure service availability on a system supporting numerous applications. Additionally, because containers are provided as an integral part of the operating system, they deliver that service without the excessive overhead that independent operating system instances would create.
Remember, reducing actively managed operating system instances is a key goal. This service has to be delivered in a manner that supports application containment, while reducing operating system overhead. Containers break the historical model of one application per server to deliver the business flexibility and cost savings that are an ever-increasing requirement in data centers. While the container is presented as a solution for vertical virtualization, it is possible that containers will provide an excellent solution for environments other than vertically scaled ones. Containers provide a great deal of flexibility to the IT environment, and that functionality can definitely be leveraged beyond the scope of what is presented here.
The discussions above focused on the functional requirements of vertical virtualization. To demonstrate an example of the technical requirements, a product-specific example is provided below. Before jumping to product examples, it is important to also review the functional requirements of application-level virtualization that optimizes horizontally scaled environments.
Horizontal virtualization, as an example of application-level virtualization, optimizes horizontally scaled environments. Horizontal virtualization borrows its name from "horizontal computing" (the common nomenclature for systems that typically have a single or small number of CPUs and scale by adding additional servers to provide additional capacity). IDC refers to this category of servers as "scale-out servers." Eastwood and Turner define horizontally scaled server applications and workloads as follows:
Horizontal or scale-out servers and the workloads they support are very prevalent in the data center. Their use exploded in recent years, particularly during the Internet build-out years of the late 1990s. Like vertical systems, they are a vital part of the overall infrastructure. Like vertically scaled systems, they have also encountered their share of historical problems. The common problem associated with this deployment methodology is server sprawl. Tens to hundreds to thousands of these servers might exist in an environment, and optimizing them for business services entails its own unique challenges. This often out-of-control environment is very hard to optimize for cost savings and can even be difficult to manage from a business flexibility perspective. The goal of horizontal virtualization is to directly address those issues.
Horizontal virtualization is a model with a goal to manage a number of servers as if they were one. Horizontal servers are typically deployed using numerous individual systems with similar application identities. Horizontal virtualization can leverage those similarities and build a management model in which the number of actively managed operating system instances, once again a major cost driver, is greatly reduced. Additionally, by highly leveraging the concepts of infrastructure optimization, one common operating system personality can be used across a multitude of application types, further driving down the number of actively managed operating system instances. This change in individual system management focus, through the abstraction of horizontal virtualization, directly impacts the bottom line of ongoing operational costs.
Like vertical virtualization, horizontal virtualization can directly impact the business by upleveling strategic support. This model supports the rapid flexing of the environment by leveraging an optimized infrastructure with a very minimal number of personalities to rapidly adjust the scale in the environment needed to support the business. A typical scenario might be a holiday rush demanding greater load capacity on a service's web front end. Web servers, which scale well horizontally, can be rolled out with quality and efficiency by leveraging this virtualization methodology.
The previous chapter on infrastructure optimization included a discussion on application provisioning. It stated that if the frequency and complexity of deployment was low and the granularity of change was high, then infrastructure optimization might be best for deploying that service component. This might seem confusing when compared to the current discussion of horizontal virtualization. The difference lies in the functional nature of the discussion. There are specific cases in which horizontally scaled infrastructures can deploy services better using the technical requirements of infrastructure optimization. However, the current discussion on horizontal virtualization is much more generic. It seeks to provide the broadest possible functional solution, which can then be implemented through different technical paths. The important thing is to focus on the functional aspects of the presentation.
An example of managing multiple systems as one is the use of a grid. A grid system can leverage the various components of the infrastructure and provide a single common interface to the applications that will run on it. This totally abstracts the infrastructure to minimize management overhead and enable a focus on the application mix that, in the end, supports the service. The grid has the potential to be a major advancement in reining in the ongoing support costs that often plague horizontal infrastructures.
There are multiple grid solutions currently on the market that can deliver the necessary functionality. The N1 Grid Engine software is one of them. However, the N1 Grid Engine software grid as a technical solution is not the ubiquitous answer to this problem just yet. The grid is still limited in its use due to the lack of commercially available applications that are "grid enabled." Web services and the widespread deployment of J2EE-based applications promise to speed the enabling process. The specifics of the grid and its relevance to the N1 Grid architecture are explored further in Chapter 11.
Vertical and Horizontal Virtualization Examples
The functional requirements of both vertical and horizontal virtualization have been the focus of discussions up to this point. The next step in developing an understanding of application-level virtualization is to look at some examples. A vertical virtualization example shows you how the N1 Grid Containers software meets the technical requirements of vertical virtualization. A horizontal virtualization example shows you how the N1 Grid Engine software meets the technical requirements of horizontal virtualization. Finally, an example is given of the Sun Cluster 3.1 software as a technical solution that actually bridges both vertical and horizontal virtualization in its implementation of application-level virtualization.
Vertical Virtualization with N1 Grid Containers
N1 Grid Containers is a breakthrough approach to virtualization with multiple software partitions per single instance of the operating system. To reduce the complexity and cost of managing multiple servers, system administrators are consolidating applications onto fewer servers. In doing so, it becomes increasingly important for them to have the ability to maintain isolation between the applications. N1 Grid Containers offer the ability to isolate applications using flexible, software-defined boundaries. N1 Grid Containers make consolidation simple, safe, and secure.
N1 Grid Containers establish boundaries for resource consumption (such as memory or CPU time) and provide various levels of fault isolation, as well as security isolation. As processing requirements change (for example, an unexpected world event occurs and causes a surge in hits against a news-oriented web site), one or more of the boundaries of the container can be expanded to accommodate the spike in resource consumption. Fault and security boundaries are maintained when resource boundaries are updated, whether the update is done by an administrator or through predefined policies that result in automatic updates when certain conditions are met.
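On Solaris 10, this container model is realized with Solaris Zones plus integrated resource management. The following is a minimal sketch, not a definitive recipe: the zone name, paths, network address, and share value are hypothetical, chosen only to illustrate how a container combines isolation boundaries with an expandable resource boundary.

```shell
# Hypothetical zonecfg(1M) command file for a container named "newszone".
# Apply with: zonecfg -z newszone -f newszone.cfg
create
set zonepath=/zones/newszone
set autoboot=true
# Security/fault boundary: the zone gets its own network identity
add net
set address=192.168.1.10
set physical=hme0
end
# Resource boundary: a CPU-shares entitlement that can later be expanded
add rctl
set name=zone.cpu-shares
add value (priv=privileged,limit=20,action=none)
end
```

After the zone is installed and booted (zoneadm -z newszone install, then zoneadm -z newszone boot), an administrator can raise the entitlement on the running zone with prctl, which is how a spike such as the news-site example above would be absorbed without touching the fault or security boundaries.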
N1 Grid Containers provide the basis for a completely new approach to managing an IT infrastructure by enabling the data center to be treated as a fabric of interconnected computing resources that can be flexibly partitioned into isolated execution environments for application services. This provides great flexibility in provisioning application services because the application environment is transportable from one server partition to another with minimal management overhead. Application services have an isolated execution environment wherever they are provisioned, so it will be easy to consolidate applications onto fewer servers.
The use of a common model for partitioning simplifies service-level management of end-to-end application services. Sun expects to see development of high-level service management applications that enable management and monitoring of the end-to-end services and provide detailed resource usage accounting for both application components and end-to-end services. This new management paradigm will achieve significant reductions in TCO by creating management efficiencies that reduce administration costs through automation, while at the same time providing greater control over end-user service levels.
There are many varied approaches to system partitioning available in the industry. The container method directly attacks the key business drivers of the N1 Grid vision. By focusing on extending safe and secure multiple application support through a single instance of the operating system, the container approach promises to deliver increased functionality and lower costs better than any other approach. In his industry article reviewing this technology, "Sun Wants to Get Into the Zone with Future Partitions," Timothy Prickett Morgan stated that "One might call this extremely logical partitioning."
This extremely logical partitioning approach can deliver the application-level virtualization required by the architecture, doing so at the cost levels required by the business drivers. Reducing the number of actively managed operating system instances in an environment is a direct cost saver, and N1 Grid Containers make that happen.
A key component of the N1 Grid Containers is resource management. The Solaris Resource Manager software, integrated into the Solaris OS, helps system administrators manage system resources more effectively. It enables system administrators to control resources such as CPU, physical memory, and network bandwidth for multiple users or applications, providing more predictable service levels. No single user or application is allowed to monopolize the system resources and impact others sharing the same system.
The Solaris 9 Resource Manager software enables system administrators to monitor resource consumption and obtain accounting information for billing purposes. It redefines the traditional model of hosting one application per system and offers a flexible solution that enables you to consolidate servers to reduce service-level cost, while delivering more predictable service levels.
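As a sketch of how this looks in practice (the project names and share values below are illustrative assumptions, not from the text), workloads are grouped into projects, and the Fair Share Scheduler divides CPU among projects in proportion to their shares:

```shell
# Illustrative consolidation of two workloads onto one OS instance.
# Make the Fair Share Scheduler the default scheduling class:
dispadmin -d FSS
# Define two projects with a 3:1 CPU entitlement between them:
projadd -c "Web tier" -K "project.cpu-shares=(privileged,60,none)" web
projadd -c "Batch reporting" -K "project.cpu-shares=(privileged,20,none)" batch
# Launch a workload under its project so its usage is metered:
newtask -p web /usr/local/bin/start-webserver
# Inspect (or retune) the entitlement without restarting anything:
prctl -n project.cpu-shares -i project web
```

Extended accounting on the same system then supplies the per-project consumption records that support the billing model described above.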
N1 Grid Containers are the realization of a strategy that has been evolving for a number of years. Previously named the Solaris Container strategy, the implementation of N1 Grid Containers is not something that has just happened overnight. Through careful and meticulous evolution, various features have been integrated into the Solaris OS to support the end goal of containers. The complete delivery of N1 Grid Containers is the culmination of the strategy to deliver the functionality required to design and implement application-level virtualization.
The advantages of N1 Grid Containers are:
The disadvantages of N1 Grid Containers are:
Horizontal Virtualization with the N1 Grid Engine
The N1 Grid Engine computing model provides dependable, consistent, and inexpensive access to computing resources, and it helps an enterprise leverage the intellectual power of its employees by enabling them to use compute resources more efficiently. It is a model that helps the enterprise lower costs, enter new areas of development, develop better products, and deliver them faster to market. The N1 Grid Engine computing model is built on three tiers: cluster grids, which provide resources to single departments; campus grids, which consolidate cluster grids throughout your enterprise; and global grids, which create very large virtual systems beyond organizational boundaries.
The N1 Grid Engine software is Sun's standard solution for managing cluster grids. It provides transparent resource access, high resource utilization, and increased throughput in a cluster grid environment at the department or project level. Through dynamic resource balancing and policy-based resource allocation, the Sun Grid Engine software automatically matches and provides resources on demand throughout an organization to users, teams, projects, and departments.
A unique feature of the Sun Grid Engine software is its ability to quickly provide computer resources where they are needed most in an organization. The N1 Grid Engine software features a policy module that keeps track of the computing resources to be spent by each user, team, department, or project in the entire organization over time. The policy module helps to eliminate any reluctance by users, teams, or departments to share resources by creating a virtual space where access to resources can be negotiated. Through negotiation or management-set criteria, the policy module establishes and enforces policies to ensure that groups within an organization get the right share of resources to do their jobs.
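A brief sketch of how work enters such a cluster grid may help (the job name, project, and script contents here are hypothetical): users wrap their work in a job script carrying embedded Grid Engine directives and submit it with qsub, and the policy module charges the consumed resources against the named project.

```shell
#!/bin/sh
# Hypothetical Grid Engine job script; "#$" lines are directives read by qsub.
#$ -N nightly_render          # job name
#$ -P engineering             # project charged by the share-based policy module
#$ -cwd                       # run in the submission directory
#$ -o render.out -e render.err
./render scene.dat

# From any submit host:
#   qsub render.sh     submits the job to the cluster grid
#   qstat              shows where the scheduler dispatched it
```

The scheduler, not the user, decides which execution host runs the job, which is precisely the abstraction that lets many machines be managed and consumed as one resource.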
The advantages of the N1 Grid Engine are:
The disadvantages of the N1 Grid Engine are:
Horizontal and Vertical Virtualization with the Sun Cluster Software
The Sun Cluster 3 software takes general-purpose clustering beyond the realm of high availability (HA) by adding the simplicity of single-system manageability and the potential of seamless scalability. In essence, the cluster becomes a single managed entity and presents itself and its services to clients as if it were an individual server.
The Sun Cluster 3 framework extends the Solaris OS, enabling core Solaris OS services, such as devices, file systems, and networks, to operate seamlessly across a SunPlex™ system, while still maintaining full Solaris OS compatibility with existing applications. The Sun Cluster 3 software provides HA and scalability to everyday Solaris OS applications through continuous network and data availability. Applications that have agents written for the Sun Cluster 3 API can achieve even higher levels of availability and scalability.
Global Network Services
In the Sun Cluster 3 architecture, incoming requests from the network go to a global interface (a network interface card hosting the global IP address). The requests are then load balanced to the various instances of the distributed application running within the cluster. Outgoing packets go out to the network through the local network interface card to prevent saturation of the global interface. In the event of a failure, the global IP address fails over to a backup network interface card. In this manner, the SunPlex global network service provides a highly available global IP address, as well as the simplicity of a single system.
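As a rough sketch using the Sun Cluster 3.1 scrgadm command (the group names, hostname, and node count below are hypothetical), the global interface is configured as a SharedAddress resource, and a scalable resource group then runs application instances behind it on multiple nodes:

```shell
# Hypothetical SunPlex configuration: a global IP address fronting a
# scalable service.
scrgadm -a -g sa-rg                  # failover group hosting the global interface
scrgadm -a -S -g sa-rg -l www-svr    # SharedAddress resource for hostname www-svr
# Scalable group for the application instances spread across four nodes:
scrgadm -a -g web-rg -y Maximum_primaries=4 -y Desired_primaries=4
```

Scalable data-service resources would then be added to web-rg and pointed at the shared address, so incoming requests to www-svr are load balanced across all active instances while clients see a single server.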
Global Devices and Global File Services
Data access is significantly enhanced in Sun Cluster 3 with the addition of global devices and global file services. With global devices, every domain has access to any device on the SunPlex system, such as a disk or CD-ROM drive, even if that device is not physically connected to that domain.
Global file services extend the capabilities of global devices by using shared storage devices (that is, storage with physical connections to more than one domain). The data is both highly available and accessible to application services running on any domain in the SunPlex system. Centralization of global file services on behalf of the SunPlex system facilitates a simple "single-point-of-management" paradigm. You can use the failover file service (available beginning in Sun Cluster 3.0 5/02) to fail over the file system instead of using the global file service. You can also decide to use UFS or VERITAS VxFS as the file system.
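A global file service is requested simply through the mount. As a sketch (the device and mount point are hypothetical), the global mount option in /etc/vfstab makes a UFS file system on a global device visible at the same path on every node of the SunPlex system:

```shell
# Hypothetical /etc/vfstab entry, identical on every cluster node:
# device to mount       device to fsck          mount point  FS  pass at-boot options
/dev/global/dsk/d4s0    /dev/global/rdsk/d4s0   /global/web  ufs 2    yes     global,logging
```

The failover file service alternative mentioned above would instead mount the file system only on the node currently hosting the service, trading cluster-wide visibility for lower overhead.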
The Sun Cluster 3 framework enables a distributed application to run within cluster control. The framework makes a distributed application more manageable, and it enables automatic recovery of service levels. Instances of a distributed application can be installed and brought online or offline on multiple cluster nodes with a single procedure. Distributed applications can also use the SunPlex global network service for load balancing with its highly available IP address.
SunPlex systems provide commonly used load-balancing schemes such as round-robin and sticky. In addition, client affinity is maintained so that transaction requests from a client machine are always sent to the same cluster node. Storing application configuration data on the SunPlex global file service enables faster recovery of failed application instances. You can increase capacity and continuity by adding more domains or systems to the SunPlex system. Service levels are maintained in the event of any number of potential outages, planned or unplanned.
The Sun Cluster 3 architecture delivers inherent HA services. It enables IT organizations to maintain service levels on critical applications and services. Failover services provide HA to single-instance applications by failing the application over to a backup node.
The advantages of the Sun Cluster software are:
The disadvantages of the Sun Cluster software are: