Provisioning the Application


Provisioning is the third building block of the application layer functional architecture.

Provision: To take a component, or series of components, to a higher level of functionality. In this chapter, provisioning refers to the application layer, where it typically involves installing application packages on top of the operating system infrastructure to provide business services.

Because it is used so frequently, application-level provisioning must be clearly defined as a separate building block in the architecture. The key value of provisioning at this layer is the context of the applications and services being delivered. As at the infrastructure layer, provisioning at the application layer is a tangible and common activity in most IT environments. Unlike the infrastructure layer, however, manually installing application software to instantiate a service and making configuration changes to those application components are far from routine tasks. Application-level provisioning is more complex than infrastructure provisioning: the large number of application components that make up the services IT delivers, combined with the number of installation options for each component, results in a very complicated environment.

The highly complex set of application provisioning tasks is most commonly carried out through manual processes, and like any manual process, it is prone to error. Worse, because these tasks are more complex and more varied, the potential for error due to inexperience is much higher. Installation procedures differ for every application, and environment-specific documentation is usually poor or outdated. Lengthy command-level system interactions further contribute to the lack of deployment quality prevalent at the application layer.

Even if the process is achieved without error, the excessive amount of time it requires is a burden on the IT staff. This very real problem is directly addressed by application optimization and the migration to an automatically provisionable environment. Automated application provisioning enables a consistent and repeatable deployment process and dramatically decreases the time required to deliver services. Automatically deploying applications and services to a virtualization layer on top of the already-optimized infrastructure rounds out the architecture of application optimization and completes the foundation needed to achieve strategic flexibility.

The value of having an architecture that supports the automatic provisioning of applications and services is very high. Building out the entire stack of a particular service is similar to building an inverted pyramid of complexity and value. The higher up the stack the process goes, the more complex its delivery becomes. At the same time, the value of that component to the overall service increases.

Implementing the application optimization building block might be the most difficult of the building blocks, but it has the highest value. And, that value is not just in supporting application optimization by itself. The value is also a major contributor to the overall value of strategic flexibility. Delivering services that support the overall business is the business of IT. The application or service layer is the largest, most complex, and most valuable part of that delivery. Automated deployment of application and service components, which delivers dramatic improvements in quality and time, is tremendously valuable.

Automated Deployment

One of the central ideas of the whole N1 Grid vision is to move from a server-centric view to a service-centric view. Nowhere can this change in view be leveraged more effectively than at the application layer. Today, application deployment is commonly a host-based manual process. Each server in an environment must be manually and separately configured, deployed, and maintained by using compact discs, custom scripts, manual processes, and outdated run books. All of these activities preserve the server-centric view. Changing that view requires an automated process that eliminates the error-prone components of an application install and supplements that process with the features required for a data center, or service-centric, view of the process.

Automated deployment of applications and services is the most visible component of the application provisioning building block. Application-level provisioning is the act of taking a component, or series of components, to a higher level of functionality. Accomplishing this task with little to no manual intervention is the key concept of automated deployment. In this context, provisioning includes installing the bits, configuring and binding them to the underlying infrastructure, incorporating them into the greater service, and starting the component. A tool that can install the application bits and complete the higher-order tasks associated with true provisioning is critical to getting past the unmanageable world of scripts and manual deployments and achieving the desired optimization.

Achieving application optimization through provisioning is facilitated by an auto-deployment tool, but truly moving to the service-centric view requires more than basic functionality. A data center-level view of applications and a business-level view of services must both be present to support the service-centric environment. While IT often focuses on application components such as a web server or a database server, the business focuses on the service delivered by the combination of these components.

These somewhat different views are both supported by the fact that services are provisioned through the combination of applications. Some of the functionality required to support the hierarchical view of applications as components of a service includes:

  • Support for use case-driven custom deployments of application components

  • Ability to combine numerous installable components in a logical way to directly support the instantiation of services

  • End-to-end automation of the deployment process, including distribution, configuration, and startup of packaged and custom applications

  • Real-time generation of application configuration for the target environment

  • Central repository that tracks all deployment and configuration data for reference and reconstruction

  • Detailed logs of every action taken by the system across all applications and managed servers

This move toward functionality that supports automated deployments of applications and services is in direct support of the goal of strategic flexibility. Focused on the key business drivers of the N1 Grid vision, application-level provisioning can:

  • Accelerate application provisioning from days to minutes

  • Reduce the management costs for application operations

  • Increase application availability by minimizing configuration errors

  • Ensure standards compliance through a provisioning audit trail

These features provide very specific value, but they are also key to supporting higher-level goals. Those goals, achieved through data center optimization, are the focus of Chapter 10.

Components and Plans

The functional view of this part of the architecture, presented through automated deployment, focuses on what the environment looks like. Consistent with the SunTone AM, the primary method of design and implementation is use case driven.

Automated deployment gets its value from eliminating the error-prone manual tasks of application installations. To do this, the manual steps must be captured in their optimal state and then automated. This capture happens through use cases. It is important to develop a use case for the best-practice installation of each major application or service component that will be automated. This might seem time-consuming up front, but without it, an automated install will not deliver what is really needed. The use cases directly support the creation of components.

The concept of components is key to the goal of standards-based, repeatable application component installations. A component is the use case-driven, predefined instantiation of an installable entity. A component can be a patch, a package, a series of files, or an entire middleware application such as a relational database. Components deliver the core functionality of automating application installations in a standard and repeatable way. The use cases for each application guide the automation process to develop the component that meets the needs of the specific installation. Although a component should be as standardized as possible, it must still support runtime variables for host-specific installation settings, such as port numbers and IP addresses.
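As a minimal illustration of this idea (a generic Python sketch, not the N1 Grid SPS itself; the names, commands, and settings are hypothetical), a standardized component can carry its installation steps as templates whose host-specific values are resolved only at deployment time:

    from dataclasses import dataclass, field
    from string import Template

    @dataclass
    class Component:
        """A standardized, reusable definition of an installable entity."""
        name: str
        install_steps: list                    # step templates with ${variable} placeholders
        defaults: dict = field(default_factory=dict)

        def render(self, host_settings: dict) -> list:
            """Resolve runtime variables (ports, IP addresses) for one target host."""
            settings = {**self.defaults, **host_settings}
            return [Template(step).substitute(settings) for step in self.install_steps]

    # One component definition, reused across hosts with different runtime settings.
    web_server = Component(
        name="web-server",
        install_steps=[
            "install ${package}",                        # install the bits
            "configure --listen ${ip_address}:${port}",  # bind to the infrastructure
            "start web-server",                          # start the component
        ],
        defaults={"package": "webserver.pkg", "port": "8080"},
    )

    for host in ({"ip_address": "10.0.0.11"}, {"ip_address": "10.0.0.12", "port": "80"}):
        print(web_server.render(host))

The point is only that the same component definition is applied identically everywhere, with nothing but the declared variables changing from host to host.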

The common standards-based and repeatable application installations supported by components are important to drive a reduction in the permutations of individual applications. The unfortunate truth of manual installations is that almost every single installation is different. Without functionally enforced standards, each application installation becomes the best effort for that particular day. This results in massive permutations across the entire data center and leads to greatly increased costs of operations. If every application installation cannot be guaranteed to be the same, it must be expected to be different. This specifically drives the server-centric view of operations that the N1 Grid architecture seeks to avoid. The transition to components as a standardized way of controlling installation solves this problem.

While components are key to providing the basics of application installations, plans represent the functionality that delivers the service-centric view. Plans are simply the data center-level combination of components that establish a service. Because components represent all of the pieces of a service, a plan assembles the correct components together and enables the automated installation of entire services at the data center level. Plans also enable the high-level view required to deliver the goals of application optimization that directly support strategic flexibility.
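The sketch below (again a generic Python illustration under assumed names, not the N1 Grid SPS plan format described later in this chapter) shows the essence of a plan: an ordered, data center-level mapping of components to target servers that can be executed as one unit to stand up a complete service:

    from dataclasses import dataclass

    @dataclass
    class Step:
        component: str          # name of a predefined, standardized component
        targets: list           # hosts that the component is deployed to

    @dataclass
    class Plan:
        """A data center-level recipe that assembles components into a service."""
        service: str
        steps: list

        def execute(self, deploy):
            # 'deploy' is any callable that installs one component on one host.
            for step in self.steps:
                for host in step.targets:
                    deploy(step.component, host)

    # A three-tier service instantiated in one operation from standardized components.
    storefront = Plan(
        service="storefront",
        steps=[
            Step("database",   ["db01"]),
            Step("app-server", ["app01", "app02"]),
            Step("web-server", ["web01", "web02"]),
        ],
    )

    storefront.execute(lambda component, host: print(f"deploy {component} on {host}"))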

The technical requirements that dictate how to implement an automated provisioning environment are not dependent on the specific components of the architecture. An optimized infrastructure provides better efficiency across the board, but it is not a requirement for provisioning. In this specific case, application-level virtualization would add even more efficiency, but it is also not a requirement. In their IDC white paper, "Server Provisioning, Virtualization, and the On-Demand Model of Computing: Addressing Market Confusion," Paul Mason and Dan Kusnetzky review virtualization and provisioning in the industry as follows:

In summary, one may have server virtualization without server provisioning, and one may have server provisioning without server virtualization. However, it is also clear that the goal of flexibly exploiting virtualization concepts based on both virtualizing up a large number of smaller servers into a smaller number of virtual servers and virtualizing down a very large server resource into many smaller resources cannot be achieved in a cost-effective manner without a server provisioning solution. (IDC, June 2003)

As expected, the path of execution, and the value delivered from that journey, are specific to each IT environment. Along with the greater efficiency achieved through the building block architecture, the design and deployment of an application-level automated deployment strategy can dramatically enable strategic flexibility. For a better understanding of how this functional architecture can be achieved, see "N1 Grid Service Provisioning System" below for an example of application-level provisioning.

Observability Integration

It is important not to overlook the needs of the observability infrastructure. Observability is used here as an umbrella term for the common monitoring and management tools typically found in an IT environment. Tools that monitor hardware components, processes that track application availability, and entire environments that manage data center operations all fall into the category of observability. The important linkage is the new dynamic nature of the data center enabled by strategic flexibility: the observability infrastructure must evolve if this dynamic environment is to be managed effectively. The same static, server-centric view that permeates the core infrastructure of the IT environment also exists in the observability layer.

As the core infrastructure evolves to service-centric operation, so too must observability. Application optimization, deployed to its fullest extent, must be a key enabler to support a dynamic transition of the observability infrastructure. Likewise, the observability infrastructure must also evolve to support a more dynamic, often virtualized, and highly automated data center. These changes cannot happen in isolation because both environments are dependent on each other. Evolving the dynamic nature of the observability infrastructure is not discussed here, but you should ensure that this important dependency is not overlooked.

N1 Grid Service Provisioning System

N1 Grid Service Provisioning System (N1 Grid SPS) software enables IT operators and administrators to automate the process of provisioning applications, including the distribution, configuration, and setup of packaged and custom applications, patches, and updates. The N1 Grid SPS performs infrastructure virtualization and provisioning, enabling IT organizations to implement a highly efficient and highly flexible environment in which to operate their data centers.

With the N1 Grid SPS, IT operators and administrators can:

  • Accelerate application deployment time from days to minutes

  • Reduce costs by automating manual, repetitive tasks

  • Increase application availability by minimizing configuration errors

The N1 Grid SPS applies an object-oriented approach to:

  • Application components

  • Tasks that IT operators perform on application components (for example, deployment, configuration, and analysis)

This object-oriented approach ensures that all of the intelligence about an application is automatically taken into account every time that component is acted upon. This consistency makes data center operations more accurate and less prone to error. Through application awareness (that is, knowledge of what an application requires as a whole), IT operators gain unprecedented control over applications and data center operations.

The features and benefits of the N1 Grid SPS are:

  • Automated deployment: End-to-end automation of the deployment process, including distribution, configuration, and startup of packaged and custom applications

  • Deployment simulation: Complete simulation of deployments before any changes are made

  • Dynamic configuration: Real-time generation of application configuration for the target environment

  • Dependency management: Ability to encode application dependency information that is checked during deployment simulation (a simplified sketch of this idea follows the list)

  • Application comparison: Ability to track application configuration drift and to pinpoint unauthorized changes to servers

  • Version control: Central repository that tracks all deployment and configuration data for reference and reconstruction

  • Logging and reporting: Detailed logs of every action taken by the system across all applications and managed servers
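To make the deployment simulation and dependency management features more concrete, here is a simplified, hypothetical Python sketch (not the product's implementation; the dependency data and inventory are invented for the example). Encoded dependency information is checked against what each target already has before any change is made:

    # Hypothetical dependency data; the product encodes this in component metadata.
    dependencies = {
        "app-server": ["jdk"],
        "web-server": ["app-server"],
    }

    # Hypothetical inventory of what is already installed on each target host.
    installed = {
        "web01": {"jdk"},
        "web02": set(),
    }

    def simulate(component: str, installed_on_host: set) -> list:
        """Dry run: report missing prerequisites without touching the host."""
        return [dep for dep in dependencies.get(component, [])
                if dep not in installed_on_host]

    for host, have in installed.items():
        missing = simulate("app-server", have)
        print(f"{host}: deploy app-server ->", "ok" if not missing else f"missing {missing}")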

Using these features, the N1 Grid SPS is most commonly deployed to:

  • Automate and manage software rollouts, patches, and upgrades

  • Develop models of existing deployment processes

  • Determine what software is already installed on hosts

  • Compare the configuration of hosts

  • Monitor and maintain documented and consistent configurations

The N1 Grid SPS infrastructure (master-server architecture) is currently supported on the Solaris 2.6, 7, 8, and 9 OS releases. Additional heterogeneous support includes Windows 2000, Linux, and AIX. The N1 Grid SPS is a distributed software platform that consists of the following special-purpose applications (a simplified, generic sketch of how the master and agents interact follows the list):

  • Master server

    The master server is a central server that stores components and plans and provides an interface for managing application deployments. The master server can:

    • Store components and plans in a secure repository (embedded SQL relational database accessible only to authorized users)

    • Perform version control on the objects stored in the repository

    • Authenticate IT operators and ensure that only authorized users perform specific operations

    • Include special-purpose engines for performing tasks such as dependency tracking and deployments

    • Provide an HTML interface accessible through a web browser (Netscape Navigator)

    • Provide a command-line interface

  • Remote agents

    Remote agents are small Java technology-based management applications that reside on each target host. Remote agents can:

    • Report server hardware and software configurations to the master server

    • Start and stop services

    • Manage directory contents and properties

    • Install and uninstall software

    • Run operating system commands and native scripts specified in component models

  • Local distributors

    Local distributors are master server proxies that optimize network communications across data centers and through firewalls. One or more can be deployed. Local distributors can:

    • Minimize network traffic during deployments

    • Minimize firewall reconfiguration
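The division of labor among these parts can be pictured with the following generic Python sketch (illustrative only; it is not the N1 Grid SPS protocol, and the local distributor tier is omitted for brevity). The master holds the deployment logic and the central record, while lightweight agents on each target execute individual steps and report back:

    class RemoteAgent:
        """Runs on a target host; executes steps and reports results to the master."""
        def __init__(self, host: str):
            self.host = host

        def run(self, step: str) -> dict:
            # A real agent would install software, start services, run scripts, etc.
            return {"host": self.host, "step": step, "status": "ok"}

    class MasterServer:
        """Stores plans, drives deployments through the agents, and logs every action."""
        def __init__(self, agents):
            self.agents = {agent.host: agent for agent in agents}
            self.history = []                          # central record of all actions

        def deploy(self, component: str, hosts):
            for host in hosts:
                result = self.agents[host].run(f"install {component}")
                self.history.append(result)
            return self.history

    master = MasterServer([RemoteAgent("web01"), RemoteAgent("web02")])
    for record in master.deploy("web-server", ["web01", "web02"]):
        print(record)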

The N1 Grid SPS uses an object-oriented methodology for managing application components and plans, known as the N1 Grid SPS object model. This object model is the heart of the product's implementation and provides the flexibility and extensibility required to deliver a valuable solution to every IT operator and administrator. The object model consists of components and plans.

Object Model Components

Components are software that is configured, managed, and monitored as distinct units. A component can be as simple as a collection of files and directories or as complex as a complete application such as BEA WebLogic. Operating system packages, patches, and J2EE JAR files are additional examples of components. Basically, a component is a software entity to be installed, managed, and monitored, whether it is a single file or a monolithic application.

The N1 Grid SPS stores each component, along with metadata about the component, including how to:

  • Install and uninstall the component

  • Configure the component

  • Control the component

  • Analyze the component

  • Start up and shut down the component

By reading this metadata and comparing components, the N1 Grid SPS can identify and track dependencies among components. Additionally, components support embedded variables, enabling server-by-server data, such as port numbers and IP addresses, to be set at runtime rather than hard coded in traditional data center scripts.
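As a simplified illustration of what reading and comparing this metadata makes possible (a hypothetical Python sketch; the dependency data and field names are invented for the example), encoded dependencies can be used to derive a valid deployment order before anything is installed:

    from graphlib import TopologicalSorter   # Python 3.9+

    # Hypothetical component metadata recording what each component depends on.
    metadata = {
        "jdk":        {"depends_on": []},
        "database":   {"depends_on": []},
        "app-server": {"depends_on": ["jdk", "database"]},
        "web-server": {"depends_on": ["app-server"]},
    }

    graph = {name: set(info["depends_on"]) for name, info in metadata.items()}
    order = list(TopologicalSorter(graph).static_order())
    print(order)   # for example: ['jdk', 'database', 'app-server', 'web-server']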

The N1 Grid SPS has a large component library that contains prebuilt templates, including templates for simple files and directories, J2EE EAR, WAR, and JAR files, Solaris OS packages and patches, Sun Java Web Server software, BEA WebLogic, and Oracle. Component models are written in XML and are easily extensible. The N1 Grid SPS console supports easy modification of existing templates and authoring of entirely new templates for customer-specific application deployments.

Object Model Plans

Plans are XML files that document what actions should be performed on which servers. Just as it captures information about applications in component models, the N1 Grid SPS stores information about data center procedures in plans. Operators select plans from a central version-controlled repository. A plan brings together one or more components, the server or servers they will be deployed to, and the process and procedure for deployment into a fully automated service-deployment element. The N1 Grid SPS uses plans to perform operations such as deployments and to coordinate activities among components and among servers during those operations. By providing a common format for execution, the N1 Grid SPS replaces the chaotic variety of scripts too commonly used in data center operations. The plan is what brings it all together: it is what IT operators and administrators execute, and it delivers the true value of this important piece of data center optimization infrastructure. The plan also provides tangible value to the business, and if properly delivered, it directly addresses the CTQs of data center operations.

The combination of a distributed infrastructure supported by a relational database backend, proxy servers for security and extensibility, Java technology-based agents, and web-based and command-line interfaces provides a robust platform for application and service provisioning. Together with the components and plans object model, the N1 Grid SPS delivers a mature level of application optimization through automated provisioning and a key building block of the entire N1 Grid architecture.


