Using IT to strategically benefit the company is one way to both survive and thrive in today's business climate. You must look at system architecture strategically. A strategic architecture enables flexibility, agility, and growth; it anticipates change, and it does all of this in a cost-conscious way. In the late 1990s, the speed at which IT infrastructures were changing, often with exponential growth factors, negated some of the concern over overall cost. This is no longer true. A strategic architecture must take overall cost into account.
The heart of strategic architecture is adaptability and responsiveness to changes, both internal and external to the system. During the Internet explosion, systems were over-provisioned with the understanding that someday the excess capacity would be used. Today, businesses want implementations that are much more in line with current capacity, with the ability to add more capacity as needed. They want the ability to make changes to the software and hardware based on strategic and tactical conditions.
The N1 Grid system software can be used to facilitate strategic flexibility. Because it encapsulates and automates the various IT build and provisioning functions, and because it uses the N1 Grid virtualization technology, system changes can be made rapidly and correctly every time.
For example, the ERP system usually provides various business planning functions and has batch jobs that perform monthly payroll and personnel planning tasks. It is responsible for determining health provider schedules for the next month, and it consults many factors and conditions to determine how this schedule should be developed. This activity needs to be done in a timely fashion, but it is not something that is performed often. These systems sit idle for most of the month, working only on the one or two days at the end of the month that require this additional processing.
The provider reservation and booking system has additional capacity at the end of the month because new reservations are not taken until the next month. IT administrators would notice this trend by using the observability capabilities built into the N1 Grid system. The N1 Grid software could be used to repurpose the reservation systems so that they are part of the ERP system, based on a schedule. The repurposing could be initiated from an enterprise management tool or manually by a system administrator.
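This schedule-driven repurposing can be sketched in a few lines. The pool names and the two-day batch window below are illustrative assumptions, not N1 Grid interfaces; the point is that the policy itself is simple once provisioning is automated.

```python
from datetime import date
import calendar

# Hypothetical sketch: decide which service pool a repurposable server
# belongs to, based on a month-end batch window. The pool names
# ("erp-batch", "reservations") and the window length are assumptions.
def pool_for(today: date, batch_window_days: int = 2) -> str:
    """Return the pool a repurposable server should serve on a given day."""
    last_day = calendar.monthrange(today.year, today.month)[1]
    if today.day > last_day - batch_window_days:
        return "erp-batch"      # month-end payroll and planning runs
    return "reservations"       # normal booking workload
```

An enterprise management tool, or an administrator, would evaluate this policy daily and trigger the N1 Grid provisioning actions when the answer changes.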
Because the N1 Grid system is tested and requires little manual intervention, these types of changes are performed accurately and in a timely manner each time. The business is able to use other systems to meet its demands, without having to dedicate systems to a once-a-month task. Eventually, based on the policy and service levels described when the system is built, the N1 Grid system will be able to detect the excess capacity and make the recommendations or changes automatically. Until then, the base N1 Grid technologies, such as the N1 Grid PS, can be used to perform many of the functions that are initiated and managed by IT personnel.
Today, IT personnel act as the optimization engine, using the tools provided to institute changes. Eventually, many of these decisions will be automated, but first the telemetry and observability, along with the control systems, must be in place. Covert virtualization and self-optimization are the ultimate goals of the N1 Grid vision. Strategic flexibility requires both infrastructure optimization and application optimization, and it builds on these capabilities to achieve optimization of the data center.
The N1 Grid PS or JumpStart software can be used to take new or existing hardware, configure the network, install the operating systems and applications, and enable the service it provides. Usually, this is performed on common platforms, using standardized builds. Standards are important because they reduce the number of system types that need to be managed, which reduces the complexity of the managed environment.
In the ERP system example, various components that make up the ERP service are built up using automation. First, the servers are provisioned using a common operating system load, security modifications, and common agents. The N1 Grid PS provisions the servers so that each server is attached to the correct VLAN. If the system needs to be reprovisioned, it can be "soft-recabled" by the provisioning system.
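The "soft-recabling" idea can be illustrated as nothing more than rewriting a switch-port-to-VLAN map instead of physically moving cables. The data structure and function below are assumptions for illustration, not an N1 Grid API:

```python
# Illustrative sketch of "soft recabling": the provisioning system
# reassigns a server's switch port to a new VLAN in software.
def soft_recable(port_vlan_map: dict, server_port: str, new_vlan: int) -> dict:
    """Return an updated port-to-VLAN map with one server reassigned."""
    if server_port not in port_vlan_map:
        raise KeyError(f"unknown switch port: {server_port}")
    updated = dict(port_vlan_map)   # leave the original map untouched
    updated[server_port] = new_vlan
    return updated
```

Because the change is a data update rather than a physical task, it can be validated, automated, and rolled back like any other provisioning step.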
The ERP system uses common platforms, with standardized servers across all tiers of the service. For example, all web servers use blade servers, all application servers and print servers use mid-range servers with four to eight CPUs, and the database servers are on clustered 24-processor servers.
The N1 Grid PS can be used to provide end-to-end service provisioning, including installing complex applications, integrating various management systems to enable monitoring, and performing other service provisioning tasks. The ability to deploy applications independently of the build process enables deployment virtualization. Using common platforms automatically created with the N1 Grid PS, moving applications around becomes easy (application life cycle mobility). Add N1 Grid Containers, and mobility is even easier because each application has its own run-time environment. Containers enable enhanced virtualization and better security, which enables the IT staff to increase application density on larger systems without concerns about performance, cross-contamination, or security.
Extending the ERP example into automated application deployment builds upon the foundation provided by the infrastructure optimization discussed in Chapter 9. Common platforms are provisioned with a basic operating system and common agents, then put into a resource pool that can be used for application deployment. Application deployment is performed by using the N1 Grid PS. It installs the various application binaries, customized code, and configuration files onto the targeted server. It can also register the server with other systems, such as DNS or load balancers, based on the needs of the service.
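A deployment like the one just described can be thought of as an ordered plan: install the binaries and configuration, then register the new instance with the surrounding systems. The step names and plan format below are illustrative assumptions, not the N1 Grid PS plan language:

```python
# Hedged sketch of an end-to-end deployment plan. Each step is a tuple
# of (action, server, service); a real provisioning system would
# execute these in order and stop on failure.
def deployment_plan(server: str, service: str, register_lb: bool = True) -> list:
    steps = [
        ("install_binaries", server, service),
        ("install_config", server, service),
        ("register_dns", server, service),
    ]
    if register_lb:                      # only front-end tiers need this
        steps.append(("register_load_balancer", server, service))
    return steps
```

Keeping registration in the same plan as installation is what makes the deployed server immediately usable by the service, rather than requiring a follow-up manual task.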
By encapsulating the installation and configuration of the software, the N1 Grid SPS can then be extended to provide additional features, such as application mobility, rollback, and change control. This encapsulation enables you to optimize the data center based on various needs, such as time, cost, capacity, and business conditions.
Optimization and Flexing
Using the combination of the automation tools, as well as telemetry and observability information provided by the system, IT administrators can make changes to the data center based on business and technical conditions. Front-end web servers can be brought up or down based on capacity. Back-end batch processing can be enabled only when needed, not requiring dedicated hardware.
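A flexing decision of this kind reduces to comparing measured utilization against thresholds. The thresholds and return values below are illustrative assumptions; a real policy would also consider minimum instance counts and rate limits:

```python
# Minimal sketch of a flexing policy for a front-end web tier:
# scale out when utilization is high, scale in when it is low.
def flex_decision(avg_utilization: float, low: float = 0.3, high: float = 0.7) -> str:
    """Return 'scale_out', 'scale_in', or 'hold' for a front-end tier."""
    if avg_utilization > high:
        return "scale_out"
    if avg_utilization < low:
        return "scale_in"
    return "hold"
```

The gap between the two thresholds prevents the tier from oscillating when utilization hovers near a single cutoff.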
Observability and Optimization
All data centers have various management products that provide assistance and data for making day-to-day decisions. Usually, these are products such as CA-Unicenter or IBM's Tivoli. They might help to automate some common tasks, such as adding a new user, but this section is only concerned with their ability to provide telemetry as part of an observability system.
The key to making optimization decisions, whether automatic or human initiated, is information. System telemetry is necessary to complete the feedback loop. The data center is a complex environment, and many areas require instrumentation to provide telemetry.
Many of these areas traverse servers, storage systems, networks, and administrative systems. Management systems are complex, but they are important as data centers move toward more automation and self-optimization, and they are necessary to facilitate making the right optimization decisions.
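The telemetry side of this feedback loop can be sketched as aggregating raw samples from several instrumented areas into a summary that an optimizer, human or automated, can act on. The metric names below are assumptions for illustration:

```python
# Sketch of telemetry aggregation for the optimization feedback loop:
# reduce raw readings per metric to an average an optimizer can compare
# against policy thresholds.
def summarize_telemetry(samples: dict) -> dict:
    """samples maps a metric name to a list of readings; returns averages."""
    return {
        metric: sum(readings) / len(readings)
        for metric, readings in samples.items()
        if readings  # skip metrics with no data rather than divide by zero
    }
```

In practice these summaries would be computed over rolling windows and fed to the same policies that drive flexing and rightsizing.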
If data centers are architected with strategic flexibility in mind, the ability to rightsize solutions based on their needs becomes quite simple. As mentioned, systems are rarely reconfigured after they are installed. However, using the N1 Grid technology, applications and their services can be repositioned to take better advantage of the hardware that is available.
Typically, application and server sizing is performed for worst-case maximum capacity to ensure a high quality of service. Generally, when an application is released, usage might be high, but it often tapers off over time. Using a combination of the N1 Grid software products, an IT administrator can move applications to lower-performance hardware until the additional capacity is needed. This is called rightsizing.
The preceding example of rightsizing could also be referred to as vertical rightsizing: increasing or decreasing the capacity of a specific server, based on business conditions. Rather than adding additional instances of a service, the service is moved from a smaller server to a larger server, or vice versa. Estimates for resource sizing at the time a project goes into production are often high, to add an extra level of QoS insurance, and initial sizing decisions are not always correct. This results in servers with low average utilization that could be better used elsewhere or, perhaps, never purchased at all.
Vertical rightsizing enables an administrator to use observability data to determine that a server is sized incorrectly. Application mobility provided by a product such as the N1 Grid SPS could be used to move the application to an adequately sized server. An example would be moving an application server from a four-CPU server to a two-CPU server.
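The vertical-rightsizing decision can be sketched as picking the smallest standard platform that covers observed peak load plus headroom. The platform sizes and 25-percent headroom below are illustrative assumptions, not sizing guidance:

```python
# Hypothetical vertical-rightsizing helper: given observed peak CPU
# utilization on the current server, recommend a CPU count from a set
# of standard platform sizes (here, the tiers used in the ERP example).
def recommend_cpus(current_cpus: int, peak_utilization: float,
                   sizes=(2, 4, 8, 24), headroom: float = 0.25) -> int:
    """Pick the smallest standard size that covers peak load plus headroom."""
    needed = current_cpus * peak_utilization * (1 + headroom)
    for size in sizes:
        if size >= needed:
            return size
    return sizes[-1]   # demand exceeds the largest platform; cap there
```

For example, a four-CPU server peaking at 35 percent utilization needs fewer than two CPUs even with headroom, which matches the four-CPU to two-CPU move described above.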
Systems can also be scaled horizontally. Horizontal rightsizing uses the N1 Grid software to add capacity by adding service instances from a spare pool, or to remove instances and return them to the pool. The ability to rightsize and optimize the data center hinges on the ability to observe the data center environment. Much of the discussion of the N1 Grid architecture has emphasized the control capabilities of the N1 Grid software products, but a strategic data center enabled with the N1 Grid technology also requires the instrumentation necessary to monitor the entire system.
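The spare-pool mechanics of horizontal rightsizing can be sketched as moving servers between a service's active list and a shared pool. The data model below is an assumption for illustration:

```python
# Sketch of horizontal rightsizing: grow or shrink a service's active
# instance list toward a target count, drawing from and returning to
# a spare pool. Inputs are copied so callers keep their originals.
def flex_horizontal(active: list, spare: list, target: int):
    """Return (active, spare) after flexing toward 'target' instances."""
    active, spare = list(active), list(spare)
    while len(active) < target and spare:
        active.append(spare.pop())   # provision a spare into the service
    while len(active) > target:
        spare.append(active.pop())   # return excess capacity to the pool
    return active, spare
```

In a real deployment, each append or pop would correspond to a full provisioning or deprovisioning plan, not just a list operation.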
Opportunities for Strategic Flexibility
How can strategic flexibility be used within the data center today? Like many of the target deployments for the N1 Grid software, applications and services with moderate-to-large amounts of change are prime opportunities.
Service Life Cycle Mobility
The abilities to create services, move services, and remove services from operation are prime examples of the goals of the N1 Grid software. These capabilities can be achieved today using the tools discussed in Chapter 9. This section includes examples using the N1 Grid software.
Increasing and Reducing Application Density
The N1 Grid software can be used to move applications and their services off smaller dedicated systems to larger shared systems to increase application density. Applications can initially be deployed and monitored on dedicated systems, and the IT staff can decide when to move the application to a shared platform. This increase in application density can provide additional cost savings and reduce management overhead by reducing the number of operating system instances.
FIGURE 10-1 shows the provisioning of applications using N1 Grid SPS. A common SAN is attached to all servers. The N1 Grid SPS first provisions the application on server A. IT decides that the application can be deployed on a shared server. The container that the application is provisioned within is reconfigured on the new server, B. Server B now has two applications, each running inside its own N1 Grid Container. Eventually, the second application needs additional server capacity (CPU) and is moved to another server, C.
Figure 10-1. Application Density and Mobility
Of course, being able to colocate applications on shared systems requires some additional work when designing and implementing them. Concerns such as account names, file systems, storage systems, and other items must be discussed and planned with the development and IT operations staff.
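The density-and-mobility flow in FIGURE 10-1 can be sketched as a move that succeeds only when the target server has headroom. The data model (dicts mapping applications to CPU demand) is an assumption for illustration:

```python
# Illustrative density/mobility sketch: move an application's container
# from one server to another only if the target has enough CPU headroom.
def move_container(servers: dict, app: str, src: str, dst: str, capacity: dict):
    """servers maps server -> {app: cpus}; returns True if the move happened."""
    need = servers[src][app]
    used = sum(servers[dst].values())
    if used + need > capacity[dst]:
        return False                 # not enough headroom on the target
    servers[dst][app] = servers[src].pop(app)
    return True
```

The same headroom check, run in reverse, is what later justifies moving an application off a shared server when its capacity needs grow, as with server C in the figure.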
Service Promotion and Life Cycle Management
The N1 Grid software can also be used to assist with the workflow of moving code and applications between development, test, and production environments. The N1 Grid software products can be integrated with a workflow or trouble-ticketing application. The integration enables the N1 Grid software to handle the process, while the workflow product handles the human interaction.
In FIGURE 10-2, the application configuration and binary files are stored as N1 Grid SPS components. As the application moves from development, to test, and finally to production, this same component is used with different configuration data depending on the environment. For example, the IP addresses for each application instance can be different for each environment.
Figure 10-2. Application Life Cycle Management
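Reusing one component across environments amounts to merging the component's common settings with per-environment overrides. The environment table, setting names, and IP addresses below are illustrative assumptions, not the N1 Grid SPS configuration format:

```python
# Sketch of one deployable component promoted across environments:
# the same binaries and base settings, with environment-specific
# overrides (here, an IP address and a debug flag).
ENV_CONFIG = {
    "development": {"ip": "10.0.1.10", "debug": True},
    "test":        {"ip": "10.0.2.10", "debug": True},
    "production":  {"ip": "10.0.3.10", "debug": False},
}

def render_config(component: dict, environment: str) -> dict:
    """Merge a component's common settings with environment overrides."""
    merged = dict(component)
    merged.update(ENV_CONFIG[environment])
    return merged
```

Because only the override table differs between stages, the artifact that was tested is, by construction, the artifact that reaches production.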
Increasing Utilization Using the N1 Grid Software
The N1 Grid software can be used to provision grid instances on systems that have available capacity. Other applications can be running on these systems. The Solaris 9 OS Resource Manager software can be used to provide containers that ensure adequate resource availability and performance. The grid (such as the N1 Grid Engine) could use a container for grid-based applications, consuming system resources as allowed by the Solaris 9 OS Resource Manager.
In FIGURE 10-3, server A is running a grid engine application dedicated at 100 percent of the available system shares or capacity. Server B is running two applications, a grid application and a web server, and each container has 50 percent of the available capacity. On server Z, the grid application has 10 percent of the capacity and is able to use more, if available, while the web server has 80 percent, if needed. Using the N1 Grid software enables administrators to define runtime limits (or policies) and to use available capacity, even on a system that would otherwise sit idle.
Figure 10-3. N1 Grid Software and Grid Applications
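The share-based behavior in the figure can be sketched as follows: each container is entitled to capacity in proportion to its shares, and capacity that one container leaves idle can be borrowed by the others. This is a simplified single-pass redistribution, an assumption for illustration rather than the actual Solaris fair-share scheduler algorithm:

```python
# Hedged sketch of share-based allocation: grant each container
# min(demand, fair share), then hand the leftover to still-hungry
# containers in one share-weighted pass.
def allocate(shares: dict, demand: dict) -> dict:
    """shares and demand map container -> number; returns granted fractions."""
    total = sum(shares.values())
    grant = {n: min(demand[n], shares[n] / total) for n in shares}
    leftover = 1.0 - sum(grant.values())
    hungry = [n for n in shares if demand[n] > grant[n]]
    hungry_shares = sum(shares[n] for n in hungry)
    for n in hungry:
        extra = leftover * shares[n] / hungry_shares
        grant[n] = min(demand[n], grant[n] + extra)
    return grant
```

With equal shares, a grid container wanting all the capacity while the web container uses only 20 percent ends up with the remaining 80 percent, mirroring the borrowing behavior described for the figure.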
Strategic flexibility (and improving data center efficiencies) does not need to be entirely implemented to gain incremental benefits. Sun Client Services promotes iterative design and implementation. This enables an overall road map to be developed that addresses key business drivers (KBDs) and critical-to-quality requirements (CTQs) with targeted implementation phases throughout the project.