Building Infrastructure Optimization

The first step in the building block process is the build phase.

Build: Leveraging common platforms (compute, network, and storage), enforced standards, and modular deployments to create the foundation of the "wire once, deploy forever" optimized infrastructure environment

Common platforms, enforced standards, and modular deployments are the foundation of the optimized infrastructure. In today's fast-paced IT world, the focus is usually on delivering new services more rapidly or delivering existing services more cost effectively. Too often, these two efforts are diametrically opposed. In the quest to deliver new services, elementary build decisions are often based on how fast something can be delivered rather than on the overall cost effectiveness. When attempting to deliver existing services more cost effectively, rapidly made build decisions often hinder the success of the delivery. This self-defeating circle can be broken by using common platforms, enforced standards, and modular deployments.

The emphasis on these elementary build concepts is about "non-dependent additive efficiency." Failing to establish one, two, or all three of the elementary build concepts will not cripple an IT organization; however, it will limit the organization's ability to operate more cost effectively. Even more importantly, build inefficiencies place limits on the ability of an IT organization to move to a highly flexible, service-centric solution like the N1 Grid.

Common platforms, enforced standards, and modular deployments are not magic bullets, nor are they strictly required for N1 Grid operating environments. You could build an N1 Grid operating environment without them, but it would not be as easy. These elementary build concepts do, however, provide an important foundational discipline to the infrastructure layer of the data center architecture. The opportunity to develop this disciplined foundation, and the value it can deliver to an evolving IT environment, should not be overlooked.

Earlier chapters provided insights into the many opportunities that are available to evolve an IT organization into a more cost-effective and flexible operation. Those chapters also discussed how IT organizations can vary greatly in their operational maturity. It is important to understand that some IT organizations are already very methodical in implementing elementary build concepts. Unfortunately, too many organizations do not have firmly established build concepts.

The unique thing about successful infrastructure optimization is that it becomes an almost invisible part of the overall operational maturity of the organization. There is no fanfare in this implementation, and no fancy software tool is required. Simple execution is the path to success.

In the following sections, the concepts of common platforms, enforced standards, and modular deployments are explored in more depth. Common, standard, and modular are simple words that can mean many things. With that in mind, explicit definitions are given for these simple words to provide the appropriate context for their relationship to infrastructure optimization.

Common Platforms

In the context of the N1 Grid system, a common platform is a multipurpose set of compute, network, and storage systems that are selected for their ability to deliver a wide range of business services at the required service levels. Using a minimal set of familiar systems and deploying them in more fundamental ways builds flexibility and ultimately delivers reduced cost and complexity.

Goals and Benefits of Common Platforms

The goal of common platforms is to reduce the cost and complexity of an IT environment without sacrificing the flexibility required to deliver business services. At first, this goal might seem trivial to accomplish, but achieving this goal is often challenging due to requirements to maintain flexibility. This section contains three areas of focus to help you drive the overall goal of achieving common platforms.

Limited, Yet Diverse

The primary goal of common platforms is to decrease cost by decreasing the variations in the overall IT environment. The wide array of available systems today enables individual developers to match 90 percent or even 100 percent of their requirements to a specific platform. As processor technology matures and chip multithreading becomes prevalent, there will be even more overlap among the performance of various systems. System choice is a good thing for customers who are looking to match, as closely as possible, business requirements to individual platform needs. Unfortunately, that same choice drives the excessive variation that is already prevalent in the IT environment.

Excessive variation in an IT environment drives up operational costs. An environment might consist of twenty different hardware platforms, running fifteen different operating systems, requiring ten different patch bundles that are installed in five different maintenance windows. Overall, this is a systems operations nightmare. Different hardware requires different training. Too many different operating systems require more people to run them. By delivering a wide variety of infrastructure choices, the IT organization creates a self-defeating spiral of cost issues that must then be resolved.

"Limited, yet diverse" is a concept focused on moving the system choice pendulum closer to IT efficiency and deriving a limited set of systems for business deployment, ultimately reducing variation. Accompanying this limited system selection is an understanding of the diverse IT needs of the business. The limited system selection must impact business flexibility as little as possible. These two concepts might seem at odds with each other, but with a little bit of compromise, great efficiencies can be achieved in the IT environment.

The business should agree to work with a hardware platform that provides an 80 percent match of its requirements, instead of the standard 95 percent match, but the commitment to reduce variation should not be limited to the business. The IT organization must be willing to compromise as well. The compromise might involve the IT organization delivering a platform with 110 percent of the business requirements in order to stay within its new limited selection list. This give and take is to be expected, and it is well justified by the overriding cost benefits of reducing variation.

There is no black-and-white answer to the question of what a "limited" system selection list would be. The idea would be to start with a minimalist approach of selecting only a small, medium, and large platform from the twenty or so platforms that can be used today. Reviewing the list against the needs of the business might require that the list be expanded to five or even seven platforms. Your interpretation of small, medium, and large depends greatly on the platforms you typically deploy. Your business might expect a 12-CPU platform to be large. Another business might see that platform as a medium platform. The whole idea is to build a limited selection list that best fits your needs.

An example platform selection list is provided in TABLE 8-1. There is no magic number. The whole idea is to reduce the number of platforms you are dealing with today. The more reduction in variation you have, the more inefficiencies can be driven out of the environment, and the greater real impact on the IT bottom line you will have.

Table 8-1. Platform Selection List

Tier                            Platform                Maximum configuration
Tier 1: web services            Sun Fire V240 server    2 CPUs and 8 Gbytes of memory
Tier 2: application services    Sun Fire V880 server    8 CPUs and 64 Gbytes of memory
Tier 3: database services       Sun Fire 6800 server    24 CPUs and 192 Gbytes of memory
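The selection list above lends itself to a simple decision rule: every new project maps to the smallest standard platform that covers its requirements. The following sketch illustrates that rule in Python; the helper function and the idea of failing loudly when nothing on the list fits are illustrative assumptions, not part of the original methodology.

```python
# Hypothetical sketch: the Table 8-1 selection list as data, plus a helper
# that picks the smallest standard platform covering a project's needs.
PLATFORMS = [
    # (tier, model, max_cpus, max_mem_gbytes)
    ("Tier 1: web services",         "Sun Fire V240", 2,  8),
    ("Tier 2: application services", "Sun Fire V880", 8,  64),
    ("Tier 3: database services",    "Sun Fire 6800", 24, 192),
]

def select_platform(cpus_needed, mem_gbytes_needed):
    """Return the smallest common platform that meets the requirements."""
    for tier, model, max_cpus, max_mem in PLATFORMS:
        if cpus_needed <= max_cpus and mem_gbytes_needed <= max_mem:
            return model
    # Nothing fits: force a review rather than silently adding variation.
    raise ValueError("No standard platform covers this requirement; "
                     "review the selection list before adding a new model.")

print(select_platform(2, 4))    # a small web workload
print(select_platform(12, 96))  # a large database workload
```

The point of the "no match" error is deliberate: requirements that fall outside the list trigger the compromise discussion described above, rather than quietly expanding the platform set.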

Familiar, Fundamental, and Flexible

"Familiar, fundamental, and flexible" is the second concept that promotes the goal of reducing cost and complexity in the IT environment through common platforms. Similar to the discussion of limited system selection, this concept is also focused on the dual roles of reducing variation while maintaining business flexibility. While limited system selection is focused on reducing the platforms used in IT environments to a minimal list, the concept of familiar, fundamental, and flexible is focused on expanding the traits of the platform selection list to facilitate a greater impact of the platforms. Working from the absolute minimum as a quantitative practice in no way means that the qualitative nature of that same list must decrease. The ideas presented here strive to improve that qualitative nature.

Familiarity is a trait to strive for in common platforms. It is defined as having an inherent, informed, and comfortable level of knowledge. When reducing platform selections to three or five systems, the platform selection list still has to provide the majority of the functionality of the platforms on the more expansive list. A focus on familiarity can help achieve this because the inherent knowledge of familiar platforms leads to broad improvements in many aspects of productivity. The need for familiarity does not exclude the introduction of new platforms. Familiarity with a platform can already exist, or it can be achieved. In either case, familiarity translates to reduced IT operations and administration training costs.

Fundamental is defined as "forming or serving as an essential component of a platform." The discussions focusing on IT maturity and IT services in earlier chapters referred to the ever-present nested nature of platforms. To deliver the best efficiency through reduced variation, the common platforms must strive to be the fundamental components in an overall IT system. Arguing over subcomponents of a common platform just leads to wasted time and increased diversity. Again, there must be compromise so that greater efficiency is achieved through a higher level of focus.

Flexibility is the concept of improving the qualitative nature of common platforms. Flexibility is defined as enabling adaptability and supporting responsiveness to change. Flexibility is not a trait focused on the limited-selection process (that is, the limit today is three systems and the limit tomorrow is five systems). That kind of flexibility would only detract from the efficiencies gained. Flexibility is a trait that is delivered to the overall IT environment by the concept of common platforms. Common platforms should be selected to collectively provide the greatest possible diversity to the business. An in-depth review of past deliverables and future projects can be used to develop an extensive list of potential platform requirements. Mapping those requirements against a limited platform selection list reveals the extensibility of those limited platforms. The greatest degree of requirements coverage by the smallest number of platforms equals the flexibility of the common platforms.

Next Best Thing

The final consideration in driving toward common platforms is not taking the concept to the extreme. As defined, the goal of common platforms is to reduce the cost and complexity of an IT environment without sacrificing the flexibility required to deliver business services. If you read the trade magazines and listen to industry presentations, every major vendor has the solution to your problem. Unfortunately, the IT industry is infamous for hyping the "next best thing." All too often, the next best thing is played up as the solution to all IT problems. History has shown that this is rarely the case.

In discussing the move to a limited yet diverse infrastructure, a move toward IT efficiency was supported. One current industry trend toward that efficiency is blades: an industry segment focused on compact, low-end, single-purpose servers that can be more efficiently managed in pooled environments. Blades have many appealing features; however, they are not the single answer to common platforms. Even with the focus on the practicality and extensibility of the architecture that a blades environment delivers, blades simply cannot do all things for all people. Whether it is blades or the next major technological innovation, care must be taken to constantly balance enthusiasm for the technology against the need to meet business deliverables. The next best thing for the industry does not automatically equal the next best thing for you.

Innovation is very important to the industry. However, the zest for the next best thing must always be tempered with practicality. IT provides a broad array of services to the business, delivered through a diverse set of technologies. Those diverse technologies have evolved for a reason: somebody needed what they delivered. The idea of common platforms is not to totally eliminate diversity but to strike a better balance. The point of the "next best thing" discussion is to highlight the potential dangers of taking change to the extreme. Achieving the required balance is not easy, but the payoff to the IT bottom line makes it worthwhile.

Storage and Network Infrastructure

Although the discussions so far have focused on compute platforms as systems and the need for common platforms at the server level, the same concepts apply to the storage and network infrastructure of the data center. In many cases, the storage and network components of the typical data center are already closer to the concepts of common platforms. As an example, the proliferation of SAN-based storage infrastructures has driven many of the common platform concepts. There is still a long way to go to vastly improve overall data center efficiency, and no component can be left unexamined.

Enforced Standards

A standard is something established by authority, custom, or general consent as a model or example; it is set up by authority as a rule for the measure of quantity, weight, extent, value, or quality. An enforced standard is a process for establishing a set of rules, along with the necessary periodic evolution of those rules, that ensures common platforms within the IT environment. This includes an understanding that the success of a common platform environment is directly dependent on both the wide dissemination and the consistent enforcement of those rules within the organization.

Goals and Benefits

Enforced standards are a natural follow-on to the discussion of common platforms. As defined, the goal of common platforms is to reduce the cost and complexity of an IT environment without sacrificing the flexibility required to deliver business services. A central point of that goal is to drive limited system selection within the IT environment. Having a goal by itself does not solve the problem; you also need a method to achieve that goal. In this case, the goal is common platforms, and the method is enforced standards. To help drive the achievement of that goal, the three supporting components of enforced standards are detailed in this section.

  1. Establishing the rules

    It might seem trivial to point out that the first step in achieving enforced standards is to establish the rules. Often, the components that are seen as the most trivial are also the very components that are most neglected. The value of standards within the IT industry is well understood. The execution toward achieving that value starts with establishing the rules.

    The opportunity and effort of establishing rules in IT organizations varies greatly. If there are no rules currently established, the opportunity is great, but so is the effort. If there is already a lengthy list of rules, then this effort might only require a tune-up. The centralization of IT management or the lack of it also affects the opportunity and the effort of establishing the rules. These many organizational variations help to point out that although establishing rules might seem like a trivial task, it is often far from it.

    Established rules can take many forms and cross many aspects of the overall IT environment. Rules can be nested inside other rules. There are big rules and not-so-big rules. Rules must exist to drive standardization; however, they should not stifle the ability of the IT organization to deliver services to the business. Similar to the discussion of common platforms, there is the need to strike a balance. A study of the rules already in place in a particular IT environment, coupled with leveraging the OMCM IT maturity information in previous chapters, provides the foundation for establishing rules.

    In establishing rules, you must also take into account the dynamic nature of the IT industry and the need to evolve the standards, as appropriate, over time. The standards process cannot be manipulated on a weekly basis and still be effective. A quarterly or semi-annual review process is probably sufficient to capture the needed change without unnecessary churn.

  2. Publishing the rules

    Establishing and maintaining standards relies heavily on change acceptance. Change acceptance depends on communication. The best standard is no good if no one knows it exists. Explicit and timely publication of the rules is key to enabling the concept of enforced standards.

    The best medium to use to publish the rules depends on how pervasively the rules need to be distributed. The most common method used today is to simply create an internal web site for the information. This is a great approach and will enable broad-based, up-to-date, and consistent communication within a company. If standards have to be communicated outside company boundaries, other methods must be used. Compact disc-based publication or just plain old paper copies will work. In moving to methods beyond a central distribution mechanism like a web page, specific attention must be paid to document control and versioning. Old and outdated published standards are often equivalent to no standards at all.

  3. Enforcing the rules

    Rule enforcement, or more appropriately, the lack of rule enforcement, is traditionally the single area where the whole concept of enforced standards fails. Well-established and widely published rules provide the necessary foundation to the standards process, but enforcement of those rules enables its overall success. An IT operations vice-president at Sun once described his goal as "ruthless standardization." That statement bluntly summarizes this concept that standards must be enforced to be effective, and they must be enforced all of the time every time.
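"Ruthless standardization" is easiest to sustain when enforcement is automated rather than left to periodic human review. The sketch below is a hypothetical illustration of that idea: a trivial audit that checks an inventory against the approved platform list. The inventory records, host names, and model strings are all invented for the example.

```python
# Illustrative sketch of automated rule enforcement: flag any deployed
# system whose model is not on the enforced standard platform list.
# All inventory data here is hypothetical.
APPROVED_MODELS = {"Sun Fire V240", "Sun Fire V880", "Sun Fire 6800"}

inventory = [
    {"host": "web01", "model": "Sun Fire V240"},
    {"host": "db01",  "model": "Sun Fire 6800"},
    {"host": "app17", "model": "Legacy-Box-9000"},  # a non-standard system
]

# Collect every host that violates the standard.
violations = [h["host"] for h in inventory if h["model"] not in APPROVED_MODELS]

for host in violations:
    print(f"STANDARDS VIOLATION: {host} is not on the approved platform list")
```

In practice, a check like this would run against a real asset database on every deployment and at regular intervals, so that exceptions surface immediately rather than during the next manual audit.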

Example of Enforced Standards

Standards can take many forms in an IT environment. The previous discussions focused on the standardization of common platforms to drive IT efficiency. An important aspect of common platforms is consistent installation of those platforms. Sun Services has a rigorous standard for system installation called the Enterprise Installation Standard (EIS). This "enforced standard" is a great example of how IT efficiency is achieved through enforced standards.

The EIS charter is to sustain a global, unified, and viable installation methodology that is consistent among Sun Client Services, strategic partners, and channel partners. The EIS mission is to drive the ongoing development and usage of EIS, resulting in consistent, high-quality, and cost-effective installations that speed time to deployment and provide the foundation for enhanced stability and performance.

The EIS methodology consists of:

  • A defined set of deliverables

  • Standard documentation for the EIS documentation tool

  • Technical installation checklists (see "Enterprise Installation Standard" in Appendix A on page 267)

  • EIS compact disc

The EIS has the following objectives:

  • Deliver consistency and quality in system installation

  • Improve availability

  • Minimize errors during installation

  • Establish standard patch installation and upgrades

  • Verify installations

  • Establish standard system handover and sign-off

  • Communicate best practices

  • Increase efficiency

  • Create standard installation documentation

The robustness of the EIS methodology exemplifies the concept of establishing the rules. The goal of publishing the rules is achieved through consistent and widespread compact disc subscription-based communication of the evolving standard. And finally, through Sun Services global management, there is complete support for the standard. Thus, the final component of rule enforcement is achieved. The EIS is a prime example of the execution of the enforced standards process, and it is a key enabler to realizing infrastructure optimization.

Modular Deployments

Modular means constructed with standardized units or dimensions for flexibility and variety in use. In the N1 Grid architecture context, modular deployment is a data center-level infrastructure build methodology that enables advanced service delivery. Leveraging the common platforms that are delivered through enforced standards, IT flexibility is established through the consistent and additive use of components. A modular approach also ensures delivery of the appropriate horizontal or vertical scalability that is required to maximize service efficiency.

Modular deployments represent the highest level of achieving infrastructure build optimization. Leveraging the components of common platforms and enforced standards, modular deployments provide even further efficiencies in the build process. Modules produce efficiency by creating a higher level of IT infrastructure deliverables out of the common platforms and enforced standards from which they are built.

Most IT services are created through the assembly of numerous individual components. If every project is going to require a number of systems to be assembled every time, it makes sense to build a methodology that supports modules in advance. Modular deployments enable IT services to be delivered much faster, but because of their dependence on common platforms and enforced standards, the services are delivered just as robustly as they are with individual components. The goal of modular deployments is greater IT infrastructure build efficiency. This methodology is expanded in the sections that follow.

Building Modules

Modules are business-appropriate preassembled collections of common platforms. The methodology for building modules is similar to the development of common platforms; no single set of modules is perfect for every IT environment, and the idea of achieving flexibility remains important. Studying the record of past and present system deployments, along with a sufficient study of future needs, should reveal common platform deployment trends. Using those trends, you should study how efficiencies can be improved through a higher level of integration of components. This enables modules to be built that are appropriate for the specific environment.

Deploying Modules Horizontally

Horizontally scaled infrastructures are usually characterized by multiple small systems and throughput workloads. Horizontal scaling supports applications that have loosely coupled parallel workloads and that leverage many independent operating system images, each deployed on low-cost hardware. Web servers, proxy servers, and other partitionable applications are all examples of applications that leverage horizontally scaled infrastructures. Foundation services are another example of services that are horizontally deployed. The unique characteristics of this environment can be leveraged to deliver efficiency through modular deployments.

Using the example platforms in TABLE 8-1, a small Tier 1 system would be a two-CPU Sun Fire V240. If common platforms have been adopted, then all one-CPU and two-CPU applications will use this server. As individual business needs are met, various web servers, proxy servers, and other small applications are deployed over and over on the same common platform model. Each individual installation requires a purchasing cycle, shipping, and a data center buildout. All of these efforts repeated over and over build huge time and cost inefficiencies into the IT service delivery process. If business trends show that the average consumption of small Tier 1 servers is ten per month, developing a modular deployment strategy makes a lot of sense.

In this specific case, thirty systems would be consumed, on average, every business quarter. Normal data center build procedures have fifteen of these systems installed in a rack, with other items like local network switches consuming additional rack space. A modular deployment plan would include ordering the racks fully populated and factory assembled every three months. In this way, the fifteen-server rack configuration becomes the module, and two of these racks are ordered each quarter. Data center build time is basically eliminated because these modules can be factory preassembled. Purchasing cycles are taken out of project timelines because modules are preordered. The common platform approach is further solidified because, with preordered and preassembled hardware, there is no opportunity for variance.
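The arithmetic above generalizes to any consumption rate and rack capacity. The sketch below captures it as a back-of-the-envelope calculation; the numbers are taken from the example in the text, and in practice you would substitute the trends measured in your own environment.

```python
# Back-of-the-envelope sketch of the horizontal modular-deployment
# arithmetic. Rates and rack capacity follow the example in the text.
import math

servers_per_month = 10    # average small Tier 1 server consumption
months_per_quarter = 3
servers_per_rack = 15     # standard data center rack build

# Quarterly demand, rounded up to whole factory-assembled rack modules.
servers_per_quarter = servers_per_month * months_per_quarter
racks_per_quarter = math.ceil(servers_per_quarter / servers_per_rack)

print(f"Order {racks_per_quarter} factory-assembled rack modules "
      f"({servers_per_rack} servers each) per quarter")
```

Rounding up matters: if demand were, say, thirty-two servers per quarter, the same formula would call for three rack modules, trading a little idle capacity for the elimination of per-project purchasing and build cycles.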

Again, the module that is the most efficient for an environment depends on the specific trends of that environment. Modular deployments in the horizontally scaled space hold great promise for reducing the cost, complexity, and time to market of ongoing projects.

Deploying Modules Vertically

It is easy to picture the efficiencies of a modular deployment strategy in the horizontal space. In the vertical space, modular deployments are not as common, but they can provide significant benefits. Vertically scaled infrastructures are usually characterized by large symmetric multiprocessing (SMP) systems and transactional workloads. Vertical scaling supports applications that have tightly coupled, large workloads and require a shared pool of processors and a single large memory instance. Databases, data warehouse applications, and scientific high-performance computing are all examples of applications that leverage vertically scaled infrastructures.

The vertical environment is distinctly different from a horizontal infrastructure and requires a different modular deployment methodology. Using the example platforms in TABLE 8-1, a large Tier 3 system is a 24-CPU Sun Fire 6800. Again, the key to developing a modular strategy is to study the specific usage trends of these systems. If business trends show that the average consumption of large Tier 3 servers is one per quarter, and they are installed as single standalone systems, a higher-level modular deployment probably does not make sense. However, if the consumption is two systems per quarter and each of those is deployed in a failover framework with another system, then modules make sense. The key differentiator with vertical systems is that modular efficiencies can be gained with much lower volumes.
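The rule of thumb in the paragraph above can be expressed as a simple decision heuristic: low-volume standalone systems do not justify a module, but even low-volume systems deployed in a repeatable failover pattern do. The function below is a hypothetical encoding of that heuristic, not a formula from the original methodology; both the function name and the thresholds are illustrative.

```python
# Hypothetical decision sketch for vertical modular deployments, encoding
# the rules of thumb from the text: repeatable failover patterns justify
# modules even at low volume; one-off standalone systems do not.
def vertical_module_makes_sense(systems_per_quarter, deployed_in_pairs):
    """Crude heuristic: True when a modular deployment likely pays off."""
    if deployed_in_pairs and systems_per_quarter >= 2:
        # A repeatable failover configuration is itself the module.
        return True
    # Standalone systems need higher volume before a module helps.
    return systems_per_quarter > 2

print(vertical_module_makes_sense(1, False))  # one standalone system
print(vertical_module_makes_sense(2, True))   # paired failover systems
```

As with the horizontal case, the real input to this decision is a study of your own consumption trends; the heuristic only formalizes where the break-even point sat in the chapter's example.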

The other difference is that modules do not have to be hierarchical preassembled collections of systems; the efficiencies of process alone can justify modular deployments. In the large system case, it is unlikely that one module of four systems will show up from the factory different from the other modules. The value is that if the entire configuration is treated as a module, the benefits of preordering and faster time to market still apply. The greatest value is driven from the consistency of deployment: a modular deployment methodology drives a higher level of system integration, which leads to a more robust overall installation.

In summary, modular deployments improve the overall efficiency of IT service delivery. Leveraging common platforms and enforced standards, modular deployments create a higher level of infrastructure integration. The efficiency of this methodology is one more important building block to achieving the overall foundation of infrastructure optimization.

Wire Once, Deploy Forever

Infrastructure optimization was defined as leveraging compute, network, and storage common platforms, enforced standards, and modular deployments to create the foundation of the "wire once, deploy forever" optimized infrastructure environment. With the concepts of common platforms, enforced standards, and modular deployments well understood, what exactly is "wire once, deploy forever?"

"Wire once, deploy forever" represents the goal of infrastructure optimization. This goal represents the idea that the infrastructure must support the overall goals of N1 Grid solutions, providing a services-based, yet highly dynamic, IT environment. The IT infrastructure should be ultimately flexible in its ability to deliver IT services. This flexibility should not, however, come at the expense of constantly modifying the underlying physical components of that infrastructure. If every time a new service is deployed, an operator has to physically rewire a switch, change a storage configuration, or manually reload an operating system, the goal of flexibility is lost. "Wire once, deploy forever" is an important goal that infrastructure optimization promises to deliver.

The goal of "wire once, deploy forever" cannot be achieved through infrastructure optimization by itself, but it cannot be achieved in its entirety without it. Equally important to this goal are infrastructure virtualization and provisioning. The next two sections in this chapter discuss those topics and how they add crucial elements to the infrastructure optimization process to round out the goal of overall infrastructure optimization.

Building N1 Grid Solutions: Preparing, Architecting, and Implementing Service-Centric Data Centers (2003)