INTRODUCTION


In March 2001, IBM Senior Vice President and Director of Research Dr. Paul Horn spoke about the importance and direction of autonomic computing before the National Academy of Engineering conference at Harvard University. He had a very direct message:

"The information technology industry loves to prove the impossible possible. We obliterate barriers and set records with astonishing regularity. But now we face a problem springing from the very core of our success—and too few of us are focused on solving it. More that any other IT problem, this one—if it remains unsolved—will actually prevent us from moving to the next era of computing. The obstacle is complexity . . . . Dealing with the single most important challenge facing the IT industry."[1]

This was the first time the world was told of IBM's autonomic computing program. Shortly afterward, Irving Wladawsky-Berger, IBM Vice President of Strategy and Technology for the Server Group, introduced the Server Group's project (known then by the internal IBM codename eLiza). Its stated goal was to provide "self-managing systems." The project was later expanded to many other divisions and business units within IBM; it was, and remains, a company-wide effort. Project eLiza would eventually become known as the autonomic computing project. Thus began the autonomic computing journey within IBM. Dr. Paul Horn's presentation was released as a manifesto, and as many as 75,000 copies were reportedly distributed to customers, press, media, and researchers worldwide. In the manifesto, Horn invited customers, competitors, and colleagues alike to accept the "Grand Challenge of building computing systems that regulate themselves."

The term Autonomic Computing derives from the human autonomic nervous system (ANS). In the same way that we take for granted the human body's management of breathing, digestion, and fending off germs and viruses, we will come to take for granted the computer's ability to manage, repair, and protect itself, as shown in Figure 1.1. That process has begun with autonomic computing.

Figure 1.1. Defining autonomic—self-configuring, self-optimizing, self-healing, self-protecting.

We can learn much from how the human body manages itself and apply the same techniques to software, creating system-management functions for commercial computing environments. This is the fundamental purpose of autonomic computing.
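To make the idea concrete, the following is a minimal sketch, not taken from the book, of the kind of self-managing control loop autonomic computing envisions: software that monitors a resource, detects a fault, and heals it without waiting for an operator. All names here (check_health, restart_service, the service names) are hypothetical placeholders.

    # Hypothetical sketch of an autonomic "self-healing" control loop.
    # The software monitors itself, analyzes what it observes, and carries
    # out a corrective action -- with no human in the loop.
    import random
    import time

    def check_health(service: str) -> bool:
        """Monitor: probe the service. Randomized here as a stand-in for a
        real health check (heartbeat, ping, log scan, and so on)."""
        return random.random() > 0.2  # roughly a 20% chance of a simulated fault

    def restart_service(service: str) -> None:
        """Execute: the self-healing action in this sketch."""
        print(f"[heal] restarting {service}")

    def autonomic_loop(services, cycles: int = 3, interval_seconds: float = 1.0) -> None:
        """Repeatedly monitor each managed service and heal any failures."""
        for _ in range(cycles):
            for service in services:
                if check_health(service):       # monitor and analyze
                    print(f"[ok]   {service} healthy")
                else:
                    restart_service(service)    # decide and act
            time.sleep(interval_seconds)

    if __name__ == "__main__":
        autonomic_loop(["web-frontend", "order-db"])

The point of the sketch is not the details but the shift of responsibility: the detection and the repair happen inside the system itself, which is what "self-managing" means in the paragraphs that follow.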

The grand challenge of autonomic computing is not just about one company. It is bigger than IBM. To be successful, it must be a joint effort of the entire technology industry, involving software and hardware vendors of all sizes. In addition, it will require the involvement of the academic and university community, as well as customers, who will be the ultimate users of this technology.

The Information Technology (IT) industry today faces the biggest threat to its continuing success: complexity. The systems we have developed and installed are becoming so complex that they verge on being unmanageable. Complexity is everywhere—in architectures, networks, programming languages, applications, and software packages. The IT industry is a victim of its own success. The complexity now facing most CIOs is a direct result of internal and external pressures—management, economic, market—to make everything cheaper, faster, and smaller. IT staff are now succumbing to the effects of this complexity. Data center staff, network administrators, and IT support staff wrestle daily with major problems of incompatibility and software failures, and they must deal with these problems manually.

The situation is further complicated by computers from multiple vendors, each with its own proprietary software, data protocols, transmission standards, and so on. This makes it difficult for IT personnel to manage an environment of diverse, heterogeneous infrastructures. Often, IT personnel cannot integrate different systems at all; even when they can, they still face difficulties adding new systems to existing environments and then configuring and managing them. As a result, most organizations now spend about three-fourths of their application deployment time and cost on integrating different systems. They must also deliver services across geographical and business boundaries, which means that, in addition to managing heterogeneous vendor and technical environments, organizations must put extensive effort into customizing technologies to meet the requirements of different IT policies while delivering unique services to customers. This complexity keeps the cost of managing the IT infrastructure high. At times it causes overruns in IT cost and delays in implementation, which in turn translate into lost productivity and missed business opportunities.

The second complexity factor is the increasing size of IT infrastructures. The accelerating pace at which myriad devices are being added to almost all IT networks (especially the Internet) further complicates an already sophisticated technological environment. Rapid advances in technology have led to significant improvements in price/performance ratios, making technology accessible to many. Today, corporations are no longer dealing with one person accessing one application on a local PC or a network server. Instead, organizations are seeing thousands, and eventually even millions, of users accessing the same service hosted on one or more servers, potentially at the same time. Relying on human intervention to manage this complexity carries a steadily increasing risk of failure as the scale and level of complexity extend beyond the comprehension of even highly skilled IT personnel.

The third complexity factor is the escalating costs of systems. The increasing complexity of integrated systems makes the job of maintaining and fixing systems more challenging than ever. In today's competitive world, where customers expect uninterrupted services, even a short breakdown can cost organizations millions of dollars in lost business. In fact, it has been reported that one-third to one-half of typical IT budgets are spent on preventing or recovering from crashes.

The fourth complexity factor is the shortage of skilled labor. Workers who have the knowledge to manage complex IT infrastructures are expensive and remain in short supply, even in today's depressed economy. According to a study by researchers at the University of California, Berkeley, labor costs can surpass infrastructure costs by a factor of 3 to 18, depending on the type of system. Relying on human intervention to manage the IT infrastructure may therefore not be viable in the long run; there may come a point where the available skilled labor is simply not enough to support e-business on demand. Complexity is discussed in more detail in Chapter 2.

The challenge is to simplify IT. The sheer size and complexity of current computing environments have hindered efforts to integrate systems, databases, applications, and business processes, and have substantially decreased management and operational efficiency. Making IT infrastructures flexible enough to respond quickly and effectively to dynamic customer requirements, marketplace shifts, and competitive demands remains a challenge. Autonomic computing can meet that challenge.
