We can compare the autonomic computing architectures that will emerge with the existing IT architectures we are familiar with today. For this comparison, we will label today's architectures "As Is" and future autonomic architectures "To Be."
We start with the basic elements of architectures, and Table 8.2 shows the results:
Today, detailed planning, discussion, and preparation are required to configure and install any major software package or application. A detailed work plan will be developed and agreed on, and resources will be tasked with checklists. Work may start well before the required installation, perhaps several weeks or months before if the configuration is large and complex. Several types of configuration and installation may be required: a new release of the software, for example, or a service pack with patches and fixes that need to be applied.
Detailed testing is required before the installation can be released, and results need to be reviewed and signed off on. Cross-impacts with other software packages will need to be tested and reviewed to ensure compatibility and normal operations. System performance is another factor in the equation; it will need to be measured and reviewed to ensure that the new configuration runs at the previous levels.
An autonomic computing architecture will configure and reconfigure itself automatically under varying system conditions. These conditions may be unpredictable and unexpected. Control will come under an SLA defined in precise detail. The self-configuration of the autonomic computing architecture will assess the risks involved. It may also contract for outside services in an on-demand environment, if needed.
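The reconfiguration cycle described above can be sketched as a simple assess-and-act loop. This is only an illustration of the idea, not a real product interface; the SLA fields, thresholds, and action names below are all hypothetical.

```python
# Hypothetical sketch of an autonomic self-configuration decision.
# All SLA fields, thresholds, and action names are illustrative only.

SLA = {"max_response_ms": 200, "allow_on_demand": True}

def assess_risk(condition):
    # Toy risk score: how far observed latency exceeds the SLA target.
    return condition["response_ms"] / SLA["max_response_ms"]

def reconfigure(condition):
    risk = assess_risk(condition)
    if risk <= 1.0:
        return "no change"
    if SLA["allow_on_demand"] and risk > 1.5:
        # Contract for outside (on-demand) capacity when risk is high.
        return "contract external service"
    return "retune local resources"

print(reconfigure({"response_ms": 150}))  # within the SLA target
print(reconfigure({"response_ms": 450}))  # well over target: go on demand
```

The point of the sketch is that the SLA, not an operator, supplies the decision criteria; the system only evaluates conditions against it.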
Problem Solving—As Is
Solving system problems and errors is an intense and time-consuming process that involves tracing events, logs, and software to find the root cause of the problem. It is a highly complex process that requires strong analytical ability; it is a high-tech detective game. There may be substantial pressure on IT people to solve a system problem quickly so that systems can be returned to operational status. When the problem is isolated or identified, a fix has to be put in place and tested. Again, this takes more time and effort.
An autonomic computing architecture will be able to recover from events that cause system failures or operational malfunctions. To achieve this, it will be required to understand the problems and their solutions or fixes. It will learn when new problems are detected and resolved. A systematic process can achieve this:
Identify the problem.
Determine if there are alternative compatible solutions.
Provide services as needed and on demand.
Install optimal substitutes.
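The four steps above can be sketched as a small recovery routine. The problem catalog, the fix names, and the `recover` function are hypothetical illustrations of the process, not a real implementation.

```python
# Hypothetical sketch of the four-step recovery process.
# Problem names and fixes are illustrative only.

KNOWN_FIXES = {
    "disk_full": ["purge_logs", "add_storage"],
    "service_down": ["restart_service"],
}

def recover(event):
    # 1. Identify the problem.
    problem = event["problem"]
    # 2. Determine if there are alternative compatible solutions.
    solutions = KNOWN_FIXES.get(problem, [])
    if not solutions:
        # A new problem: record it so the system can learn its fix later.
        KNOWN_FIXES[problem] = []
        return "escalate and learn"
    # 3./4. Provide services on demand and install the optimal substitute.
    return f"apply {solutions[0]}"

print(recover({"problem": "disk_full"}))      # known problem, apply first fix
print(recover({"problem": "unknown_fault"}))  # new problem, learn it
```

Note that an unrecognized problem is added to the catalog, which is the "learning" behavior the text describes in miniature.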
In the future, more sophisticated autonomic computing architectures will anticipate failures and respond accordingly, just as the human autonomic nervous system reacts when faced with a threat.
Software tools exist today to monitor systems and maintain optimal performance. These tools are sophisticated, with embedded algorithms and mathematical programming solutions, such as linear or integer programming, as well as modeling tools. Using these tools requires a substantial programming background and training, which the average Java or COBOL programmer does not have.
Autonomic computing architectures will automatically optimize all elements in the system. The system will review each element according to a schedule and frequency, and if any variation is detected, a solution will be implemented. Depending on the type of tuning needed, the element will be assigned a solution or additional resources. For example, if a Web site's traffic suddenly doubles to excessive loads, the self-optimizing feature will trigger an action to boost the service. The system may check prices and availability before purchasing and initializing extra services (memory and storage) to continue operations. This will have been previously defined in the SLA.
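The doubled-traffic example above can be sketched as a capacity check against SLA limits. The load ratio, budget figure, and capacity numbers are invented for illustration; a real system would draw all of these from its SLA and monitoring data.

```python
# Hypothetical sketch of a self-optimizing capacity check.
# The SLA thresholds and capacity units are illustrative only.

SLA = {"max_load_ratio": 0.8, "budget_per_unit": 50}

def optimize(current_load, capacity, unit_price):
    ratio = current_load / capacity
    if ratio <= SLA["max_load_ratio"]:
        return capacity  # running at previous levels; nothing to do
    # Check price and availability before purchasing extra capacity.
    if unit_price <= SLA["budget_per_unit"]:
        needed = int(current_load / SLA["max_load_ratio"]) + 1
        return needed    # boost the service within the SLA budget
    return capacity      # too expensive under the SLA; leave unchanged

print(optimize(100, 200, 40))  # load well under threshold
print(optimize(400, 200, 40))  # traffic doubled: capacity is raised
```

The price check is the key detail: the SLA, agreed on in advance, decides whether extra resources may be bought at all.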
Resilient technology is crucial to building secure computing environments, but technology alone cannot completely answer all threats as they emerge. Well-designed products; established, effective processes; and knowledgeable, well-trained operational staff are all required to build and maintain an environment that provides high levels of security and functionality. IT customers expect systems that are resilient to attack and that protect the confidentiality, integrity, and availability of the system's data at all times. Customers also expect to be able to control data about themselves and expect those using such data to adhere faithfully to fair information principles.
For businesses to remain competitive, efficient and secure networked computing is more important than ever. Autonomic computing architectures will protect against defined and known threats, viruses, and worms, as well as internal threats. Self-protection will detect a threat and recover from faults that might cause some part of the system to malfunction. It will extend those advantages to the system and any connected partners, customers, and suppliers.
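A minimal sketch of this detect-and-contain behavior follows. The threat signatures and the quarantine action are hypothetical stand-ins; real self-protection involves far richer detection than a lookup table.

```python
# Hypothetical sketch of self-protection against known threats.
# Signature names and actions are illustrative only.

KNOWN_THREATS = {"worm_x", "virus_y"}

def inspect(event):
    if event["signature"] in KNOWN_THREATS:
        # Detect the threat and isolate the affected component so a
        # fault cannot cascade to connected partners and customers.
        return {"action": "quarantine", "component": event["component"]}
    return {"action": "allow", "component": event["component"]}

print(inspect({"signature": "worm_x", "component": "mail"}))
print(inspect({"signature": "normal_traffic", "component": "web"}))
```

Containing the fault at the component level is what keeps a local malfunction from spreading across the connected systems the text mentions.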
Table 8.3 summarizes the aspects of current systems management versus the autonomic self-management of the future.
Table 8.3. A Summary of Current Management versus the Autonomic Self-Management of the Future

Self-configuration
As Is: Corporate data centers have multiple vendors and platforms. Installing, configuring, and integrating systems is time-consuming and error prone.
To Be: Automated configuration of components and systems follows high-level policies. The rest of the system adjusts automatically and seamlessly.

Self-optimization
As Is: Systems have hundreds of manually set, nonlinear tuning parameters, and their number grows with each release.
To Be: Components and systems continually seek opportunities to improve their own performance and efficiency.

Self-healing
As Is: Problem determination in large, complex systems can take a team of programmers weeks.
To Be: The system automatically detects, diagnoses, and repairs localized software and hardware problems.

Self-protection
As Is: Detection of and recovery from attacks and cascading failures is manual.
To Be: The system automatically defends against malicious attacks or cascading failures. It uses early warning to anticipate and prevent systemwide failures.