1.1 Progress of Management

1.1.1 Why Invest in Management?

The basic motivation for corporations to invest in enterprise management systems is growth. This growth creates pain points that can be eased by network and systems management. Some of the more common scenarios that cause pain:

  • When the IT resources become too numerous for computer operations staff to track with internally developed tools.

  • When the IT resources become too distributed to control from a single systems console.

  • When the IT resources are of such diverse types that management consoles proliferate in the operations center faster than operators can be found to staff them.

  • When the IT resources must be kept highly available.

All of these scenarios result in workloads that ultimately overwhelm the operations staff, like the poor fellow in Figure 1.1, causing stress and undermining their ability to keep the IT infrastructure available.

Figure 1.1. Harried Operators Can't Possibly Keep Up


As an example, these pain points are all felt by the leading-edge corporations building business-to-customer and business-to-business applications, also known as e-business, over the Internet. These applications are distributed not only geographically, but also across corporations. Each partner may be running its part of the application on different platforms, servers, and networks. Each partner can manage only part of the application. Each partner may be using several consoles to manage the network, server, middleware, and application. And finally, these applications must be available 24 hours a day, 7 days a week, or else both partners may lose revenue by the minute.

1.1.2 The Natural Evolution of Management

The first businesses invested in computer systems to make themselves more competitive. These first systems were expensive, to say the least. In order to maximize the return on their investment, corporations had to make sure that all of the mainframe's resources were in use around the clock. This meant that these systems had to be highly available. At first, because a corporation had only a few such systems, a computer operations and administration staff was able to monitor and operate them with sufficient efficiency.

Then, as the number of mainframes in use by a single corporation increased, the systems also became more complex. It became difficult for the staff to efficiently distribute the work and monitor the systems. The mainframes needed management systems in order to be highly available and efficiently used. The requirements of highly available systems resulted in management systems that were tightly coupled with the target operating systems.

These management systems were responsible for providing interfaces and status on every detail of the system's lifecycle. The systems monitored themselves and notified operators when they required intervention. They provided operators ways to configure the system before and during execution, to operate the systems during execution, to monitor the status of the system, and to recover from failures. The operations staff was responsible for starting the system, managing the system's workload and throughput, backing up the system, stopping the system, and maintaining the system. Because the number of mainframes deployed within a company was still fairly small, having one management system per mainframe and an operations staff to monitor it was a reasonable way to provide a highly available, efficiently used system.

Two forces upset this short-lived balance. Large enterprises began deploying large numbers of mainframes that were dispersed across the company's locations. At the same time, applications began to emerge that could be used by many employees simultaneously. Inexpensive 3270 [2] terminals were deployed to "bring the computer to the user." Networks were created to connect mainframes to each other and their far-flung users. Operations staffs were now responsible for monitoring and managing the network along with the systems.

The volume and distribution of the systems to be managed stressed operations staff. Many companies split the operations staff's responsibilities and organization between network operations and system operations. The resulting network control centers (NCCs) and operations control centers (OCCs) became distinct organizationally and geographically. Nonetheless, in the mid-1980s it became obvious that although all of the management consoles necessary to manage the complexes might fit within a large computer operations center, the number of human operators required to interact with these consoles could not fit into that same room. Likewise, the sheer volume of systems, networks, and applications to be monitored constantly by the operators guaranteed that problems would be missed and availability would be compromised. The split into NCC and OCC often made finding the root cause of a service outage an exercise in finger-pointing (see Figure 1.2), which merely exacerbated the frequency and length of the outages.

Figure 1.2. Problem Analysis Triggers Finger-Pointing


In answer to the need for more reliable management with fewer operators, new enterprise systems management products were developed by companies like IBM, [3] Candle, [4] and Computer Associates. [5] These products were written to manage a particular operating system or network, but they were not as tightly coupled as the original management systems. At the same time, as the networks grew larger and spanned greater distances, the demand for network management products rose sharply.

Up to this point, enterprise computing systems were fairly homogeneous. A single vendor's hardware, operating system, and networking hardware would be deployed and managed by a single management system. The introduction of UNIX [6] -based systems from Sun Microsystems [7] and Hewlett-Packard [8] provided affordable alternatives to IBM's expensive mainframes. IBM introduced its own line of UNIX systems to compete. Smaller businesses could now afford to use computing to make their operations more efficient. Likewise, smaller branches and independent departments of large enterprises could afford to own and operate their own computers.

This adoption of computing by "grassroots" businesses that did not have dedicated, professional IT staff forced management applications to focus on ease of use. The results were new improvements oriented toward usability rather than raw management function:

  • User interfaces. Graphical, more intuitive user interfaces that were easier to use for the non-IT professional, who had another real job to do

  • Automation. Automated recovery by the management system of a failed or badly performing resource

  • Recommendation. Problem determination and correction recommendations that were made available to users within the management system

  • Administration. Ease of installation, administration, and maintenance of the management system itself

The isolated departmental computer systems didn't stay isolated for very long. Within enterprises it became necessary to connect these systems to each other and to the mainframes that were the backbone of the enterprise. This connection made networks a critical aspect of the IT infrastructure to be managed. Not only were there more systems to be managed, but many of these were simple systems, not always designed and built to be manageable. Now the enterprise IT staffs were facing the challenge of managing large volumes of systems using many different hardware platforms, operating systems, and network technologies. This variety exponentially increased the complexity of keeping the systems available, highly utilized, connected, and responsive.

The advent of TCP/IP [9] networking made connecting large numbers of these disparate systems and their clients much easier. This new ease of connection triggered the development and deployment of distributed computing environments. Businesses began deploying applications in these distributed computing environments. Applications were no longer focused on one system, but across many systems. They were also no longer dependent on one well-managed system, but a whole slew of reasonably managed systems and the network between them. Applications in this environment, as well as the distributed-computing environments across which they executed, became much more complex. It was absolutely critical that this new type of extremely complex environment be managed to ensure high availability and utilization.

Operations staffs now had to learn to manage many different types of systems. This diversity meant that there was a desperate need for external management systems that could manage many different systems from many different vendors. These managers not only needed to do what their predecessors did (start, stop, monitor, and control the systems and network), but they had the added requirement of normalizing all of these disparate systems so that operations staff were protected from the incessant learning curve. Tivoli Systems, [10] Computer Associates, and BMC Software [11] are a few of the companies that have stepped up to supply management products for these challenges.
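The "normalization" these vendors pursued can be illustrated with a small, hypothetical Java sketch (the interface and class names below are invented for illustration and are not taken from any product): every resource type, however different internally, is exposed to operators through one common management interface, so a single console can start, stop, and query all of them the same way.

```java
// Hypothetical sketch: a common management contract that normalizes
// disparate resource types behind one interface.
interface ManagedResource {
    String name();
    void start();
    void stop();
    String status();   // e.g. "RUNNING" or "STOPPED"
}

// Two very different resource types, each adapted to the common contract.
class UnixServer implements ManagedResource {
    private boolean running;
    public String name()   { return "unix-server-01"; }
    public void start()    { running = true; }
    public void stop()     { running = false; }
    public String status() { return running ? "RUNNING" : "STOPPED"; }
}

class NetworkRouter implements ManagedResource {
    private boolean running;
    public String name()   { return "router-07"; }
    public void start()    { running = true; }
    public void stop()     { running = false; }
    public String status() { return running ? "RUNNING" : "STOPPED"; }
}

public class Console {
    public static void main(String[] args) {
        ManagedResource[] resources = { new UnixServer(), new NetworkRouter() };
        // One console loop handles every resource type identically;
        // operators never see the platform-specific details.
        for (ManagedResource r : resources) {
            r.start();
            System.out.println(r.name() + ": " + r.status());
        }
    }
}
```

This is, in spirit, the problem that standard management instrumentation sets out to solve: once resources share a management contract, the operator's learning curve no longer grows with each new vendor or platform.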



Java™ and JMX: Building Manageable Systems
ISBN: 0672324083