1.3 Evolution of operating systems


A modern operating system is computer software, firmware, and possibly hardware that interacts at a low level with the computer system's hardware components to manage the sharing of the computer's resources among various software applications. The goal of this systems software is fair sharing of those resources among all active jobs in the system. An operating system runs as the most privileged software element on the system and requires basic hardware support for interrupts and timers to exert control over executing programs.

Operating systems evolved over a long period, driven as much by the available hardware as by the needs of the applications running on the machines. In the beginning, few tools existed to make a computer useful to the general populace, and machines were relegated to a select few who could trudge through translating real problems into sequences of simple machine instructions. These instructions were at first written in microcode (the lowest form of software) or assembly code. In either case, there were no controls over what the coder did with the computer system. These early coders required great talent to master the art of turning a problem such as missile guidance into the software required to carry it out. They simply loaded the software into the machine at a specific memory location and signaled the hardware to begin processing the job. The machine would continue processing that same job until it detected an error (such as an overflow) or a stop command was issued. There were no automated means to switch from one job to another.

The first operating system problem systems programmers tackled was to develop a means of transitioning from one job to the next without stopping the machine, entering the new program, and starting it up again, as had been required in the past. The monitor, or batch operating system, concept provided the solution. These early systems let operators load several jobs at one time; the computer system then performed them sequentially. As one job completed, the operating system software would take control of the machine's hardware, set it up for the next job, and then release control to the new job, which ran to completion. Although this was a step in the right direction, the expensive computer systems of the day were still not efficiently utilized. New devices were being developed to aid input and output (early terminals) and storage (improved disk drives, tape units), but the control mechanisms to use them efficiently did not yet exist.
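The batch monitor's job-to-job transition can be sketched as a simple loop. This is an illustrative model only, not from the source: jobs are queued in advance, and the monitor regains control between jobs and starts the next one without operator intervention.

```python
# Minimal sketch of the batch "monitor" idea: queued jobs run
# sequentially, each to completion, with the monitor taking over
# only in the gaps between jobs.

from collections import deque

def run_batch(jobs):
    """Run queued jobs sequentially; 'jobs' is a deque of callables."""
    results = []
    while jobs:
        job = jobs.popleft()       # monitor takes control between jobs
        results.append(job())      # job runs to completion, uninterrupted
    return results

queue = deque([lambda: "payroll done", lambda: "billing done"])
print(run_batch(queue))            # jobs complete in submission order
```

Note that nothing here preempts a running job; that limitation is exactly what the time-sliced supervisors described later were built to remove.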

These new computer peripheral devices, which were coming into place in the 1970s, gave systems designers the impetus to find ways to utilize them more fully. One of the biggest drivers was the input/output terminal. Terminals demanded that the system provide mechanisms for operators to input code and data and to request compilation, linking, loading, and running of their jobs as if each were running alone on the machine, when in reality many users would be on the machine concurrently. The system management service developed to meet these demands was called the executive program.

The executive program provided policies and mechanisms for programs and devices such as terminals to run concurrently under the executive's watchful eye. Its function was to control interaction so that devices did not interfere with each other while running their jobs on the machine. Programs still, however, largely ran one at a time. This crude operating system provided many of the rudimentary services expected of an operating system and became the vehicle upon which many innovations were developed.

Research on these early executive programs led to supervisor programs, which took over more functions from the systems operators and coders. Supervisor programs provided rudimentary services for "swapping" programs in and out of primary memory and for controlling the CPU based on the concept of time slices. Following the success of these developments came the first true operating systems in the 1960s. Many of the services found in modern operating systems have their roots in these early systems.

Generically, an operating system provides the following services:

  1. Hardware management (interrupt handling, timer management)

  2. Interprocess synchronization and communications

  3. Process management

  4. Resource allocation (scheduling, dispatching)

  5. Storage management and access (I/O)

  6. Memory management

  7. File management

  8. Protection of system and user resources

An operating system begins with the management of the computer system's hardware. Hardware management requires the ability to set limits on how long resources are held and the ability to transfer control from an executing program back to the operating system. These functions are realized through hardware timers and interrupt services. A hardware timer is a counter that can be set to a specific count (time period). When the time expires, an interrupt signal is raised, which stops the processor, saves the processor's state (all active register contents, ALU registers, status registers, stack pointers, program counters, instruction registers, etc.), and turns control over to an interrupt service routine. The interrupt service routine examines the contents of predefined registers (e.g., the CPU status register or a predefined interrupt register) or preset memory locations and determines what operations to perform next. Typically, control is immediately turned over to the operating system's kernel to service the interrupt.
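The save-state-then-dispatch sequence just described can be sketched in a few lines. The structures and names here are hypothetical, a toy model rather than any real kernel's mechanism: the "processor" is a dictionary of registers, and the interrupt table maps a cause to its service routine.

```python
# Toy model of the timer-interrupt sequence: save processor state,
# examine a predefined status register, dispatch to the matching
# interrupt service routine, which hands control to the kernel.

def timer_interrupt(cpu, interrupt_table):
    """Save processor state, then dispatch to the registered ISR."""
    saved_state = dict(cpu)                # save registers, PC, SP, etc.
    cause = cpu["status_register"]         # ISR examines a predefined register
    handler = interrupt_table[cause]       # select the service routine
    return handler(saved_state)

def kernel_service(saved_state):
    # With the state saved, the kernel can later resume the program
    # exactly where it was stopped.
    return f"kernel resumes control at PC={saved_state['pc']}"

cpu = {"pc": 0x400, "sp": 0x7FF0, "status_register": "TIMER"}
table = {"TIMER": kernel_service}
print(timer_interrupt(cpu, table))
```

The saved state is what makes preemption safe: because every register is captured before the kernel runs, the interrupted program can be restarted later as if nothing had happened.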

The goals of these services and developments had one common thread: to make more efficient use of computing facilities. They were meant to provide convenient interfaces to users while hiding the details of the bare machine. The operating system provides transparent use of computing resources, relieving users and operators of the burden of knowing the particular system's configuration. It also protects users and systems programmers from accidental or malicious destruction, theft, or unauthorized disclosure.

The most obvious accomplishment of an operating system is hiding the details of the computing platform while making optimal use of its resources. Users need not know which particular device they are using, only that they need one of a certain class of device (e.g., a tape or disk). This shields users from the problem of failed components. If it were necessary to specify a particular device that was unavailable, work might not be able to go on. If the user can instead specify a class of device, any device of that type can meet the need, increasing the user's ability to get the job done.

The systems programmers and the hardware and software researchers did not end their quest for perfection at this point. More areas needed to be examined, and solutions had to be developed for the remaining system problem areas. Sharing of resources introduced its own set of problems. As systems became more usable, more uses were envisioned and implemented, and workloads began to strain the raw processing capacity of the machines. Designers needed to find ways to add resources to free up processing cycles for user applications. Software developers streamlined the computational complexity of the operating system, providing some relief. Hardware designers improved the computational capacity of the systems through improved architectures and instruction execution schemes. All such improvements, however, were only temporary.

The research and development community began to look for ways to improve performance given fixed or only marginally improving processor performance. The problem is that no matter how much we improve a processor, it still has a limited number of cycles available for applications and required system services. The initial concept was not to grow single-processor power, which is limited, but to add entire new processors. This concept was first examined as part of architecture improvements in the 1980s. The multiple processors could each run their own local operating systems, with added services allowing one system to request services (resources) from another machine that was idle or not fully utilized. Using these system calls, multiple processors running separate operating systems could be synchronized to perform a single, larger processing task in much less time. The effective improvement in performance, however, is not simply a multiplicative factor of the number of machines but some fraction of it, due to the added overhead of synchronizing the operations of the loosely coupled systems.
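The gap between ideal and effective speedup can be made concrete with a simple cost model. This model and its numbers are illustrative assumptions, not from the source: ideal parallel time is 1/N of the serial time, plus a fixed synchronization cost per run.

```python
# Illustrative model: synchronization overhead keeps the effective
# speedup of N loosely coupled processors below the ideal factor of N.

def effective_speedup(n_processors, sync_overhead):
    """Speedup when ideal time 1/N is inflated by a fixed sync cost."""
    serial_time = 1.0
    parallel_time = serial_time / n_processors + sync_overhead
    return serial_time / parallel_time

for n in (2, 4, 8):
    print(n, round(effective_speedup(n, sync_overhead=0.05), 2))
# 2 -> 1.82, 4 -> 3.33, 8 -> 5.71: each added processor buys less
```

With 5% of the serial time spent synchronizing, eight processors deliver well under a 6x speedup, matching the text's point that the gain is only some fraction of the machine count.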

These systems led to further research and experimentation. If loosely coupled machines could be grouped together to perform larger functions, why couldn't they be grouped in a tightly bound fashion to perform large computational applications that could not be done on a single machine? These new systems were called "distributed processors." What distinguishes this class of systems from its loosely coupled multiprocessor counterparts is the degree of cohesiveness the processors exhibit. The operating system is a single global operating system, spread across the machines in a variety of ways. In one configuration, the entire operating system is replicated at each site, with individual processors needing only additional state information indicating their function and their present state relative to the overall distributed system state. The second configuration partitions the operating system's components across the various sites of the distributed computer system. Each processor then has a specific function, for example, process scheduling or device access.
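The second, partitioned configuration amounts to a routing table from OS function to owning site. The node names and functions below are hypothetical, a toy sketch of the idea rather than any actual distributed OS:

```python
# Toy sketch of a partitioned distributed OS: each site owns one
# operating-system function, and service requests are routed to
# the owning site.

partition = {
    "process_scheduling": "node_A",
    "device_access": "node_B",
    "file_service": "node_C",
}

def route_request(function, partition):
    """Forward an OS service request to the site owning that function."""
    return partition[function]

print(route_request("device_access", partition))  # served by node_B
```

In the replicated configuration, by contrast, every site would hold the full table and only consult shared state to decide which role it is currently playing.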

These new operating system concepts are still being examined in the realm of research and have not yet found their way into mainstream systems. On the other hand, we have client/server processing, which uses a form of the multiprocessing operating system to provide remote access to resources. Client/server processing differs in that it does not enforce strict synchronization requirements. Many protocols have been developed to support this form of processing, which is prevalent in most of the products used today for remote computing over the Web.






Computer Systems Performance Evaluation and Prediction
ISBN: 1555582605
Year: 2002
Pages: 136
