
Mainframe/Host Network Model

The first networks can be traced back to the standard mainframe/host model, pioneered by IBM in the early 1960s. Centralized computing was the topology of choice during this era of networking. The protocol running in this environment was known as Systems Network Architecture (SNA), a time-sensitive, hierarchically structured protocol. SNA required large, powerful mainframes to operate properly within its standards.

The mainframe/host topology supported mission-critical applications that stored their data on the mainframe. Terminals, known as logical units, provided a common user interface for running the applications and accessing the data.

The terminals in this model were considered dumb in the sense that they had no capability to process data. Equipment known as cluster controllers formatted the screens and collected data for the terminals; they took their name from the cluster of terminals connected to each one. These controllers were in turn connected to communication controllers that handled the input and output processing needed by the terminals. The communication controllers were then connected to the mainframe computer that housed the company's applications and processors. Figure 1-1 illustrates a typical mainframe architecture.

Figure 1-1  Mainframe-centered network with remote terminals.

On a logical level, the mainframe model has many drawbacks when compared to the networks and applications of today. Application development was slow and ponderous, and the cost of computing power was very high; however, the mainframe model did have some benefits as well:

  Mainframe components were networked together with a single protocol, typically SNA.
  The largely text-based traffic consumed little bandwidth.
  Security was tight, with a single point of control.
  The hierarchical design produced highly predictable traffic flows.

Client/Server Model

During the 1980s, the computing world was rocked by the introduction of the personal computer (PC). This new class of intelligent workstation drove an industry-wide move away from dumb terminals, a shift whose ramifications continue to be felt to this day.

The introduction of the PC propelled the evolution of the mainframe model toward LANs. Quite a few token-ring networks were already deployed in support of mainframes, but they did not yet carry the large number of attached PCs that they do today. It was during this time that mainframe and client/server environments melded as the PC slowly displaced mainframe terminals. The PC's capability to act as both a terminal emulator and an intelligent workstation client blurred the line between host-based systems and client/server systems, because applications and data moved to dedicated workstations that became known as servers. This melding also produced early routers, known as gateways, that connected the various clusters and the evolving LANs back to the mainframes. Figure 1-2 shows a typical client/server-mainframe hybrid network.

Figure 1-2  Client/server-mainframe hybrid network.

The importance of digital WANs grew at this time, helped along by the PC's capability to perform the protocol processing required for different physical media types.

In the client/server model, computing power is less expensive and application development cycles are shorter; however, this architecture produces multiprotocol traffic and less predictable traffic flows. This is a drawback of the decentralized control of the client/server model with its dispersed architecture. Although the traffic can be uneven and bursty, it remains somewhat predictable because a hierarchical structure persists: clients communicate primarily with the server.
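The hub-and-spoke traffic pattern described above, in which many clients direct their requests to one central server that holds the applications and data, can be sketched with a minimal TCP exchange. This is an illustrative sketch only (a hypothetical echo-style service using Python's standard socket module), not anything from the original text:

```python
import socket
import threading

def serve_once(server_sock):
    """Accept a single client connection and answer its request."""
    conn, _ = server_sock.accept()
    with conn:
        data = conn.recv(1024)        # receive the client's request
        conn.sendall(b"ACK:" + data)  # central processing, then reply

# The "server" role: one central point that clients talk to.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # ephemeral localhost port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

# The "client" role: traffic flows toward the server, the
# predictable hierarchical pattern noted in the text.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b"report-request")
reply = client.recv(1024)
client.close()

t.join()
server.close()
print(reply.decode())  # ACK:report-request
```

Each additional client would open its own connection to the same server address, which is why client/server traffic, while bursty, still converges on a known point in the topology.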

As this model developed and evolved through the 1980s, it drove the development of technology in both the LAN and WAN arenas. This evolution of networking models produced the corporate intranets of today.

Typical Corporate Intranets

The typical intranet model of today has toppled traditional hierarchies of previous network models. The rapid changes in networking during the 1990s are astounding and far-reaching, as indicated by the following factors:

  Distributed processing enables many different intelligent devices to work together so that they meet and, in many cases, exceed the computing power of mainframes.
  Corporate legacy systems are downsized as movement continues away from mainframe-based computing.
  Increased demands for more bandwidth have created many emerging technologies that have pushed networks to the limit.
  Intelligent routing protocols and equipment intelligently and dynamically build routing databases, reducing design and maintenance work.
  Internetworking topologies have evolved as routers and bridges are used to network more and more mini and personal computers.
  Protocol interoperability between different LAN and WAN architectures has driven the adoption of common standards. As the two network types increasingly meld, the applicable protocols become more and more intertwined.
  The Telecommunications Act of 1996, known as Public Law 104-104, provided opportunities for telecommunications suppliers to increase bandwidth and competition.

All of these factors have raised many issues that must be considered by everyone involved in networking. Foremost is the issue of accelerated network growth. As sweeping changes have become standard, everyone must learn how to react to and manage this growth.


OSPF Network Design Solutions
ISBN: 1578700469
Year: 1998
Pages: 200
Author: Tom Thomas
