The Evolution of Data Communication

Data communication, or data networking, is the exchange of digital information between computers and other digital devices via telecommunications nodes and wired or wireless links. To understand the evolution of networking services, it is important to first understand the general computing architectures and traffic types, both of which have changed over time.

Data Communication Architectures

In the rather brief history of data networking, a variety of architectures have arisen, and each has had a unique impact on network variables. Table 6.1 shows a basic time line of the architectures that have prevailed during different periods. Each architecture has slightly different traffic characteristics and requirements for security and access control, and each has presented a different volume and consistency of traffic to the network. With each new computing architecture, there has been a demand for new generations of network services.

Table 6.1. Time Line of Data Networking Architectures

Time                       Architecture
1970s                      Stand-alone mainframes
Early 1980s                Networked mainframes
Early 1980s                Stand-alone workstations
Early to late 1980s        Local area networking
Mid-1980s to mid-1990s     LAN internetworking
Mid-1990s                  Internet commercialization
Mid- to late 1990s         Application-driven networks
Late 1990s                 Remote-access workers
Early 2000s                Home area networking

Stand-alone Mainframes

The 1970s was the era of stand-alone mainframes. These were very hierarchical networks, in which traffic had to follow prescribed paths. It was a time of terminal-to-host connectivity. At the bottom of the heap we had smart terminals, and a group of these terminals would report to an upper-level manager, often referred to as a cluster controller.

The cluster controller was responsible for managing the traffic flows in and out of its underlying terminals and for scheduling resources upstream from those terminals. In turn, a number of cluster controllers would be managed by yet another level of manager, called the front-end processor, which served as the interface between the underlying communications network and the applications stored in the host. That front-end processor ultimately led to the host, where the users' applications resided. In this era, a given terminal could only have access to the host that was upstream from it. If you wanted to make use of applications that resided on another host, you either needed a different terminal or had the pleasure of working with a variety of cables under your desk, changing the connections as needed.

Networked Mainframes

A major change occurred in the early 1980s: People began networking the mainframes. This was called multidomain networking, and it enabled one terminal device on a desktop to access numerous hosts that were networked together.

Stand-alone Workstations

Also in the early 1980s, stand-alone workstations began to appear in the enterprise. This did not generally happen because the data processing department had decided that it would move to workstations; rather, it happened because technically astute users began to bring their own workstations into the firm, and then they would ask the data processing or MIS (Management Information Services) department to allow connectivity into the corporate resources from their workstations.

LANs

As independent workstations began to penetrate the corporate environment, we started to study how data was actually being used. We found that 80% of the information used within a business came from within that location, and only 20% was exchanged with other locations or other entities. This told businesses that for the majority of their communications, they needed networks with a limited geographical span, and hence the local area network (LAN) evolved. LANs were defined as serving a business address: a given building or, at most, a campus environment.

A shift began to occur in how the network needed to accommodate data. In the mainframe environment, with its single-terminal-to-host communications, traffic volumes were predictable. The traffic levels between a given terminal and its host were known, so it was possible to make fairly accurate assumptions about the amount of capacity to provision between those two points. In the LAN environment, however, traffic patterns were very unpredictable. For example, in a business with 100 PCs on one LAN and 50 PCs on another LAN, the level of traffic on each LAN might change throughout the day: sometimes the volume was extremely high, sometimes there was nothing going on, and sometimes there was a steady, average stream. This unpredictability introduced a requirement for network services that could be flexible in how they addressed bandwidth requirements (that is, services that could provide bandwidth on demand). Frame Relay, which is discussed in Chapter 7, "Wide Area Networking," is one such network service. Frame Relay can deliver more bandwidth than you subscribe to; because traffic patterns fluctuate across users, overall usage should balance out by the end of the day. A simple sketch of this bandwidth-on-demand idea follows.
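
As a rough illustration of how bandwidth on demand can work, the following Python sketch models a link with a committed rate that can burst above it when the network has spare capacity, in the spirit of Frame Relay's committed and excess burst behavior. The class name, parameters, and thresholds are invented for this example; they are not drawn from any real Frame Relay API or standard.

```python
# Minimal sketch of bandwidth on demand: a subscriber has a committed
# information rate (CIR) but may burst above it when spare capacity exists.
# All names and numbers here are illustrative assumptions, not a real API.

class BurstableLink:
    def __init__(self, cir_kbps, burst_kbps):
        self.cir_kbps = cir_kbps        # guaranteed (subscribed) rate
        self.burst_kbps = burst_kbps    # maximum rate when the network is lightly loaded

    def admitted_rate(self, offered_kbps, network_load):
        """Return the rate the network carries for this subscriber.

        Traffic up to the CIR is always carried; traffic above the CIR is
        carried only when overall network load is light.
        """
        if offered_kbps <= self.cir_kbps:
            return offered_kbps
        if network_load < 0.5:                      # spare capacity: allow bursting
            return min(offered_kbps, self.burst_kbps)
        return self.cir_kbps                        # congested: fall back to the CIR

link = BurstableLink(cir_kbps=256, burst_kbps=1024)
print(link.admitted_rate(offered_kbps=800, network_load=0.2))   # bursts to 800
print(link.admitted_rate(offered_kbps=800, network_load=0.9))   # held to 256
```

In an actual Frame Relay service, traffic above the committed rate is marked discard eligible rather than simply held back, but the averaging effect over the day is the same idea.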

So throughout the mid- to late 1980s, the major design emphasis was on deploying LANs, which helped speed up corporate communications, make the workforce more productive, and reduce the costs associated with sharing software and hardware resources.

LAN Internetworking

As LANs were popping up in enterprises all over, it became necessary to find a way to internetwork them. Otherwise, islands of knowledge existed on a given LAN, but those islands couldn't communicate with other departments, clusters, or divisions located elsewhere in the enterprise. LAN internetworking therefore took place throughout the late 1980s and early to mid-1990s, bringing with it the evolution, introduction, and rapid penetration of interconnection devices such as hubs, bridges, routers, and brouters, whose purpose is to interconnect separate networks.

Internet Commercialization

In the mid-1990s, yet another alternative for data networking came about, with the commercialization of the Internet. Before about 1995, the Internet was mainly available to the academic, research, and government communities. Because it presented a very cost-effective means for data networking, particularly with text-based, bursty data flows, it held a significant appeal for the academic and research community. However, until the introduction of the World Wide Web, the Internet remained largely an academic platform. The intuitive graphical interfaces and navigational controls of the WWW made it of interest to those without UNIX skills. The Internet was particularly useful for applications such as e-mail, for which there was finally one standard that was open enough to enable messaging exchanges between various businesses that used different systems.

Application-Driven Networks

Toward the mid- to late 1990s, we began to see the development of advanced applications, such as videoconferencing, collaboration, multimedia, and media conferencing. This caused another shift in how we thought about deploying networks. In the days of hierarchical networks, decisions about network resources were based on how many devices there were and how far away they were from one another. But as advanced applications emerged that had great capacity demands and could not tolerate delay or congestion, those applications began to dictate the type of network needed. The architecture therefore shifted from being device driven to being application driven.

Remote-Access Workers

In the late 1990s, with the downsizing of information technology (IT) departments, both in terms of physical size and cost, it became much easier to deploy IT resources to the worker than to require the worker to come to the IT resources. Remote access, or teleworking, became a frequently used personnel approach that had advantages in terms of enhanced employee productivity, better morale, and savings in transportation costs. Also, as many large corporations downsized, workers became self-employed and worked from small offices/home offices. This remote-access architecture focused on providing appropriate data networking capabilities to people in their homes, in hotels, in airports, and in any other place where they might need to access the network. Facilities were designed specifically to authenticate and authorize remote users and to allow them access to corporate LANs and their underlying resources.

HANs

Today, individuals are increasingly using their residences as places to carry out professional functions, and they increasingly need to network intelligent devices that are used for work, educational, or leisure activities. Therefore, home area networks (HANs) are becoming a new network domain that needs to be addressed; these days we don't need to think about just the last mile, but about the last 328 feet (100 meters)!

Data Communication Traffic

As the architecture of data networks has changed, so have the applications people use, and as applications have changed, so has the traffic on the network. This section talks about some of the most commonly used applications today and how much bandwidth they need, how sensitive they are to delay, where the error control needs to be performed, and how well they can tolerate loss.

The most pervasive, frequently used application is e-mail. Today, it's possible to append an entire family vacation photo album to an e-mail message, and such a massive file requires a lot of bandwidth. But e-mail in its generic text-based form is a low-bandwidth application that is delay insensitive. If an e-mail message gets trapped somewhere in the Net for several seconds, its understandability will not be affected, because by the time you view it, it will have been placed in its entirety on the server where your mailbox resides, waiting for you to pick it up. Another issue you have to consider with e-mail is error control. Networks today rarely perform error control because it slows down the traffic too much, so error control and recovery need to be handled at the endpoints: internetworking protocols deployed at the end nodes, such as the Transmission Control Protocol (TCP), detect errors and request retransmissions to fix them, as the sketch below illustrates.
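
To make endpoint error control concrete, here is a minimal stop-and-wait sketch in Python: the sender attaches a checksum to each message, the receiver discards anything that arrives corrupted, and the sender retransmits until an intact copy gets through. This illustrates the checksum-and-retransmit principle that TCP applies; it is not TCP itself, and every function name here is invented for the example.

```python
import random
import zlib

# Minimal sketch of endpoint error control: the network does not correct
# errors, so the sender checksums each message and retransmits until the
# receiver gets an intact copy. This shows the principle behind
# checksum-and-retransmit recovery; it is not a real TCP implementation.

def send_with_checksum(payload: bytes) -> bytes:
    checksum = zlib.crc32(payload).to_bytes(4, "big")
    return checksum + payload

def unreliable_network(frame: bytes) -> bytes:
    # Flip one byte 30% of the time to simulate corruption in transit.
    if random.random() < 0.3:
        i = random.randrange(len(frame))
        frame = frame[:i] + bytes([frame[i] ^ 0xFF]) + frame[i + 1:]
    return frame

def receive(frame: bytes):
    checksum, payload = frame[:4], frame[4:]
    if zlib.crc32(payload).to_bytes(4, "big") != checksum:
        return None               # corrupted: receiver drops the frame
    return payload                # intact: deliver it

message = b"Meeting moved to 3 p.m."
attempts = 0
delivered = None
while delivered is None:          # sender keeps retransmitting until delivery
    attempts += 1
    delivered = receive(unreliable_network(send_with_checksum(message)))
print(f"delivered after {attempts} attempt(s): {delivered.decode()}")
```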

Another prevalent data networking application is transaction processing. Examples of transaction processing include a store getting approval for a credit card purchase and a police officer checking a database for your driver's license number to see whether you have any outstanding tickets. Transaction processing is characterized by many short inputs and short outputs, which means it is generally a fairly low-bandwidth application, assuming that it involves text-based messages. Remember that if you add images or video, the bandwidth requirements grow substantially. Thus, if a police officer downloads a photo from your license, the bandwidth required will rise. Transaction processing is very delay sensitive because with transactions you generally have a person waiting for something to be completed (for example, for a reservation to be made, for a sales transaction to be approved, for a seat to be assigned by an airline). Users want subsecond response time, so with transaction processing, delays are very important, and increased traffic contributes to delay. For example, say you're at an airport and your flight is canceled. Everyone queues up to get on another flight. The agents work as quickly as they can, but because of the increased level of traffic as more people all try to get on the one available flight, everything backs up, and you have to wait a long time for a response to get back to you. With transaction processing, you have to be aware of delay, and error control is the responsibility of the endpoints. Transaction processing is fairly tolerant of losses because the applications ensure that all the elements and records associated with a particular transaction have been properly sent and received before committing the transaction to the underlying database.
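
As a simple illustration of why transaction processing tolerates loss, the sketch below commits a transaction only when every expected record has arrived; anything incomplete is rejected and re-requested rather than half-committed. The record fields and function names are invented for this example and do not correspond to any real payment system.

```python
# Minimal sketch: a transaction is committed only when every expected
# record has arrived; missing pieces cause the whole transaction to be
# re-requested rather than partially applied. The field names below are
# invented for illustration.

EXPECTED_FIELDS = {"card_number", "amount", "merchant_id", "approval_code"}

def try_commit(records: dict, database: list) -> bool:
    """Commit the transaction only if it is complete; otherwise reject it."""
    missing = EXPECTED_FIELDS - records.keys()
    if missing:
        # A lost message just means the transaction is not committed yet;
        # the endpoints re-request the missing pieces.
        print(f"incomplete transaction, re-requesting: {sorted(missing)}")
        return False
    database.append(dict(records))
    return True

db = []
try_commit({"card_number": "4111", "amount": 42.50}, db)          # rejected
try_commit({"card_number": "4111", "amount": 42.50,
            "merchant_id": "M-77", "approval_code": "OK-9031"}, db)  # committed
print(len(db), "transaction(s) committed")
```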

Another type of application is file transfer, which involves moving a large bulk of data from one computer to another. File transfer is generally a high-bandwidth application because it deals with a bulk of data. File transfer is machine-to-machine communication, and the machines can work around delay factors, as long as they're not trying to perform a real-time function based on the information being delivered. So file transfer is a passive activity: it's not driving process control, and it can tolerate delay. File transfer can also tolerate losses. With file transfer, error control can be performed at the endpoints.

Two other important applications are interactive computing and information retrieval. Here bandwidth is dependent on the objects that you are retrieving: If it's text, it's low bandwidth; if it's pictures, good luck in today's environment. Interactive computing and information retrieval are delay sensitive when it comes to downloads, so higher speeds are preferred. Real-time voice is a low-bandwidth application but is extremely delay sensitive. Real-time audio and video require medium, high, and even very high bandwidth, and are extremely delay sensitive. Multimedia traffic and interactive services require very high bandwidth, and they are extremely delay sensitive and extremely loss sensitive.

Anything that is text-based, such as e-mail, transaction processing, file transfer, and even text-based database access, is fairly tolerant of losses. But with real-time traffic, such as voice, audio, or video, losses cause severe degradation in the application. Going forward, for the new generation of networks, the ITU suggests that a network should have no more than 1% packet loss (see www.itu.org), and that's far from the case in today's networks. The public Internet experiences something like 40% packet loss during some times of day.

 


