The Evolution of Data Communications

Data communication, or data networking, is the exchange of digital information between computers and other digital devices via telecommunications nodes and wired or wireless links. To understand the evolution of networking services, it is important to first understand the general computing architectures and traffic types, both of which have changed over time.

Data Communications Architectures

In the rather brief history of data networking, a variety of architectures have arisen, and each has had unique impacts on network characteristics. Table 5.1 shows a basic time line of the architectures that have prevailed during different periods. Each architecture has slightly different traffic characteristics and requirements in terms of security and access control, and each has presented a different volume and consistency of traffic to the network. As described in the following sections, each new computing architecture has created a demand for new generations of network services.

Table 5.1. Time Line of Data Networking Architectures

Time                      Architecture
1970s                     Standalone mainframes
Early 1980s               Networked mainframes
Early 1980s               Standalone workstations
Early to late 1980s       Local area networking
Mid-1980s to mid-1990s    LAN internetworking
Mid-1990s                 Internet commercialization
Mid- to late 1990s        Application-driven networks
Late 1990s                Remote-access workers
Early 2000s               Home area networking
Mid-2000s                 Personal area networks and the Internet as corporate backbone

Standalone Mainframes

The 1970s was the era of standalone mainframes. These were highly hierarchical networks in which traffic had to follow prescribed paths. It was a time of terminal-to-host connectivity. At the bottom of the heap were smart terminals; a group of these terminals would report to an upper-level manager, often referred to as a cluster controller. The cluster controller was responsible for managing the traffic flows in and out of its underlying terminals and for scheduling resources upstream from those terminals. In turn, a number of cluster controllers would be managed by yet another level of manager, called the front-end processor, which served as the interface between the underlying communications network and the applications stored in the host. That front-end processor ultimately led to the host, where the users' applications resided.

In that era, a given terminal could have access only to the host upstream from it. To make use of applications that resided on another host, a user either needed a different terminal or had the pleasure of working with a variety of cables under his or her desk, changing the connections as needed.

Networked Mainframes

A major change occurred in the early 1980s: People began networking the mainframes. This was called multidomain networking, and it enabled one terminal device on a desktop to access numerous hosts that were networked together.

Standalone Workstations

Also in the early 1980s, standalone workstations began to appear in the enterprise. This did not generally happen because the data-processing department had decided to move to workstations; rather, technically astute users began to bring their own workstations into the firm and then asked the data-processing or management information services (MIS) department to allow connectivity from those workstations into corporate resources, which was generally accommodated via dialup modems or X.25.

LANs

As independent workstations began to penetrate the corporate environment, people started to study how data was actually being used. They found that 80% of the information used in a business came from within that location and only 20% was exchanged with other locations or other entities. This told businesses that for the majority of their communications, they needed networks with a limited geographical span, and hence evolved the local area network (LAN). LANs were defined as serving a business address: a given building or, at most, a campus environment.

A shift began to occur in how the network needed to accommodate data. In the mainframe environment, with its single-terminal-to-host communications, traffic volumes were predictable. The traffic levels between a given terminal and its host were known, so it was possible to make reasonable assumptions about the amount of capacity to provision between those two points. In the LAN environment, however, the traffic patterns were very unpredictable. For example, in a business with 100 PCs on one LAN and 50 PCs on another, the level of traffic on each LAN might change throughout the day: sometimes extremely high volume, sometimes nothing at all, and sometimes a steady, average stream. This unpredictability introduced a requirement for network services that could be flexible in how they addressed bandwidth requirements (i.e., services that could offer bandwidth on demand). Frame Relay, which is discussed in Chapter 7, "Wide Area Networking," is one such network service. Frame Relay can provide more bandwidth than the user subscribes to; because traffic patterns fluctuate, the overall usage should balance out over the course of the day.
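To make the bandwidth-on-demand idea concrete, here is a minimal sketch, assuming a hypothetical 256 kbps committed information rate (CIR) and invented hourly traffic samples, of how bursts above the subscription can still average out within it:

```python
# Minimal sketch of Frame Relay-style bandwidth on demand.
# The CIR and the hourly traffic samples are invented for illustration.

CIR_KBPS = 256  # committed information rate the customer subscribes to

# Hypothetical average offered load (kbps), sampled each hour of a workday
hourly_load = [20, 480, 310, 90, 0, 150, 600, 240, 70, 88]

for hour, load in enumerate(hourly_load):
    status = "burst above CIR" if load > CIR_KBPS else "within CIR"
    print(f"hour {hour}: {load:4d} kbps ({status})")

average = sum(hourly_load) / len(hourly_load)
print(f"average load: {average:.0f} kbps vs. CIR of {CIR_KBPS} kbps")
# Individual hours burst past the committed rate, but the day's average
# stays under it -- the balancing out that the text describes.
```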

Throughout the mid- to late 1980s, the major design emphasis was on deploying LANs, which helped speed up corporate communications, make the workforce more productive, and reduce the costs associated with sharing software and hardware resources.

LAN Internetworking

As LANs were popping up in enterprises all over, it became necessary to come up with a tool for internetworking them. Otherwise, islands of knowledge existed on a given LAN, but those islands couldn't communicate with other departments, clusters, or divisions located elsewhere in the enterprise. LAN internetworking therefore took place throughout the late 1980s and early to mid-1990s, bringing with it the evolution, introduction, and rapid penetration of interconnection devices such as hubs, bridges, routers, and brouters, whose purpose is to internetwork between separate networks.

Internet Commercialization

In the mid-1990s, yet another alternative for data networking came about with the commercialization of the Internet. Before about 1995, the Internet was mainly available to the academic, research, and government communities. Because it presented a very cost-effective means for data networking, particularly for text-based, bursty data flows, it held significant appeal for the academic and research community. However, until the introduction of the World Wide Web, the Internet remained largely an academic platform. The intuitive graphical interfaces and navigational controls of the WWW made it of interest to those without UNIX skills (that is, anyone with a PC running some version of Windows) and hastened the demise of just about every other form of Internet communications, including Archie, Gopher, WAIS, and Veronica. The Internet was particularly useful for applications such as e-mail, for which there was finally one standard open enough to enable messaging exchanges between businesses that used different systems.

Application-Driven Networks

The mid- to late 1990s began to see the development of advanced, bandwidth-hungry applications, such as videoconferencing, collaboration, multimedia, and media conferencing. This caused another shift in how people thought about deploying networks. In the days of hierarchical networks, decisions about network resources were based on the number of devices and their distance from one another. But as advanced applications (which had great capacity demands and could not tolerate delays or congestion) began to be developed, those applications began to dictate the type of network needed. Therefore, the architecture shifted from being device driven to being application driven.

Remote-Access Workers

In the late 1990s, with the downsizing of IT departments, both in terms of physical size and cost, it became much easier to deploy IT resources to the worker than to require the worker to come to the IT resources. Remote access, or teleworking, became a frequently used personnel approach that had advantages in terms of enhanced employee productivity, better morale, and savings in transportation costs. Also, as many large corporations downsized, workers became self-employed and worked from small offices or home offices. This architecture featuring remote-access workers focused on providing appropriate data networking capabilities to people in their homes, in hotels, in airports, and in any other place where they might need to access the network. Facilities were designed specifically to authenticate and authorize remote users and to allow them access to corporate LANs and their underlying resources.

HANs

Today, individuals are increasingly using their residences as places to carry out professional functions, and they need to network intelligent devices used for work, educational, or leisure activities. Therefore, home area networks (HANs) are becoming a new network domain that needs to be addressed; these days, we don't need to think about just the last mile, but about the last 328 feet (100 m)! Of course, what this really means is that we are bringing LAN technology into the home, with the likes of Wi-Fi being extremely popular at this time. (HANs are discussed in more detail in Chapter 12, "Broadband Access Alternatives.")

PANs

A personal area network (PAN) is a network that serves a single person or a small workgroup and is characterized by limited distance, limited throughput, and low volume. PANs are used to transfer data between devices such as a laptop or PDA and a desktop machine, server, or printer. They usually support virtual docking stations, peripheral sharing, and ad hoc infrared links. An increasing number of machine-to-machine (m2m) applications are emerging, as are applications involving wearables and even implants; their key benefits cannot be realized without PANs. In the case of wearables and implants, the PAN exists on, or even in, the person. In fact, when talking about wearables, some refer to fabric area networks (FANs), in which the network is embedded in the fabric a person wears.

The Internet as Corporate Backbone

Another trend, just beginning to emerge, is the disappearance of the corporate LAN. In areas of the world where bandwidth is plentiful and cheap, some forward-thinking organizations have begun shrinking their LANs and relying on the Internet to play the role of corporate backbone. These companies have migrated many of their applications to Web-based services housed in (often outsourced) data centers. The applications are owned and maintained by the corporation; all access is via Internet-connected Web browser and is authenticated against a corporate directory server. These organizations no longer face the burden and expense of maintaining complicated corporate networks sprawling with various "extranets" and multilevel demilitarized zones (DMZs). The rise of high-speed ubiquitous Internet connections and reliable portable authentication has finally made such "deperimeterization" possible.

Data Communications Traffic

As the architecture of data networks has changed, so have the applications people use, and as applications have changed, so has the traffic on the network. This section talks about some of the most commonly used applications today and how much bandwidth they need, how sensitive they are to delay, where error control needs to be performed, and how well the applications can tolerate loss.

The most pervasive, frequently used applications are Web surfing and e-mail. Various forms of Web applications have dramatically different network requirements. With standard text-based exchanges (i.e., the downloading of largely text-based pages), Web surfing is not a highly challenging application. But as mentioned in the Introduction and emphasized throughout Part III, "The New Generation of Networks," the introduction of applications that include images, animation, real-time voice, real-time video, streaming media, and interactivity creates the need for greater capabilities in the network. This includes the need for more bandwidth to support the demanding interactive audio/video realm and for mechanisms that address quality of service (QoS), enabling control over priorities, latencies (delays), and packet losses. As discussed in Part III and Part IV, "Wireless Communications," optical and wireless broadband technologies are delivering more and more bandwidth, and QoS techniques are increasingly finding their way into networks.
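As one concrete illustration of a QoS mechanism, the following minimal sketch (with invented traffic classes and packets) shows strict-priority queuing, in which delay-sensitive traffic is always transmitted ahead of best-effort traffic:

```python
from collections import deque

# Minimal sketch of one common QoS mechanism, strict-priority queuing:
# the scheduler always drains higher-priority traffic (e.g., real-time
# voice) before best-effort traffic. Class names and packets are invented.

queues = {
    0: deque(["voice-1", "voice-2"]),          # highest priority
    1: deque(["video-1"]),                     # medium priority
    2: deque(["email-1", "web-1", "web-2"]),   # best effort
}

def dequeue_next() -> str | None:
    """Serve the highest-priority nonempty queue first."""
    for priority in sorted(queues):
        if queues[priority]:
            return queues[priority].popleft()
    return None

while (packet := dequeue_next()) is not None:
    print("transmit", packet)
# Output: both voice packets, then video, then the best-effort traffic --
# priority traffic sees low latency at the expense of everything behind it.
```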

Today, it's possible to attach an entire family vacation photo album to an e-mail message, and such a massive file requires a lot of bandwidth. But e-mail in its generic text-based form is a low-bandwidth application that is delay insensitive. If an e-mail message gets trapped somewhere in the Net for several seconds, its understandability will not be affected: by the time you view it, it will all have been placed on the server where your mailbox resides, waiting for you to pick it up. Another important issue with e-mail is error control. Networks today rarely perform error control because it slows down the traffic too much, so error control and recovery need to be handled at the endpoints: internetworking protocols deployed at the end nodes, such as TCP, detect errors and request retransmissions to fix them.
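The following is a minimal sketch of that endpoint error-control pattern, assuming an invented frame format and a simulated transmission error (real TCP uses a 16-bit checksum and sequence numbers; MD5 is used here only to keep the sketch short):

```python
import hashlib

# Minimal sketch of endpoint error control: the network just forwards
# frames; the receiving end detects corruption and asks the sender to
# retransmit. The frame format and the simulated bit error are invented.

def make_frame(payload: bytes) -> tuple[bytes, str]:
    """Sender attaches a checksum so the receiver can verify integrity."""
    return payload, hashlib.md5(payload).hexdigest()

def verify(payload: bytes, checksum: str) -> bool:
    """Receiver recomputes the checksum; a mismatch triggers a resend."""
    return hashlib.md5(payload).hexdigest() == checksum

payload, checksum = make_frame(b"Subject: vacation photos attached")
corrupted = b"Subject: vacati0n photos attached"  # simulated error in transit

if not verify(corrupted, checksum):
    print("checksum mismatch -- receiver requests retransmission")
if verify(payload, checksum):
    print("retransmitted copy verified -- delivered to the mailbox")
```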

Another prevalent data networking application is transaction processing. Examples of transaction processing include a store getting approval for a credit card purchase and a police officer checking a database for your driver's license number to see whether you have any outstanding tickets. Transaction processing is characterized by many short inputs and short outputs, which means it is generally a fairly low-bandwidth application, assuming that it involves text-based messages. Remember that if you add images or video, the bandwidth requirements grow substantially. Thus, if a police officer downloads a photo from your license, the bandwidth required rises. Transaction processing is very delay sensitive because with transactions, you generally have a person waiting for something to be completed (e.g., for a reservation to be made, for a sales transaction to be approved, for a seat to be assigned by an airline). Users want subsecond response time, so with transaction processing, minimizing delays is very important, and increased traffic contributes to delay. For example, say you're at an airport and your flight is canceled. Everyone queues up to get on another flight. The agents work as quickly as they can, but because of the increased level of traffic as more people try to get on the one available flight, everything backs up, and you have to wait a long time for a response. With transaction processing, you have to be aware of delay, and error control is the responsibility of the endpoints. Transaction processing is fairly tolerant of losses because the applications ensure that all the elements and records associated with a particular transaction have been properly sent and received before committing the transaction to the underlying database.
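The airport pileup has a standard quantitative form. As an illustrative model (not from the text), an M/M/1 queue's mean response time is W = 1/(mu - lambda); the sketch below, with an invented service rate, shows how response time explodes as offered load approaches capacity:

```python
# Illustrative M/M/1 queueing model showing why response time balloons
# as transaction traffic approaches capacity. The service rate and
# arrival rates are invented for illustration.

service_rate = 10.0  # transactions the server can complete per second

for arrival_rate in (2.0, 5.0, 8.0, 9.0, 9.9):
    utilization = arrival_rate / service_rate
    # Mean time in system for an M/M/1 queue: W = 1 / (mu - lambda)
    response_time = 1.0 / (service_rate - arrival_rate)
    print(f"load {utilization:4.0%}: mean response {response_time:5.2f} s")
# At 20% load the mean response is ~0.13 s; at 99% load it is 10 s --
# the airport-counter pileup expressed in queueing terms.
```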

Another type of application is file transfer, which involves moving a large amount of data from one computer to another. File transfer is generally a high-bandwidth application because it deals with bulk data, particularly if you are in a hurry to receive the entire file. File transfer is machine-to-machine communication, and the machines can work around delay factors, as long as they're not trying to perform a real-time function based on the information being delivered. File transfer is a passive activity (that is, it does not drive process control), and it can tolerate delay. File transfer can also tolerate losses because a reliable protocol such as TCP ensures that any errored or lost data is retransmitted, so no data is ultimately lost. With file transfer, error control can be performed at the endpoints.
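Some back-of-the-envelope arithmetic shows why bandwidth dominates the file transfer experience; the file size and link rates below are chosen purely for illustration:

```python
# Back-of-the-envelope transfer times for the same 700 MB file over
# different link rates (all values invented for illustration).

FILE_SIZE_BITS = 700 * 8 * 10**6  # 700 MB expressed in bits (decimal MB)

links_kbps = {"56 kbps dialup": 56, "1.5 Mbps T1": 1_500, "100 Mbps LAN": 100_000}

for name, kbps in links_kbps.items():
    seconds = FILE_SIZE_BITS / (kbps * 1_000)
    print(f"{name:>14}: {seconds / 60:8.1f} minutes")
# Dialup needs over a day; the 100 Mbps LAN finishes in under a minute.
```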

Two other important applications are interactive computing and information retrieval. With these applications, bandwidth depends on the objects being retrieved: if it's text, it's low bandwidth; if it's streaming or interactive video, it demands high bandwidth, and without it the experience may not be satisfactory. Interactive computing and information retrieval are delay sensitive when it comes to downloads, so higher speeds are preferred. Real-time voice is a low-bandwidth application but is extremely delay sensitive. Real-time audio and video require medium to very high bandwidth and are extremely delay sensitive (to both end-to-end delay and jitter), and the applications work much better if the packets arrive in their original sequence. Multimedia traffic and interactive services require very high bandwidth; they, too, are extremely sensitive to end-to-end delay and jitter, perform better if packets arrive in their original sequence, and are extremely loss sensitive.
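To illustrate why original sequence matters for real-time streams, here is a minimal sketch, with an invented packet trace, of the playout (jitter) buffer technique receivers commonly use to reorder late packets at the cost of added delay:

```python
import heapq

# Minimal sketch of a playout buffer for real-time audio/video: a small
# buffer reorders packets the network delivered out of sequence before
# handing them to the decoder. The packet trace is invented.

arrivals = [1, 2, 4, 3, 6, 5, 7]  # hypothetical arrival order (sequence numbers)

buffer: list[int] = []
next_expected = 1

for seq in arrivals:
    heapq.heappush(buffer, seq)
    # Release every packet that is now playable in order.
    while buffer and buffer[0] == next_expected:
        print(f"play packet {heapq.heappop(buffer)}")
        next_expected += 1
# Packets 3 and 5 arrived late; the buffer held their successors until the
# stream could resume in order -- at the price of added delay (jitter buffer).
```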

Anything that is text based (such as e-mail, transaction processing, file transfer, and even text-based database access) is fairly tolerant of losses. But with real-time traffic (such as voice, audio, or video), losses cause severe degradation in the application. For new generations of networks, the ITU (www.itu.int) suggests that packet loss should not exceed 1%; that's far from the case in today's networks. The public Internet, being a global infrastructure, exhibits a wide range of conditions: in some parts of the world, generally in developing countries, packet losses can surpass 40% during peak hours, while developed nations average approximately 5% during the day. To look at Internet traffic statistics, including measurements of response times (delays) and packet losses, you can visit www.internettrafficreport.com.
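To see why the gap between 1% and 5% loss matters so much, one rough model (not from the text) is the Mathis et al. approximation for the throughput ceiling of a single TCP flow; the MSS and RTT values below are assumptions chosen for illustration:

```python
import math

# Rough TCP throughput ceiling under loss, using the Mathis et al.
# approximation: rate <= (MSS / RTT) * (1.22 / sqrt(loss)).
# The MSS and RTT values are assumptions for illustration.

MSS_BITS = 1460 * 8   # typical maximum segment size, in bits
RTT_S = 0.1           # assumed 100 ms round-trip time

for loss in (0.01, 0.05, 0.40):
    rate_bps = (MSS_BITS / RTT_S) * (1.22 / math.sqrt(loss))
    print(f"{loss:4.0%} loss: ~{rate_bps / 1_000:6.0f} kbps per TCP flow")
# Going from 1% to 5% loss cuts a flow's ceiling by more than half,
# and 40% loss leaves very little usable throughput.
```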

