Data Flow

This section discusses some of the important issues that affect data flow in a network:

         The parts of the data circuit that comprise every network, including the data terminal equipment (DTE), the data communications (or circuit-terminating) equipment (DCE), the transmission channel, and the physical interface

         Modems and modulation

         Simplex, half-duplex, and full-duplex data transmission

         Coding schemes

         Asynchronous and synchronous transmission modes

         Error control

The DTE, the DCE, the Transmission Channel, and the Physical Interface

Every data network is a seven-part data circuit: the originating DTE, its physical interface, the originating DCE, the transmission channel, the receiving DCE, its physical interface, and the receiving DTE (see Figure 6.1). The transmission channel is the network service that the user subscribes to with a carrier (for example, a dialup connection with an ISP).

Figure 6.1. The DTE, DCE, transmission channel, and physical interface


The DTE transmits data between two points without error; its main responsibilities are to transmit and receive information and to perform error control. The DTE generally supports the end-user applications program, data files, and databases. The DTE includes any type of computer terminal, including PCs, as well as printers, hosts, front-end processors, multiplexers, and LAN interconnection devices such as routers.

The DCE, on the other hand, provides the interface between the DTE and the transmission channel (that is, the carrier's network). The DCE establishes, maintains, and terminates the connection between the DTE and the transmission channel. It is responsible for ensuring that the signal that comes out of the DTE is compatible with the requirements of the transmission channel. So, for instance, with an analog voice-grade line, the DCE would be responsible for translating the digital data coming from the PC into an analog form that could be transmitted over that voice-grade line. A variety of conversions (for example, digital-to-analog conversion, changes in voltage levels) might need to take place in a network, depending on the network service, and the DCE contains the signal coding that makes these conversions possible. For example, a DCE might have to determine what voltage level to assign to a one bit versus a zero bit. There are also rules about how many bits of one type you can send in a row; if too many are sent in sequence, the network can lose synchronization, and transmission errors might be introduced. The DCE applies such rules and performs the needed signal conversions. Examples of DCEs include channel service units (CSUs), data service units (DSUs), network termination units, PBX data terminal interfaces, and modems. DCEs all perform essentially the same generic function, but the names differ depending on the type of network service to which they're attached.
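
As an illustration of the kind of rule a DCE might enforce, the following sketch shows zero-bit insertion (bit stuffing), the approach used in HDLC-style framing to limit how many one bits can appear in a row. It is a generic example of the idea, not the behavior of any particular DCE; the function name and the run length of five are purely illustrative.

def stuff_bits(bits, max_run=5):
    # After every run of max_run consecutive 1s, insert a 0 so the receiver
    # never sees a longer run and can stay synchronized.
    out, run = [], 0
    for b in bits:
        out.append(b)
        if b == 1:
            run += 1
            if run == max_run:
                out.append(0)
                run = 0
        else:
            run = 0
    return out

# Seven 1s in a row go onto the line as 11111011: a 0 is inserted after the fifth 1.
print(stuff_bits([1, 1, 1, 1, 1, 1, 1]))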

Another part of a data network is the physical interface, which defines how many pins are in the connector, how many wires are in the cable, and what signal is being carried over which of the pins and over which of the wires, to ensure that the information is being viewed compatibly. In Figure 6.1, the lines that join the DTE and DCE together represent the physical interface. There are many different forms of physical interfaces; for example, an RS-232 is used frequently for asynchronous communications and a V.35 is often used with synchronous communications.

Modems and Modulation

No discussion of data communications is complete without a discussion of modulation. As mentioned in Chapter 2, "Telecommunications Technology Fundamentals," the term modem is a contraction of the terms modulate and demodulate, and these terms refer to the fact that a modem alters a carrier signal based on whether it is transmitting a one or a zero. Digital transmission requires the use of modulation schemes, which are sometimes also called line-coding techniques. Modulation schemes infuse the digital information onto the transmission medium (see Figure 6.2). Over time, many modulation schemes have been developed, and they vary in the speed at which they operate, the quality of wire they require, their immunity to noise, and their complexity. The variety of modulation schemes means that incompatibilities exist.

Figure 6.2. Modems


Components of Modulation Schemes

Modems can vary any of the three main characteristics of an analog wave form (amplitude, frequency, and phase) to encode information (see Figure 6.3):

Figure 6.3. Amplitude, frequency, and phase modulation


         Amplitude modulation: A modem that relies on amplitude modulation might associate ones with a high amplitude and zeros with a low amplitude. A compatible receiving modem can discriminate between the two bits and properly interpret them so that the receiving device can reproduce the message correctly.

         Frequency modulation: A frequency modulation-based modem alters the frequency value, so in Figure 6.3, zero represents a low frequency and one represents a high frequency; a complementary modem discriminates based on the frequency at which it receives the signal.

         Phase modulation: Phase modulation refers to the position of the wave form at a particular instant in time, so we could have a 90-degree phase, a 180-degree phase, or a 270-degree phase. A phase modulation-based modem uses the phases to differentiate between ones and zeros, so, for example, zeros might be transmitted beginning at a 90-degree phase and ones beginning at a 270-degree phase. (A short sketch following this list illustrates all three variations.)
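
To make the three variations concrete, the following minimal sketch (assuming NumPy is available; the bit pattern, carrier frequency, amplitudes, and phases are arbitrary choices for illustration) generates one bit period of an amplitude-, frequency-, and phase-modulated carrier for each bit in a short bit stream.

import numpy as np

bits = [1, 0, 1, 1, 0]
t = np.arange(100) / 100.0          # one bit period, 100 samples

def ask(bit):                       # amplitude modulation: high vs. low amplitude
    return (1.0 if bit else 0.3) * np.sin(2 * np.pi * 2 * t)

def fsk(bit):                       # frequency modulation: high vs. low frequency
    return np.sin(2 * np.pi * (4 if bit else 2) * t)

def psk(bit):                       # phase modulation: 270-degree vs. 90-degree starting phase
    return np.sin(2 * np.pi * 2 * t + (3 * np.pi / 2 if bit else np.pi / 2))

ask_wave = np.concatenate([ask(b) for b in bits])
fsk_wave = np.concatenate([fsk(b) for b in bits])
psk_wave = np.concatenate([psk(b) for b in bits])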

Thus, by using the three characteristics of a wave form, a modem can encode multiple bits within a single cycle of the wave form (see Figure 6.4). The more of these variables the modem can detect, the greater the bit rate it can produce.

Figure 6.4. Signal modulation


Modulation schemes also vary in their spectral efficiency, which is a measure of the number of digital bits that can be encoded in a single cycle of a wave form. The duration of a single cycle of a wave form is called the symbol time. To get more bits per Hertz, many modulation techniques provide more voltage levels: to encode k bits in the same symbol time, 2^k voltage levels are required. As the speed increases, it becomes more difficult for the receiver to discriminate among many voltage levels with consistent precision, so discrimination at very high data rates is a challenge. (Chapter 14, "Wireless Communications," talks more about spectrum reuse.)
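
A quick way to see why higher spectral efficiency gets harder is simply to count the levels involved; the short sketch below tabulates 2^k for a few values of k.

# Carrying k bits in one symbol time requires 2^k distinguishable signal levels.
for k in range(1, 9):
    print(f"{k} bits per symbol -> {2 ** k} levels the receiver must tell apart")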

Categories of Modulation Schemes

There are several categories of modulation schemes. The first is the single-carrier modulation scheme, in which a single channel occupies the entire bandwidth. The second is the multicarrier modulation scheme, which takes a certain amount of bandwidth and subdivides it into subbands. Each subband is encoded by using a single-carrier technique, and the bit streams from the subbands are bonded together at the receiver. This makes it possible to avoid placing any bits on portions of the frequency band that are subject to noise and might introduce distortion. Multicarrier techniques became possible with the development of digital signal processing (DSP). Table 6.2 lists some of the most commonly used modulation schemes, and the following sections describe them in more detail.

Table 6.2. Single-Carrier and Multicarrier Modulation Schemes

Scheme     Description

Single-carrier
  2B1Q     Used with ISDN, IDSL, and HDSL.
  QAM 64   Used with North American and European digital cable for forward (that is, downstream) channels.
  QAM 256  Used with North American digital cable for forward (that is, downstream) channels.
  QAM 16   Used with U.S. digital cable for reverse (that is, upstream) channels.
  QPSK     Used in U.S. digital cable for reverse (that is, upstream) channels, as well as in direct broadcast satellite.
  CAP      Used in some ADSL deployments.

Multicarrier
  DMT      Used within ADSL; a preferred technique because it provides good quality.
  OFDM     Used in European digital over-the-air broadcast.

Single-Carrier Modulation Schemes

The single-carrier scheme Quadrature Amplitude Modulation (QAM) modulates both the amplitude and the phase. Because it uses both amplitude and phase, QAM yields a higher spectral efficiency than does 2B1Q, which means it provides more bits per second. The number of amplitude levels and the number of phase angles are a function of line quality; cleaner lines translate into more spectral efficiency, or more bits per Hz. Various levels of QAM exist, and they are referred to as QAM nn, where nn indicates the number of signal states. The number of bits per symbol time is k, where 2^k = nn. So 4 bits/Hz is equivalent to QAM 16, 6 bits/Hz is equivalent to QAM 64, and 8 bits/Hz is equivalent to QAM 256. As you can see, QAM has vastly improved throughput as compared to earlier techniques such as 2B1Q, which provided only 2 bits/Hz.
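
The following sketch shows how a QAM 16 modulator could map 4 bits to one of 16 signal states, expressed as amplitude levels on the in-phase (I) and quadrature (Q) axes. It uses a plain binary mapping for clarity; real standards typically use Gray-coded constellations, and the level values here are illustrative.

import math

def qam16_symbol(b):
    # Map 4 bits to one of 16 states: 2 bits choose the I level, 2 bits the Q level.
    levels = [-3, -1, 1, 3]
    i = levels[b[0] * 2 + b[1]]
    q = levels[b[2] * 2 + b[3]]
    return complex(i, q)

bits_per_symbol = int(math.log2(16))      # 16 states -> 4 bits per symbol time
symbol = qam16_symbol([1, 0, 1, 1])       # e.g., the point (1, 3) on the I/Q plane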

Quadrature Phase Shift Keying (QPSK) is another single-carrier scheme. It is equivalent to QAM 4, with which you get 2 bits per symbol time. QPSK is designed to operate in harsh environments, such as over-the-air transmission and cable TV return paths. Because of its robustness and relatively low complexity, QPSK is widely used in cases such as direct broadcast satellite. Although QPSK does not provide as many bits per second as some other schemes, it ensures quality in implementations where interference could be a problem.

Carrierless Amplitude Modulation/Phase Modulation (CAP) is another single-carrier scheme. CAP combines amplitude and phase modulation, and it is one of the early techniques used for ADSL. However, we have found that portions of the band over which ADSL operates pick up noise from external devices such as ham radios and CB radios, so if these devices are operating while you're on a call over an ADSL line, you experience static in the voice call or corrupted bits in a data session. Consequently, CAP is no longer the preferred technique for ADSL because it provides a rather low quality of service. (ADSL is discussed in Chapter 3, "Transmission Media: Characteristics and Applications," and in Chapter 13, "Broadband Access Solutions.")

Multicarrier Modulation Schemes

Discrete Multitone (DMT) is a multicarrier scheme that allows variable spectral efficiency among the subbands it creates. It is therefore used in wireline media, where the noise characteristics of each wire might differ, as in the wires used to carry xDSL facilities. Because spectral efficiency can be optimized for each individual wire with DMT, DMT has become the preferred choice for use with ADSL.
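
A rough sketch of the idea behind DMT bit loading follows: each subband is assigned a number of bits based on its measured signal-to-noise ratio, so noisy subbands carry few or no bits. The 9.8 dB SNR gap and the per-subband SNR values are illustrative assumptions; real ADSL modems follow standardized bit-loading procedures.

import math

def bits_for_subband(snr_db, snr_gap_db=9.8, max_bits=15):
    # Approximate number of bits one subband can carry at a given SNR.
    snr = 10 ** (snr_db / 10)
    gap = 10 ** (snr_gap_db / 10)
    return max(0, min(max_bits, int(math.log2(1 + snr / gap))))

subband_snrs_db = [38, 30, 14, 4, 42]               # hypothetical measurements per subband
loading = [bits_for_subband(s) for s in subband_snrs_db]
print(loading)                                      # [9, 6, 1, 0, 10]: noisy subbands carry little or nothing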

Orthogonal Frequency Division Multiplexing (OFDM) is another multicarrier technique, and it uses a common modulation technique for each subband. OFDM is generally used in over-the-air broadcast, where all subbands are presumed to have uniform noise characteristics, and it is predominantly used in Europe, although some emerging techniques in the United States plan to make use of OFDM.

Simplex, Half-Duplex, and Full-Duplex Data Transmission

The direction of information flow takes three forms: simplex, half-duplex, and full-duplex (see Figure 6.5).

Figure 6.5. Simplex, half-duplex, and full-duplex data transmission


Simplex means that you can transmit information in one direction only. Of course, simplex does not hold great appeal for today's business communications, which involve two-way exchanges. Nonetheless, there are many applications for simplex circuits, such as a doorbell at your home. When someone presses the button, a signal goes to the chimes, and nothing returns over that pair of wires. Another example of a simplex application is an alarm circuit. If someone opens a door he's not authorized to open, a signal is sent out over the wires to the security desk, but nothing comes back over the wires.

Half-duplex means you can transmit information in two directions, but in only one direction at a time (for example, with a pair of walkie-talkies). Half-duplex is associated with two-wire circuits, which have one path to carry information and a second wire or path to complete the electrical loop. Because only one direction can be used at a time, there has to be a procedure for determining which device acts as the transmitter and which acts as the receiver, and there has to be a way to reverse those roles. Line turnarounds handle these reversals, but they add overhead to a session because the devices must undertake a dialog to determine who is the transmitter and who is the receiver. For communication that involves much back-and-forth exchange of data, half-duplex is an inefficient way of communicating.

Full-duplex, also referred to simply as duplex, involves a four-wire circuit, and it provides the capability to communicate in two directions simultaneously. There's an individual transmit and receive path for each end of the conversation. Therefore, no line turnarounds are required, which means full-duplex offers the most efficient form of data communication. All digital services are provisioned on a four-wire circuit and hence provide full-duplex capabilities.

Coding Schemes: ASCII, EBCDIC, Unicode, and Beyond

A coding scheme is a pattern of bits that are used to represent the characters in a character set, as well as carriage returns and other keyboard functions. Over time, different computer manufacturers and consortiums have introduced different coding schemes. The most commonly used coding schemes are ASCII, EBCDIC, and Unicode.

The American Standard Code for Information Interchange (ASCII) is probably the most familiar coding scheme. ASCII has seven information bits per character, plus one additional control bit, called a parity bit, that is used for error detection. In ASCII, seven ones or zeros are bundled together to represent each character. A total of 128 characters (that is, 2^7, for the seven information bits per character and the two possible values of each bit) can be represented in ASCII coding.

At about the same time that the whole world agreed on ASCII as a common coding scheme, IBM introduced its own proprietary scheme, called Extended Binary Coded Decimal Interchange Code (EBCDIC). EBCDIC uses eight bits of information per character and no control bits, so you can represent 256 possible characters (that is, 2^8). This sounds like a lot of characters, but it's not enough to handle all the characters needed in the languages throughout the world. Complex Asian languages, for instance, can include up to 60,000 characters.

In Table 6.3, you can see that the uppercase letter A in ASCII coding looks quite different than it does in EBCDIC. This could be a source of incompatibility. If your workstation is coded in ASCII and you're trying to communicate with a host that's looking for EBCDIC, you will end up with garbage on your screen because your machine will not be able to understand the alphabet that the host is using.

Table 6.3. ASCII Versus EBCDIC

Character or Symbol    ASCII      EBCDIC
A                      1000001    11000001
K                      1001011    11010010
M                      1001101    11010100
2                      0110010    11110010
Carriage return        0001101    00010101

In the mid-1980s, a coding scheme called Unicode was introduced. Unicode assigns 16 bits per character, which translates to more than 65,000 (that is, 2^16) possible characters. But can you imagine a terminal with 60,000 keys to press? Despite its breadth, Unicode has not become an overriding standard for those who use complex languages.

Most people now believe that the best way to handle coding is to use natural language interfaces, such as voice recognition. By 2008 or 2009, natural language interfaces are expected to be the most common form of data entry. But until we get there, you should know that there are different coding schemes because they can be a source of incompatibility in a network, and you therefore need to consider conversion between schemes. Conversion can be performed by a network element on the customer premises, or it can be a function that a network provider offers. In fact, the early packet-switched X.25 networks provided code conversion as a value-added feature.
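
As a quick illustration of such a conversion, Python's standard codecs can translate between ASCII and an EBCDIC code page (cp037 is one common EBCDIC variant; the choice of code page here is an assumption for illustration). Note that the resulting bit patterns for A, 2, and K match the values in Table 6.3.

text = "A2K"

ascii_bytes = text.encode("ascii")        # 7-bit ASCII values, stored one per byte
ebcdic_bytes = text.encode("cp037")       # cp037: one common EBCDIC code page

for ch, a, e in zip(text, ascii_bytes, ebcdic_bytes):
    print(ch, format(a, "07b"), format(e, "08b"))
# A 1000001 11000001
# 2 0110010 11110010
# K 1001011 11010010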

Transmission Modes: Asynchronous and Synchronous Transmission

Another concept to be familiar with is the distinction between transmission modes. To appreciate the distinction, let's look at the historical time line again. The first terminals introduced were dumb terminals: they had no processing capabilities and no memory. They also had no clocking references, so the only way they could determine where a character began or ended was by framing each character with start and stop bits. These systems used asynchronous transmission, in which one character is transmitted at a time, at a variable speed (that is, the speed depends on things such as how quickly you type or whether you stop to answer the phone). Asynchronous transmission uses a start bit and a stop bit with each character. In addition, asynchronous communication typically carries ASCII-encoded information, which means a third control bit, a parity bit, needs to be accounted for. These extra control bits add up to fairly significant overhead: asynchronous transmission is 30% inefficient because for every seven bits of information, there are three bits of control. Another disadvantage of asynchronous transmission is that it operates at comparatively low speeds; today, in general, it operates at about 115,000 bps.
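
A minimal sketch of what asynchronous framing adds to each character follows (assuming 7-bit ASCII data and even parity; the parity convention is an arbitrary choice, as discussed later in the chapter).

def frame_character(data_bits):
    # Asynchronous framing: start bit + 7 data bits + parity bit + stop bit.
    assert len(data_bits) == 7
    parity = sum(data_bits) % 2           # even parity: total count of 1s comes out even
    start, stop = 0, 1
    return [start] + list(data_bits) + [parity, stop]

frame = frame_character([1, 0, 0, 0, 0, 0, 1])   # the ASCII letter A
overhead = 3 / len(frame)                        # 3 control bits out of 10 = 30%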

Synchronous transmission emerged in the late 1960s, when IBM introduced its interactive processing line, which included smart terminals. These smart terminals could process information and apply algorithms; for example, a terminal could run an algorithm over a message block and in that way very succinctly detect and check for errors. Smart terminals were also smart in the sense that they had buffers, so they could accumulate the characters you were typing until they had a big block that they could send all at one time. Smart terminals also had clocking devices, whereby a clocking pulse could be sent from the transmitter to the receiver on one pair of wires. The receiver would lock in on that clocking pulse, knowing that with every clocking pulse it saw on one wire, a bit of information would be present on the other wire. Therefore, the receiver could use the clocking pulse to simply count off the bits to determine where each character began and ended, rather than having to frame each character with a start and a stop bit. Synchronous transmission, in classic data communications, implied that you were sending information a block at a time at a fixed speed.

Another benefit of synchronous transmission is very tight error control. As mentioned earlier, smart terminals have processors and can apply mathematical algorithms to a block. By calculating over the contents of that block, the transmitting terminal comes up with a 16- or 32-bit code that characterizes the block's contents. The terminal appends this code to the end of the block and sends it to the receiver. The receiver performs the same calculation on the block and comes up with its own 16- or 32-bit code. The receiver then compares its code with the one the terminal sent. If they match, the receiver sends an ACK, a positive acknowledgment that everything is okay, and the transmitter moves on to sending the next block. If the two codes don't match, the receiver sends a NACK, a negative acknowledgment, which says there was an error in transmission and the previous block needs to be resent before anything else can happen. If the error is not corrected within some number of attempts that the user specifies, the receiver disengages the session. This ensures that errors are not introduced. Yet another benefit of synchronous transmission is that it operates at higher speeds than asynchronous transmission; today you commonly see it performing at 2 Mbps.
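
The ACK/NACK behavior described above can be sketched as a simple stop-and-wait loop. The following toy simulation (the channel function, block content, and attempt limit are all illustrative assumptions) appends a 32-bit CRC to a block, has the receiving side verify it, and resends on a NACK until the attempt limit is reached.

import zlib

def transfer_block(payload: bytes, channel, max_attempts: int = 3) -> bool:
    # Transmitter appends a 32-bit CRC; receiver recomputes it and compares.
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")
    for _ in range(max_attempts):
        received = channel(frame)                         # may corrupt bits in transit
        data, crc = received[:-4], received[-4:]
        if zlib.crc32(data).to_bytes(4, "big") == crc:
            return True                                   # ACK: move on to the next block
        # NACK: fall through and resend the same block
    return False                                          # give up and disengage the session

ok = transfer_block(b"sample block", channel=lambda frame: frame)   # error-free channel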

These two types of transmission make sense in different applications. For machine-to-machine communications where you want to take advantage of high speeds and guarantee accuracy in the data flow (such as electronic funds transfer), synchronous communication is best. On the other hand, in a situation in which a human is accessing a database or reading today's horoscope, speed may not be of the essence and error control may not be critical, so the lower-cost asynchronous method is appropriate.

Keep in mind that things are never simple in telecom, and you rarely deal with simple alternatives; rather, you deal with layers and combinations of issues. Consider the following human example. You can think of an escalator as a synchronous network. The steps are presented at a consistent rate, and they all travel up the ramp at the same speed. Passengers alight on steps, and all passengers are carried through that network at the same speed; therefore, the network is synchronous. However, each passenger gets on the escalator at a different rate, which makes the access to the network asynchronous. For example, an eight-year-old child might run up to the escalator at high speed and jump straight onto the third step. Behind that child might be an injured athlete with a cane, who cautiously waits while several stairs pass, until he feels confident that he's going to step onto the center of a stair. So people get on the escalator at varying rates and in different places; there is no consistent timing that determines their presence, and therefore the access onto the escalator is asynchronous.

Now let's put this human analogy into telecommunications terms. The escalator scenario describes the modern broadband network. SDH/SONET is a synchronous network infrastructure. When bits get into an SDH/SONET frame, they all travel at OC-3 or OC-12 or one of the other line rates that SDH/SONET supports. But access onto that network might be asynchronous, through an ATM switch, where a movie might be coming in like a fire hose of information through one interface and next to it a dribble of text-based e-mail is slowly coming in. One stream of bits comes in quickly and one comes in slowly, but when they get packaged together into a frame for transport over the fiber, they're all transported at the same rate.

Error Control

Error control, which is a process of detecting and correcting errors, takes a number of forms, the two most common of which are parity checking and cyclical redundancy checking.

In ASCII-based terminals, which use asynchronous transmission, the error control is most often parity checking. Parity checking is a simple process of adding up the bit values to come up with a common value, either even or odd. It doesn't matter which one, but once you've selected either even or odd, every terminal must be set to that value. Let's say we're using odd parity. If you look at Character #1 in Figure 6.6 and add up its bits, you see that they equal 2, which is an even number. We need odd parity, so the terminal inserts a 1 bit to make the total 3, which is an odd number. For Character #2 the bits add up to 3, so the terminal inserts a 0 as a parity bit to maintain the odd value. The terminal follows this pattern with each of the six characters, and then it sends all the bits across the network to the receiver. The receiver adds up the bits the same way the terminal did, and if they equal an odd number, the receiver assumes that everything has arrived correctly. If they don't equal an odd number, the receiver knows that there is a problem, but it cannot correct the problem, and this is the trouble with parity checking. To determine that an error had occurred, you would have to look at the output report, so errors can easily go unnoticed. Thus, parity checking is not the best technique for ensuring the correctness of information.

Figure 6.6. Parity checking

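A minimal sketch of odd parity, as in the Figure 6.6 example, follows (assuming Character #1 is the 7-bit pattern 1000001, which contains two 1 bits).

def odd_parity_bit(data_bits):
    # Choose the parity bit so the total number of 1s (data + parity) is odd.
    return 0 if sum(data_bits) % 2 == 1 else 1

character_1 = [1, 0, 0, 0, 0, 0, 1]                     # two 1 bits (an even count)
framed = character_1 + [odd_parity_bit(character_1)]    # parity bit 1 makes the count 3

# Receiver-side check: an odd total means the character is assumed correct.
assert sum(framed) % 2 == 1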

Synchronous terminals and transmission use a type of error control called cyclical redundancy checking (see Figure 6.7). This is the method mentioned earlier in the chapter, whereby the entire message block is run through a mathematical algorithm. The resulting cyclical redundancy check (CRC) code is appended to the message, and the message is sent to the receiver. The receiver runs the same calculation over the message block and compares the two CRCs. If they match, the communication continues; if they don't match, the receiver either requests retransmissions until the problem is fixed or disengages the session if the problem cannot be fixed within some predetermined timeframe.

Figure 6.7. Cyclical redundancy checking

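The sketch below shows the CRC comparison itself, using Python's built-in CRC-32 (one example of a 32-bit CRC; the message content is arbitrary). Flipping a single bit in transit changes the recomputed CRC, so the mismatch is detected and a retransmission would be requested.

import zlib

message = b"synchronous block of data"
crc_sent = zlib.crc32(message)                  # appended to the block by the transmitter

# Receiver recomputes the CRC over what it actually received.
received_ok = message
received_bad = bytes([message[0] ^ 0x01]) + message[1:]   # one bit flipped in transit

print(zlib.crc32(received_ok) == crc_sent)      # True: ACK, continue
print(zlib.crc32(received_bad) == crc_sent)     # False: NACK, request retransmission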

 


