The OSI Reference Model


When networks first came into being, computers could typically communicate only with computers from the same manufacturer. For example, companies ran either a complete DECnet solution or an IBM solution—not both together. Unfortunately, companies that worked together or with government agencies often had equipment from different sources, so communication between their networks hit a dead end unless someone incurred the exorbitant cost of switching their equipment to their partner’s manufacturer. Ouch! In the early 1980s, the OSI (Open Systems Interconnection) reference model was created by the International Standards Organization (ISO) to break this barrier. This model was meant to help vendors create interoperable network devices. Since each vendor’s products have distinctive attributes and incorporate trade “secrets,” complete interoperability, like world peace, will probably never happen, but it’s still a great goal.

The OSI model is the primary architectural model for networks. It describes how data and network information are communicated from applications on one computer, through the network media, to an application on another computer. The OSI reference model breaks this approach into layers.

The Layered Approach

A reference model is a conceptual blueprint of how communications should take place. It addresses all the processes required for effective communication and divides these processes into logical groupings called layers. When a communication system is designed in this manner, it’s known as layered architecture.

Think of it like this: You and some friends want to start a company. One of the first things you would do is sit down and think through the tasks that must be done, who will do them, in what order, and how they relate to each other. Ultimately, you might group these tasks into departments. Let’s say you decide to have an order-taking department, an inventory department, and a shipping department. Each of your departments has its own unique tasks, keeping its staff busy and requiring them to focus on only their own duties.

In this scenario, departments are a metaphor for the layers in a communication system. For the system to run smoothly, the staff of each department will have to both trust and rely heavily on the others to do their jobs and competently handle their unique responsibilities. In your planning sessions, you would probably take notes, recording the entire process to facilitate later discussions about the standards of operation that will serve as your business blueprint, or reference model.

Once your business is launched, your department heads, armed with the part of the blueprint relating to their department, will need to develop practical methods to implement the tasks assigned to them. These practical methods, or protocols, will need to be classified into a Standard Operating Procedures manual and followed closely. Each of the various procedures in your manual will have been included for different reasons and will have varying degrees of importance and implementation. If you form a partnership or acquire another company, it will be imperative for its business protocols—its business blueprint—to match, or be compatible with, yours.

start sidebar
Real World Scenario—Business Flexibility

“Blessed are the adaptable, for they shall not be broken” is today’s business rule, meaning your business blueprint must be able to change. Suppose, after being in operation for a while, you find that many people developing your product or working on your production line have begun taking how-to-use-the-product calls, taking time away from the jobs you’re paying them to do. You wisely adapt your order-taking department into a customer-service center. Of course, this means training these employees to walk customers through the “how-to” of your product. You may even find later that splitting the new call center into “order takers” and “customer service reps” gives you greater speed in handling customer calls.

end sidebar

Similarly, software developers can use a reference model to understand computer communication processes and to see what types of functions need to be accomplished on any one layer. If they are developing a protocol for a certain layer, all they need to concern themselves with is their specific layer’s functions, not those of any other layer. Some other layer and protocol will handle the other functions. The technical term for this idea is binding. The communication processes that are related to each other are bound, or grouped together, at a particular layer.

Advantages of Reference Models

There are many advantages to using a reference model. Remember, because developers know that another layer will handle the functions they’re not currently working on, they can confidently focus on just one layer’s functions. This promotes specialization. Another benefit is that changes made to one layer don’t necessarily require changes to any other layer.

Suppose an executive in your company in the management layer sends a letter. This person doesn’t necessarily care if the company’s shipping department, a different layer, changes from UPS to Federal Express, or vice versa. All the executive is concerned with is the letter and its recipient. It is someone else’s job to see to its delivery. The technical phrase for this idea is loose coupling—though linked, they don’t meddle in someone else’s layer. You’ve probably heard phrases like, “It’s not my fault; it’s not my department!” or “So-and-So’s group always messes up stuff like this; we never do!” Loose coupling provides a stable protocol suite. Passing the buck doesn’t.

Another big advantage is compatibility. If software developers adhere to the specifications outlined in the reference model, all the protocols written to conform to that model will work together. This is very good. Compatibility creates the foundation for a large number of protocols to be written and used.

Let’s review why the industry uses a layered model:

  • It clarifies the general function of each layer rather than the specifics of how to accomplish it.

  • It takes the overall complexity of networking and divides it into more manageable pieces or layers.

  • It uses standard interfaces to enable ease of interoperability.

  • Developers can change the features of one layer without changing code in other layers.

  • It encourages compatibility.

  • It allows specialization, which helps industry progress.

  • It eases troubleshooting.

Physical and Logical Data Movement

The two additional concepts that need to be addressed in a reference model are the physical movement of data and the logical movement of data.

As illustrated in Figure 1.6, the physical movement of data begins in the top layer and proceeds down the model, layer by layer. It works like this: Someone creates some information on an application at the top layer. Protocols there pass it down to a communication protocol that packages it, then hands it down to a transmission protocol for the data’s actual physical transmission. The data then travels toward the destination across some type of physical channel, such as cable, fiber, radio frequencies, or microwaves.

Figure 1.6: Physical data flow through a model

When the data reaches the destination computer, it moves up the model. Each layer at the destination sees and deals with only the data that was packaged by its counterpart on the sending side. Referring back to the analogy about the executive and the letter, the shipping department at the destination sees only the shipping packaging and the information provided by the sending side’s shipping department. The destination’s shipping department does not see the actual letter because peeking into mail addressed to someone else is a federal offense—it’s against proper protocol. The destination company’s executive up on the top layer is the one who will actually open and further process the letter.

The logical movement of data is another concept addressed in a reference model. From this perspective, each layer is communicating with only its counterpart layer on the other side (see Figure 1.7). Communication in the realm of humans flows best when it happens between peers—between people on the same level, or layer, in life. The more we have in common, the more similarities in our personalities, experiences, and occupations, the easier it is for us to relate to one another, for us to connect. It’s the same with computers. This type of logical communication is called peer-to-peer communication. When more than one protocol is needed to successfully complete a communication process, the protocols are grouped into a team called a protocol stack. Layers in a system’s protocol stack communicate only with the corresponding layers in another system’s protocol stack.

Figure 1.7: Logical data flow between peer layers

The OSI Layers

The International Standards Organization (ISO) is the Emily Post of the network protocol world. Just like Ms. Post, who wrote the book setting the standards—or protocols—for human social interaction, the ISO developed the OSI reference model as the guide and precedent for an open network protocol set. Defining the etiquette of communication models, it remains today the most popular means of comparison for protocol suites. The OSI reference model has seven layers:

  • Application

  • Presentation

  • Session

  • Transport

  • Network

  • Data Link

  • Physical

Figure 1.8 shows the way these layers fit together.

Figure 1.8: The layers of the OSI reference model

The ISO model’s top three layers—Application, Presentation, and Session—deal with functions that aid applications in communicating with other applications. They specifically deal with tasks like filename formats, code sets, user interfaces, compression, encryption, and other functions relating to the exchange occurring between applications.

Figure 1.9 shows the functions defined at each layer of the OSI model. The following pages discuss the functions of each layer in detail.

Figure 1.9: OSI layer functions

The Application Layer

The Application layer of the OSI model supports the components that deal with the communicating aspects of an application. The Application layer is responsible for identifying and establishing the availability of the intended communication partner. It is also responsible for determining if sufficient resources for the intended communication exist.

Although some computer applications require only desktop resources, many unite communicating components from more than one network application; examples include file transfers, e-mail, remote access, network management activities, client/server processes, and information location. Many network applications provide services for communication over enterprise networks, but for present and future internetworking, the need to reach beyond those limits is developing fast. For the new millennium and beyond, transactions and information exchanges between organizations are broadening to require internetworking applications like the following:

World Wide Web (WWW) The Web connects countless servers (the number seems to grow with each passing day) presenting diverse formats. Most are multimedia and include some or all of the following: graphics, text, video, and even sound. Netscape Navigator, Internet Explorer, and other browsers like Mosaic simplify both accessing and viewing web sites.

E-mail gateways E-mail gateways are versatile and can use Simple Mail Transfer Protocol (SMTP) or the X.400 standard to deliver messages between different e-mail applications.

Electronic data interchange (EDI) EDI is a composite of specialized standards and processes that facilitates the flow of tasks such as accounting, shipping/receiving, and order and inventory tracking between businesses.

Special interest bulletin boards Special interest bulletin boards include the many chat rooms on the Internet where people can connect and communicate with each other either by posting messages or by typing a live conversation. They can also share public domain software.

Internet navigation utilities Applications like Gopher and WAIS, as well as search engines like Yahoo!, Google, Excite, and Alta Vista, help users locate the resources and information they need on the Internet.

Financial transaction services Certain services target the financial community. They gather and sell information pertaining to investments, market trading, commodities, currency exchange rates, and credit data to their subscribers.

The Presentation Layer

The Presentation layer gets its name from its purpose: it presents data to the Application layer. The Presentation layer is essentially a translator. A successful data transfer technique is to adapt the data into a standard format before transmission. Of course, just like any translation, there is a cost in time. Computers are configured to receive this generically formatted data and then convert the data back into its native format for actual reading (for example, EBCDIC to ASCII).

The OSI has protocol standards that define how standard data should be formatted. Tasks like data compression, decompression, encryption, and decryption are associated with the Presentation layer.

The Abstract Syntax Notation 1 (ASN.1) is the standard data syntax used by the Presentation layer. This kind of standardization is necessary when transmitting numerical data that is represented very differently by various computer systems’ architectures. A good example is the Simple Network Management Protocol (SNMP), which uses ASN.1 to depict the composition of objects in a network management database.
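
The details of ASN.1 are beyond this chapter, but the underlying idea, translating native representations into one agreed transfer syntax before transmission, can be sketched in a few lines. The following Python fragment is only an illustration of that idea, using simple big-endian packing rather than ASN.1 encoding rules:

import struct

# A Presentation-layer-style translation in miniature: before transmission, a
# value is encoded into an agreed, architecture-neutral form (here, a 4-byte
# big-endian integer) so any receiver can decode it regardless of its own
# native byte order. This is an illustration of a standard transfer syntax,
# not ASN.1 itself.

def encode_counter(value: int) -> bytes:
    """Pack an integer into network (big-endian) byte order."""
    return struct.pack("!I", value)

def decode_counter(wire: bytes) -> int:
    """Unpack the standardized bytes back into a native integer."""
    return struct.unpack("!I", wire)[0]

wire_format = encode_counter(305419896)   # 0x12345678
print(wire_format.hex())                  # '12345678' on every platform
print(decode_counter(wire_format))        # 305419896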

Some Presentation layer standards are involved in multimedia operations. The following standards direct graphic and visual image presentation:

PICT This standard is a picture format used by Macintosh or PowerPC programs for transferring QuickDraw graphics.

TIFF (Tagged Image File Format) This is a standard graphics format for high-resolution, bitmapped images.

JPEG The Joint Photographic Experts Group brings us this standard for compressing photographic images.

Other standards guide movies and sound:

MIDI (Musical Instrument Digital Interface) This standard is used for digitized music.

MPEG The Moving Picture Experts Group’s standard for the compression and coding of motion video for CDs, MPEG, is increasingly popular. It provides digital storage and bit rates up to 1.5Mbps.

QuickTime This standard is designed for use with Macintosh or PowerPC programs; it manages audio and video applications.

The Session Layer

The jobs of the Session layer can be likened to that of a mediator or referee. Its central concern is dialog control between devices, or nodes. Responsible for coordinating communication between systems, the Session layer organizes their communication by offering three different modes: simplex, half-duplex, and full-duplex. It also splits up a communication session into three different phases. These phases are connection establishment, data transfer, and connection release.

In simplex mode, communication is actually a monologue, with one device transmitting and another receiving. To get a picture of this, think of the telegraph machine’s form of communication: .....--..----...---..-...

When in half-duplex mode, nodes take turns transmitting and receiving—the computer equivalent of talking on a speakerphone. Some of us have experienced the speakerphone phenomenon of forbidden interruption. The speakerphone’s mechanism dictates that you may indeed speak your mind, but you have to wait until the person at the other end stops doing that first.

The only conversational proviso of full-duplex mode is flow control. This mitigates the problem of possible differences in the operating speeds of two nodes, where one may be transmitting faster than the other can receive. Other than that, communication between the two flows is unregulated, with both sides transmitting and receiving simultaneously.

Formal communication sessions occur in three phases. In the first, the connection-establishment phase, contact is secured and devices agree upon communication parameters and the protocols they’ll use. Next, in the data-transfer phase, these nodes engage in conversation, or dialog, and they exchange information. Finally, when they’re through communicating, nodes participate in a systematic release of their session.

A formal communication session is connection-oriented. In a situation where a large quantity of information is to be transmitted, the involved nodes agree upon rules for the creation of checkpoints along their transfer process. This somewhat resembles the many security checks for you and your luggage at the airport today. These rules are necessary in the case of an error occurring along the way. Among other things, they afford you the luxury of preserving your dignity in the face of your computers and co-workers. Let me explain. In the 44th minute of a 45-minute download, a loathsome error occurs again! This is the third try, and the file-to-be-had is needed more than sunshine. Without checkpoints in place, you would have to start all over again, potentially causing you to get more than just a little frustrated. To prevent this, checkpoints are secured—something called activity management—ensuring that the transmitting node has to retransmit only the data sent since the last checkpoint taken before the error occurred.

It is important to note that, in some networking situations, devices send out simple, one-frame status reports that aren’t sent in a formal session format. If they were, it would burden the network unnecessarily and result in lost economy. Instead, in these events, a connectionless approach is used, where the transmitting node simply sends off its data without establishing availability and without acknowledgment from its intended receiver. Think of connectionless communication like a message in a bottle: it’s short and sweet, it goes where the current takes it, and it arrives at an unsecured destination.
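
To make the contrast concrete, here is a minimal, hypothetical sketch of a connectionless send using a UDP socket (UDP, which you will meet again with TCP/IP, is the classic connectionless transport). The address and port are invented placeholders:

import socket

# Connectionless, "message in a bottle" style: no session is established and
# no acknowledgment is expected. The address and port below are placeholders.
STATUS_COLLECTOR = ("192.0.2.10", 9999)   # hypothetical destination

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # datagram = connectionless
sock.sendto(b"STATUS: link up, load 12%", STATUS_COLLECTOR)
sock.close()
# The sender never learns whether the report arrived; that trade-off is what
# keeps one-frame status messages cheap.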

The following are some examples of Session layer protocols and interfaces:

NFS (Network File System) NFS was developed by Sun Microsystems and is used with TCP/IP and UNIX workstations to allow transparent access to remote resources.

SQL (Structured Query Language) SQL was developed by IBM and provides users with a simpler way to define their information requirements on both local and remote systems.

RPC (Remote Procedure Call) RPC is a broad client/server redirection tool used for disparate service environments. Its procedures are created on clients and performed on servers.

X Window Widely used on UNIX and Linux systems, X Window is the graphical infrastructure that lets applications display their interfaces on local or remote terminals.

ASP (AppleTalk Session Protocol) ASP is another client/server mechanism that both establishes and maintains sessions amid AppleTalk client and server machines.

DNA SCP (Digital Network Architecture Session Control Protocol) DNA SCP is a DECnet Session layer protocol.

The Transport Layer

Services located in the Transport layer both segment and reassemble data from upper-layer applications and unite it onto the same data stream. They provide end-to-end data transport services and establish a logical connection between the sending host and destination host on an internetwork. The Transport layer is responsible for providing mechanisms for multiplexing upper-layer applications, establishing sessions, and tearing down virtual circuits. It also hides details of any network-dependent information from the higher layers by providing transparent data transfer.

Data integrity is ensured at this layer by maintaining flow control and by allowing users the option of requesting reliable data transport between systems. Flow control prevents the problem of a sending host on one side of the connection overflowing the buffers in the receiving host—an event that can result in lost data. Reliable data transport employs a connection-oriented communications session between systems, and the protocols involved ensure that the following will be achieved:

  • The segments delivered are acknowledged back to the sender upon their reception.

  • Any segments not acknowledged are retransmitted.

  • Segments are sequenced back into their proper order upon arrival at their destination.

  • A manageable data flow is maintained in order to avoid congestion, overloading, and the loss of any data.

An important reason for different layers to coexist within the OSI reference model is to allow for the sharing of a transport connection by more than one application. This sharing is available because the Transport layer’s functioning occurs segment by segment, and each segment is independent of the other segments. This allows different applications to send consecutive segments, processed on a first-come, first-served basis, that can be intended either for the same destination host or for multiple hosts.

Figure 1.10 shows how the Transport layer sends the data of several applications originating from a source host to communicate with parallel applications on one or many destination host(s). The specific port number for each software application is set by software within the source machine before transmission. When it transmits a message, the source computer includes extra bits that encode the type of message, the program with which it was created, and the protocols that were used. Each software application transmitting a data stream segment uses the same preordained port number. When they receive the data stream, the destination computers sort and reunite each application’s segments, providing the Transport layer with all it needs to pass the data to its upper-layer peer application.

Figure 1.10: Transport layer data segments sharing a traffic stream
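
As a rough sketch of that sorting step, the fragment below (with invented port numbers and handlers) shows how a receiving host could hand each arriving segment to the right application based on its destination port:

# A toy demultiplexer: the destination host sorts incoming segments back to
# the proper application using the preordained port number carried in each
# one. Port numbers and handler names are invented for illustration.

handlers = {
    80: lambda data: print("web application got:", data),
    25: lambda data: print("mail application got:", data),
}

incoming_segments = [
    (80, b"GET /index.html"),
    (25, b"MAIL FROM:<user@example.com>"),
    (80, b"GET /logo.png"),
]

for dest_port, payload in incoming_segments:
    handlers[dest_port](payload)   # each segment reaches its own upper-layer application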

In reliable transport operation, one user first establishes a connection-oriented session with its peer system. Figure 1.11 portrays a typical connection-oriented session taking place between sending and receiving systems. In it, both hosts’ application programs begin by notifying their individual operating systems that a connection is about to be initiated. The two operating systems communicate by sending messages over the network, confirming that the transfer is approved and that both sides are ready for it to take place. Once the required synchronization is complete, a connection is fully established and the data transfer begins.

Figure 1.11: Establishing a connection-oriented session

While the information is being transferred between hosts, the two machines periodically check in with each other, communicating through their protocol software to ensure that all is going well and that the data is being received properly. The following list summarizes the steps in a connection-oriented session pictured in Figure 1.11 (a short sketch of the exchange follows the list):

  1. The first “connection agreement” segment is a request for synchronization.

  2. The second and third segments acknowledge the request and establish connection parameters between hosts.

  3. The final segment is also an acknowledgment. It notifies the destination host that the connection agreement is accepted and that the actual connection has been established. Data transfer can now begin.
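
TCP’s familiar three-way handshake is a concrete instance of this connection agreement. The toy simulation below (where the “wire” is just a Python list) compresses the agreement into TCP’s three segments, so treat it as an approximation of the exchange described above rather than a literal transcript:

# A toy simulation of the connection agreement, using TCP's segment names.
network = []   # stands in for the wire

def send(segment):
    network.append(segment)
    print("->", segment)

send({"type": "SYN", "seq": 100})                  # 1. request for synchronization
send({"type": "SYN+ACK", "seq": 300, "ack": 101})  # 2. acknowledge and state parameters
send({"type": "ACK", "ack": 301})                  # 3. final acknowledgment

print("connection established; data transfer can begin")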

During a transfer, congestion can occur because a high-speed computer is generating data traffic faster than the network can transfer it or because many computers are simultaneously sending datagrams (packets) through a single gateway or destination. In the latter case, a gateway or destination can become congested even though no single source caused the problem. In either case, the problem is basically akin to a freeway bottleneck—too much traffic for too small a capacity. Usually, no one car is the problem—there are simply too many cars on that freeway. And it is always rush hour when it comes to information flow!

When a machine receives a flood of datagrams too quickly for it to process, it stores them in memory (buffers them). This buffering action solves the problem only if the datagrams are part of a small burst. However, if the datagram deluge continues, a device’s memory will eventually be exhausted, its flood capacity will be exceeded, and it will discard any additional datagrams that arrive. But, no worries. Thanks to the transport function, network flow-control systems work quite well. Instead of dumping resources and allowing data to be lost, the transport can issue a “not ready” indicator, as shown in Figure 1.12, to the sender, or source, of the flood. This mechanism works somewhat like a stoplight, signaling the sending device to stop transmitting segment traffic to its overwhelmed peer. When the peer receiver has processed the segments already in its memory reservoir, it sends out a “ready” transport indicator. When the machine waiting to transmit the rest of its datagrams receives this “ready” indicator, it can then resume its transmission.

Figure 1.12: Transmitting segments with flow control
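
A toy model of that stoplight behavior, with invented segment names and a three-segment buffer, might look like this:

from collections import deque

# A toy receiver that signals "not ready" when its buffer fills and "ready"
# again once it has drained the backlog. Purely illustrative.

class Receiver:
    def __init__(self, capacity=3):
        self.buffer = deque()
        self.capacity = capacity

    def accept(self, segment):
        if len(self.buffer) >= self.capacity:
            return "NOT READY"              # stoplight: sender must pause
        self.buffer.append(segment)
        return "OK"

    def drain(self):
        while self.buffer:
            print("processing", self.buffer.popleft())
        return "READY"                      # green light: sender may resume

rx = Receiver()
for seg in ["seg1", "seg2", "seg3", "seg4"]:
    if rx.accept(seg) == "NOT READY":
        print("sender pauses on", seg)
        print(rx.drain(), "received; sender resends", seg)
        rx.accept(seg)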

In fundamental, reliable, connection-oriented data transfer, datagrams are delivered to the receiving host in exactly the same sequence they’re transmitted; if any data segments are lost, duplicated, or damaged along the way, the transfer fails. The answer to the problem is to have the receiving host acknowledge receiving each and every data segment.

Data throughput would be low if the transmitting machine had to wait for an acknowledgment after sending each segment. But because there’s time available after the sender transmits the data segment and before it finishes processing acknowledgments from the receiving machine, the sender uses the break to transmit more data. How many data segments the transmitting machine is allowed to send without receiving an acknowledgment for them is called a window.

Windowing controls how much information is transferred from one end to the other. While some protocols quantify information by observing the number of packets, TCP/IP measures it by counting the number of bytes. Figure 1.13 illustrates a window size of 1 and a window size of 3. When a window size of 1 is configured, the sending machine waits for an acknowledgment for each data segment it transmits before transmitting another. Configured to a window size of 3, it’s allowed to transmit three data segments before an acknowledgment is received. In this simplified example, both the sending and receiving machines are workstations. Reality is rarely that simple, and most often acknowledgments and packets will commingle as they travel over the network and pass through routers. Routing complicates things, but not to worry; we’ll be covering routing later in this book.

Figure 1.13: TCP/IP window sizes
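
The effect of the window size on the number of acknowledgment round trips can be sketched with simple counting (hypothetical numbers, in the spirit of Figure 1.13):

# Counting acknowledgment exchanges for different window sizes.

def ack_exchanges(total_segments, window_size):
    exchanges, sent = 0, 0
    while sent < total_segments:
        sent += min(window_size, total_segments - sent)  # send up to a window's worth
        exchanges += 1                                    # then wait for one acknowledgment
    return exchanges

print(ack_exchanges(6, window_size=1))   # 6: an ack after every segment
print(ack_exchanges(6, window_size=3))   # 2: an ack after every three segments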

Reliable data delivery ensures the integrity of a stream of data sent from one machine to the other through a fully functional data link. It guarantees the data won’t be duplicated or lost. The method that achieves this is known as positive acknowledgment with retransmission. This technique requires a receiving machine to communicate with the transmitting source by sending an acknowledgment message back to the sender when it receives data. The sender documents each segment it sends and waits for this acknowledgment before sending the next segment. When it sends a segment, the transmitting machine starts a timer and retransmits if it expires before an acknowledgment for the segment is returned from the receiving end.

In Figure 1.14, the sending machine transmits segments 1, 2, and 3. The receiving node acknowledges it has received them by requesting segment 4. When it receives the acknowledgment, the sender then transmits segments 4, 5, and 6. If segment 5 doesn’t make it to the destination, the receiving node acknowledges that event with a request for the segment to be resent. The sending machine then resends the lost segment and waits for an acknowledgment, which it must receive in order to move on to the transmission of segment 7.

Figure 1.14: Transport layer reliable delivery
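
A minimal sketch of positive acknowledgment with retransmission, with a "wire" that loses segment 5 on its first attempt (echoing Figure 1.14), could look like this:

# The sender keeps a copy of each segment and resends it whenever its timer
# expires without an acknowledgment. The lossy wire drops segment 5 once.

dropped_once = set()

def lossy_wire(segment):
    """Return an acknowledgment, or None if the segment is lost in transit."""
    if segment == 5 and 5 not in dropped_once:
        dropped_once.add(5)
        return None
    return segment

for segment in range(1, 8):           # segments 1 through 7
    while True:
        ack = lossy_wire(segment)
        if ack is not None:
            print(f"segment {segment} acknowledged")
            break
        print(f"timer expired for segment {segment}; retransmitting")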

The Network Layer

In the Roman Empire, all roads led to Rome. Today, our country is a maze of interstate highways, state highways, freeways, local roadways, and the like. So, you need a map to get where you want to go and find the best route. The same holds true with the complicated cloud of networks. And the proper path through them is determined by protocols residing in Layer 3: the Network layer. Path determination makes it possible for a router to appraise all available paths to a given destination and decide on the best one. Routers use network topology information when orienting themselves to the network and evaluating the different possible paths through it. These network “topo maps” can be configured by the network’s administrator or obtained through dynamic processes running on the network.

The Network layer’s interface is connected to networks, and it is employed by the Transport layer to provide the best end-to-end packet delivery services. The job of sending packets from the source network to the destination network is the Network layer’s primary function. After the router decides on the best path from point A to point B, it proceeds with switching the packet onto it. This is known as packet switching. Essentially this is forwarding the packet received by the router on one network interface, or port, to the port that connects to the best path through the network cloud. That port will then send the packet to that particular packet’s destination.

An internetwork must continually designate all paths of its media connections. In Figure 1.15, each line connecting routers is numbered, and those numbers are used by routers as network addresses. These addresses possess and convey important information about the path of media connections. They’re used by routing protocols to pass packets from a source onward to its destination. The Network layer creates a composite “network map”—a communication strategy system—by combining information about the sets of links into an internetwork with path-determination, path-switching, and route-processing functions. It can also use these addresses to provide relay capability and to interconnect independent networks. Consistent across the entire internetwork, Layer 3 addresses also streamline the network’s performance by not forwarding unnecessary broadcasts that would eat up precious bandwidth. Unnecessary broadcasts increase the network’s overhead and waste capacity on any links and machines that don’t need to receive them. Using consistent end-to-end addressing that accurately describes the path of media connections enables the Network layer to determine the best path to a destination without encumbering the device or links on the internetwork with unnecessary broadcasts.

Figure 1.15: Communicating through an internetwork

When an application on a host wants to send a packet to a destination device located on a different network, a data link frame is received on one of the router’s network interfaces. The router de-encapsulates and then examines the frame to establish what kind of Network layer data is in tow. After this is determined, the data is sent on to the appropriate Network layer process; but the frame’s mission is fulfilled and it is simply discarded.

Figure 1.16 illustrates the Network layer process that examines the packet’s header to discover which network it is destined for. It then refers to the routing table to find the connections that the current network has to foreign network interfaces. After one is selected, the packet is re-encapsulated in its data link frame with the selected interface’s information and queued for delivery off to the next hop in the path toward its destination. This process is repeated every time the packet switches to another router. When it finally reaches the router connected to the network on which the destination host is located, the packet is encapsulated in the destination LAN’s data link frame type. It’s now properly packaged and ready for delivery to the protocol stack on the destination host.

Figure 1.16: The Network layer process

The following steps describe the Network layer process as shown in Figure 1.16 (a simplified lookup sketch follows the list):

  1. The sending PC sends a datagram to a PC located on Network 9.

  2. RouterA receives the datagram and checks the destination network. RouterA forwards the packet based on its knowledge of where the network is located.

  3. RouterB receives the packet and also checks the destination network. RouterB forwards this to RouterE after checking to find the best route to Network 9.

  4. RouterE receives the packet, puts it in a frame with the hardware destination of the receiving PC, and sends out the frame.
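
The router’s forwarding decision in steps 2 through 4 can be sketched as a simple table lookup; the networks, interfaces, and next hops below are invented to echo the figure:

# A toy routing decision: look up the destination network, then re-encapsulate
# the packet in a new data link frame addressed to the chosen next hop.

routing_table = {
    "Network 9": {"interface": "Serial0", "next_hop": "RouterE"},
    "Network 4": {"interface": "Ethernet0", "next_hop": "directly connected"},
}

def forward(packet):
    route = routing_table.get(packet["dest_network"])
    if route is None:
        print("no route found: packet dropped, destination unreachable")
        return None
    frame = {"dl_destination": route["next_hop"], "payload": packet}
    print(f"forwarding out {route['interface']} toward {route['next_hop']}")
    return frame

forward({"dest_network": "Network 9", "data": b"hello"})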

The Data Link Layer

The Data Link layer ensures that messages are delivered to the proper device and translates messages from up above into bits for the Physical layer to transmit. It formats the message into data frames and adds a customized header containing the hardware destination and source address. This added information forms a sort of capsule that surrounds the original message in much the same way that engines, navigational devices, and other tools were attached to the lunar modules of the Apollo project. These various pieces of equipment were useful only during certain stages of space flight and were stripped off the module and discarded when their designated stage was complete. Data traveling through networks is much the same. A data frame that’s all packaged up and ready to go follows the format illustrated in Figure 1.17.

Figure 1.17: Ethernet II and 802.3 Ethernet frames

The various elements of a data frame are as follows:

  • The preamble or start indicator is made up of a special bit pattern that alerts devices to the beginning of a data frame.

  • The destination address (DA) is there for obvious reasons. The Data Link layer of every device on the network examines this to see if it matches its own address.

  • The source address (SA) is the address of the sending device. It exists to facilitate replies to the message.

  • In Ethernet II frames, the two-byte field following the source address is a Type field. This field specifies the upper-layer protocol that will receive the data after data link processing is complete.

  • In 802.3 frames, the two-byte field following the source address is a Length field. This indicates the number of bytes of data between this field and the frame check sequence (FCS) field. Following the length field could be an 802.2 header for Logical Link Control (LLC) information. This information is needed to specify the upper-layer process, because 802.3 doesn’t have a type field. Frame types will be discussed in detail in Chapter 3, “Network Protocols.”

  • The data is the actual message, plus all the information sent down to the sending device’s Data Link layer from the layers above it.

  • Finally, there’s the FCS field. Its purpose corresponds to its name, and it houses the cyclic redundancy checksum (CRC). The FCS allows the receiver to determine if a received frame was damaged or corrupted while in transit. CRCs work like this: The device sending the data determines a value summary for the CRC and stores it in the FCS field. The device on the receiving end performs the same procedure, then checks to see if its value matches the value computed by the sending node; hence the term checksum. A small checksum sketch follows this list.
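
Here is that compute, append, recompute, and compare flow in miniature, using Python’s built-in CRC-32. Real Ethernet hardware computes its CRC over the exact frame fields with its own bit ordering, so treat this only as an illustration of the checksum idea:

import zlib

def build_frame(destination: bytes, source: bytes, payload: bytes) -> bytes:
    """Append a 4-byte CRC-32 'FCS' to a simplified frame body."""
    body = destination + source + payload
    fcs = zlib.crc32(body).to_bytes(4, "big")
    return body + fcs

def frame_is_intact(frame: bytes) -> bool:
    """Recompute the CRC at the receiver and compare it with the stored FCS."""
    body, fcs = frame[:-4], frame[-4:]
    return zlib.crc32(body).to_bytes(4, "big") == fcs

frame = build_frame(b"\xaa" * 6, b"\xbb" * 6, b"hello, data link layer")
print(frame_is_intact(frame))                    # True: frame arrived undamaged
corrupted = frame[:10] + b"\x00" + frame[11:]    # a byte flipped in transit
print(frame_is_intact(corrupted))                # False: receiver discards it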

Logical Link Control Sublayer

The LLC sublayer of the Data Link layer provides flexibility to the protocols running in the upper and lower layers. Notice that this is a sublayer of one of the seven layers we are discussing. As shown in Figure 1.18, the LLC runs between the Network layer and the MAC sublayer of the Data Link layer. This allows protocols at the Network layer, for example, the Internet Protocol (IP), to operate without the burden of having to be concerned with what’s happening at the Physical layer. Why? Because the Network layer’s protocol knows that the LLC sublayer is responsible for making sure the MAC sublayer and the Physical layer are doing their job. The LLC acts as a managing buffer between the “executive” upper layers and the “shipping department” lower layers. In turn, the lower-layer protocols don’t need to be concerned about what’s happening above.

The LLC sublayer uses source service access points (SSAPs) and destination service access points (DSAPs) to help the lower layers communicate to the Network layer protocols.

This is important because the MAC sublayer must understand what to do with the data after the frame header is stripped off. It has to know who to hand the data to; this is where the DSAPs and SSAPs come in. Imagine someone coming to your door and asking if it’s the correct address (hardware address). You respond, “Yes, what do you want?” The person (or data in the frame) responds, “I don’t know.” The service access points solve this problem by pointing to the upper-layer protocol, such as IP or IPX. In Figure 1.18, you can see that the 802.3 frame is not capable of handling DSAPs and SSAPs, so the 802.2 frame has to step in. The 802.2 frame is really an 802.3 frame with DSAP, SSAP, and Control fields added.

Figure 1.18: The LLC sublayer of the Data Link layer

The LLC sublayer is also responsible for timing, flow control, and, with some protocol stacks, even connectionless and connection-oriented protocols.

MAC Sublayer

The MAC sublayer of the Data Link layer is responsible for framing. Again notice that this is a sublayer of one of the seven layers we are discussing. It builds frames from the 1s and 0s that the Physical layer picks up from the wire as a digital signal. It first checks the CRC to make sure nothing was damaged in transit; then it determines if the hardware address matches or not. If it does, the LLC then sends the data on to an upper-layer protocol. This layer will also accept a frame if the destination address is a broadcast or multicast.
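
The receive-side decision described above (check the frame, match the address or accept a broadcast/multicast, then hand the data up) can be sketched like this, with invented addresses:

# Receive-side filtering in miniature: accept a frame only if it passed its
# CRC check and its destination is our address, the broadcast address, or a
# multicast; otherwise discard it. Addresses are invented for illustration.

MY_MAC = "00:0c:29:ab:cd:ef"
BROADCAST = "ff:ff:ff:ff:ff:ff"

def mac_receive(frame):
    if not frame.get("fcs_ok", True):
        return "discard: frame damaged in transit"
    dest = frame["destination"]
    if dest in (MY_MAC, BROADCAST) or frame.get("multicast", False):
        return f"hand {frame['payload']!r} up via the LLC sublayer"
    return "discard: not addressed to this station"

print(mac_receive({"destination": MY_MAC, "payload": b"for us"}))
print(mac_receive({"destination": "00:11:22:33:44:55", "payload": b"not ours"}))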

The MAC sublayer is also responsible for media access. Through it, the workstation is allowed to communicate over the network. This is partly a hardware operation, but it is also partially a software procedure because it’s defined by both the network interface card (NIC) and the network card driver. We’ll cover this more thoroughly soon; for now, here is a description of the three types of media access:

Contention A good example of contention is found in an Ethernet network where all devices communicate whenever they have something to say. It’s pretty easy to imagine that in this scenario a data collision could easily occur if two devices were to “talk” at the same time. Because of this, in a contention network, the transmitting workstation must have control of the entire wire or network segment. Contention networks are great for small, bursty applications.

Token passing Used in Token Ring, FDDI, and ArcNET networks, stations cannot transmit until they receive a special frame called a token. This arrangement also works to prevent the collision problem. Token-passing networks work well if large, bandwidth-consuming applications are commonly used on the network.

Polling Polling is generally used in large mainframe environments where hosts are polled to see if they need to transmit. Hosts (secondaries) aren’t permitted to transmit until given permission from the primary host.

WAN Protocols at the Data Link Layer

WAN Data Link protocols describe how frames are carried between systems on a single data link. They include protocols designed to operate over dedicated point-to-point facilities, multi-point facilities that are based on dedicated facilities, and multi-access switched services like Frame Relay.

The typical encapsulation standards for synchronous serial lines at the Data Link layer are as follows:

HDLC (High-Level Data Link Control) HDLC is an ISO standard that supports both point-to-point and multipoint configurations. It’s too bad that most vendors implement HDLC in different ways, often making HDLC incompatible between vendors. HDLC is the Cisco default protocol for all serial links, and it won’t communicate over a serial link with any other vendor’s HDLC protocol.

SDLC (Synchronous Data Link Control) SDLC is a protocol created by IBM to make it easier for their mainframes to connect to remote offices. Created for use in WANs, SDLC became extremely popular in the 1980s because many companies were installing 327x controllers in their remote offices for communication with the mainframe in the corporate office. SDLC defines and uses a polling media-access method. This means the primary, or front end, asks (polls) the secondaries, or 327x controllers, to find out if they need to communicate with it. Secondaries can’t speak unless spoken to, nor can they speak to each other.

LAPB (Link Access Procedure, Balanced) Created for use with X.25, LAPB defines frames and is capable of detecting out-of-sequence or missing frames. It also retransmits, exchanges, and acknowledges frames.

X.25 X.25 was the first packet-switching network technology. It defines the point-to-point communication between a DTE (data terminal equipment) and a DCE (data communications equipment) and supports both switched virtual circuits (SVCs) and permanent virtual circuits (PVCs). Cisco routers (DTEs) connect to modems or DSU/CSUs (DCEs).

SLIP (Serial Line IP) SLIP is an industry standard that was developed in 1984 to support TCP/IP networking over low-speed serial interfaces in Berkeley UNIX. With the Windows NT RAS service, Windows NT computers can use TCP/IP and SLIP to communicate with remote hosts.

PPP (Point-to-Point Protocol) Think of PPP as SLIP’s big brother. It takes the specifications of SLIP and builds on them by adding login, password, and error-correction capabilities. PPP is a Data Link protocol that can be used by many network protocols like IP, IPX, and AppleTalk. See RFC 1661, published by the Internet Engineering Task Force (IETF), for more information.

ISDN (Integrated Services Digital Network) ISDN operates through analog phone lines that have been converted to use digital signaling. With ISDN you can transmit both voice and data.

Frame Relay Frame Relay is an upgraded version of X.25, used where LAPB is no longer needed. It’s the fastest of the WAN protocols listed because of its simplified framing approach, which has no error correction. Frame Relay uses SVCs, PVCs, and data link connection identifiers (DLCIs) for addressing; plus, it requires access to the high-quality digital facilities of the phone company, so it’s not available everywhere.

The Physical Layer

The Physical layer has two responsibilities: it sends bits and receives bits. Bits come only in values of 1 or 0—a Morse code with numerical value. The Physical layer communicates directly with the various types of actual communication media. Different kinds of media represent these bit values in different ways. Some use audio tones, while others employ state transitions— changes in voltage from high to low and low to high. Specific protocols are needed for each type of media to describe the proper bit patterns to be used, how data is encoded into media signals, and the various qualities of the physical media’s attachment interface.

At the Physical layer, the interface between the DTE and the DCE is identified. The DCE is usually located at the service provider, while the DTE is the attached device. The services available to the DTE are most often accessed via a modem or CSU/DSU.

The following Physical layer standards define this interface:

  • EIA/TIA-232

  • EIA/TIA-449

  • V.24

  • V.35

  • X.21

  • G.703

  • EIA-530

  • High-Speed Serial Interface (HSSI)

start sidebar
Real World Scenario—A Real-World Use of the OSI Model

There are many reasons why you should understand the OSI reference model layers, processes, and functions. A practical reason is troubleshooting. Most network problems occur at the Physical layer. Broken connectors, bad cables, malfunctioning NICs, and bad ports are a few of the most common problems. The fact is, 90 percent of your network problems are usually related to the Physical layer.

After you have corrected the Physical layer problems, it’s time to consider the Data Link layer and the processes and functions that occur there. Bridges and switches are devices that have common everyday problems such as malformed MAC address tables or bad ports.

At the Network layer, your routers can cause many different network problems, including misrouted traffic, dropped packets, and incorrect routing updates. Fixing these problems requires an understanding of the routing process, including routed traffic and routing update traffic.

Routed traffic such as IP packets includes a source and destination IP address in the network header, which are used to determine the routing path through network routers. When a router receives a packet, it scans the routing table to find the best match for the destination network based on the destination IP address. If a route is not found, the packet is dropped and a “destination unreachable” message is returned to the source IP address.

Routing traffic such as RIP or IGRP can also be a source of problems due to misconfiguration on the router or a neighboring router. On Cisco routers, your best bet is to use the show and debug commands to verify proper configuration and to watch the routing protocol updates for the correct update packets.

end sidebar

Now you understand why I called this a primer for internetworks. No doubt all the letters of the alphabet have become one tangled mess in your brain. If you tend toward dyslexia, I am truly sorry. Right now is a great time to take a break. You may also need to go back and review what we’ve discussed before you move ahead.

Data Encapsulation

Now that we have the layers and terminology defined, we’ll take a look at how data logically moves through the layers. Data encapsulation is the process in which the information in a protocol is wrapped, or contained, in the data section of another protocol. In the OSI reference model, each layer encapsulates the layer immediately above it as the data flows down the protocol stack.

The logical communication that happens at each layer of the OSI reference model doesn’t involve many physical connections because the information each protocol needs to send is encapsulated in the layer of protocol information beneath it. This encapsulation produces a set of data called a packet (see Figure 1.19).

Figure 1.19: Data encapsulation at each layer of the OSI reference model

Looking at Figure 1.19, you can follow the data down through the model as it is encapsulated at each layer of the OSI reference model. Starting at the Application layer, data is encapsulated in Presentation layer information. When the Presentation layer receives this information, it looks like generic data being presented. The Presentation layer hands the data to the Session layer, which is responsible for synchronizing the session with the destination host. The Session layer then passes this data to the Transport layer, which is responsible for transporting the data from the source host to the destination host. Before the data can actually cross the network, though, the Network layer adds routing information to the packet and passes it on to the Data Link layer for framing and for connection to the Physical layer. The Physical layer sends the data as 1s and 0s to the destination host across fiber or copper wiring. Finally, when the destination host receives the 1s and 0s, the data passes back up through the model, one layer at a time. Data de-encapsulation takes place at each of the OSI model’s peer layers.

At a transmitting device, the data encapsulation method is as follows (a brief sketch follows the list):

  1. User information is converted to data.

  2. Data is converted to segments.

  3. Segments are converted to packets or datagrams.

  4. Packets or datagrams are converted to frames.

  5. Frames are converted to bits.
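
The five steps can be walked through in a few lines of Python; the header contents are simplified placeholders, not real protocol headers:

data = b"Hello, destination host!"                     # 1. user information -> data

segment = b"TP|sport=1087|dport=80|" + data            # 2. data -> segment
packet  = b"NET|src=10.1.1.1|dst=10.9.9.9|" + segment  # 3. segment -> packet/datagram
frame   = b"DL|dst_mac|src_mac|" + packet + b"|FCS"    # 4. packet -> frame
bits    = "".join(f"{byte:08b}" for byte in frame)     # 5. frame -> bits

print(bits[:64], "...")   # the bit stream the Physical layer puts on the wire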



