This section discusses some of the key software components used as building blocks for constructing internetwork designs. The most common protocols running over networks today include Novell NetWare, AppleTalk, DECnet Phase V, IBM SNA/LU6.2/NetBIOS, and OSI. However, over the past decade (especially in the backbone environment), network implementers have largely united around a common protocol stack, based on TCP/IP. This book, therefore, focuses mainly on the application of IP-based protocols and services. Specific implementation details of the IP protocol suite are discussed in many classic texts, so we will only briefly review them here. The interested reader is strongly urged to read the related TCP/IP references provided at the end of this chapter, specifically [3–6].
Today, IP has become universally accepted as the protocol of choice for internetworking. Back in the early 1980s the protocol debate was not so clear-cut. Xerox Network Systems (XNS) was seen by many as a superior solution, IBM had a whole raft of protocols built around SNA, and the ISO OSI protocol stack appeared as the strategic choice for a number of large organizations and equipment vendors (including Digital Equipment Corporation [DEC]). In the LAN environment proprietary protocols such as Novell's IPX NetWare suite offered better functionality and higher performance than TCP/IP, and AppleTalk was used to support Apple's user-friendly desktop machines. From humble beginnings, IP's widespread adoption is widely attributed to a number of factors, including the following:
Initial funding from the U.S. Department of Defense (DoD)
Rapid free development of services and support by many academic institutions
The development and public domain (PD) distribution of Berkeley UNIX (BSD), which included TCP/IP
Free use of the Advanced Research Projects Agency Network (ARPANET), as an experimental WAN created by the U.S. DoD in 1969. The ARPANET adopted IP as its standard and has now evolved into the Internet
IP continues to be successful because it is essentially simple to implement and understand, and no single vendor controls its specifications. Stable specifications and implementations have been available for many years, and practically every serious business application has been ported to run over the IP protocol suite. IP is still considered by many to be somewhat crude and is not without its limitations, including the well-known address space limitations (described in RFC 1296 and RFC 1347), lack of security, and its limited service support for upper-layer protocols and applications. However, it is here and it works. The approach so far has been to fix problems and add functionality as and when required. This has led in recent years to initiatives such as IP version 6 (which takes care of the addressing problem) and IPSec (which takes care of the security problem).
The IP stack comprises a suite of protocols used to connect more computers in the world today than any other. The IP protocol suite actually predates the ISO OSI model and so does not map neatly onto the OSI seven layers, at least not above Layer 4. The OSI model was driven by a large standards body with very ambitious aims and consequently took considerable time and effort to produce; in many areas it was too isolated from the practicalities of real-world implementation. In contrast, IP was originally driven by the needs of the U.S. government and subsequently by a large community dominated by vendors and users. Working implementations appeared early on, giving developers useful feedback on what was possible and what was practical. Most of the advances in IP have been made by individuals and small dynamic working groups, through the publication of Requests For Comments (RFCs). The process of creating and adopting an RFC has proved far quicker than the equivalent procedure in the ISO. IP development is, therefore, considerably streamlined and based on the ability to provide real implementations and demonstrable interoperability. It is not burdened by the academic and perhaps more stringent requirements suffered by OSI.
In several areas OSI protocols are significantly richer in functionality, more efficient, and generally better thought out than their TCP/IP counterparts. Unfortunately, they are also much more complex, difficult to implement, and demand more resources (and were introduced at a time when networking resources were particularly scarce). OSI never achieved a significant installed base and failed to attract either vendors or customers in the way that IP has. It is instructive to use the OSI seven-layer model as a frame of reference when discussing IP. Figure 1.5 shows how IP and its services map onto the layered architecture of OSI. Essentially, IP protocols and services start above Layer 2 and typically sit on top of a media service (such as FDDI, Ethernet, or a wide area stack such as Frame Relay).
Figure 1.5: TCP/IP model in context.
The following list summarizes five of the layers:
Application Layer—The Application Layer consists of services to assist applications that peer over the network. As described previously, in the IP world user applications typically have OSI session, presentation, and application services built in, so it is hard to differentiate between user application and services above the Transport Layer. For this reason it is commonplace to see TCP/IP stack models with user applications placed at Layers 5 through 7. Example applications include the file-transfer utilities (FTP and TFTP), electronic mail (SMTP), remote virtual terminals (Telnet), and smaller utilities such as the finger program.
Transport Layer—The Transport Layer provides end-to-end data delivery. The OSI model's Session and Transport Layers fit into this layer of the TCP/IP architecture. The concept of OSI's session connection is comparable to TCP/IP's socket mechanism. A TCP/IP socket is an end point of communications composed of a computer's address and a specific port on that computer. OSI's Transport Layer has an equivalent in TCP/IP's TCP. TCP provides for reliable data delivery and guarantees that packets of data will arrive in the order they were sent, with no duplicates and with no data corruption.
Network Layer—The Network Layer defines the datagram and handles the routing of datagrams. The datagram is the packet of data manipulated by the IP protocol. A datagram contains the source address, destination address, and data, as well as other control fields. This layer's function is equivalent to that of the OSI's Network Layer. IP is responsible for encapsulating the underlying network from the upper layers. It also handles the addressing and delivery of datagrams.
Data Link and Physical Layer—TCP/IP does not define the underlying network media and physical connectivity; what is running below Layer 3 is largely transparent. IP makes use of existing standards provided by such organizations as the EIA and Institute of Electrical and Electronics Engineers (IEEE), which define standards such as 802.3, Token Ring, RS232, and other electronic interfaces used in data communications.
The movement of an IP datagram through the various layers of a TCP/IP stack is shown in Figure 1.6.
Figure 1.6: Packet flow through the TCP/IP layers.
When a message is sent by an application, it is passed to the Transport Layer (in this example TCP), where the transport header is added (which includes socket information to indicate source and destination transport addresses). Once transport processing is complete, the message is passed to the Network Layer (IP), where the Internet header is added (which includes source and destination IP addresses). The message is eventually passed down to the Physical Layer, where a MAC header is added (which includes source and destination MAC addresses). The message can then be transmitted to a peer system as a frame of bytes, comprising both data and protocol headers. When a frame is received by the peer system, the process is reversed, and the peer application eventually receives the intended data.
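The layering described above can be sketched in a few lines of code. The header layouts below are deliberately simplified stand-ins (ports only at the transport layer, bare addresses at the network and MAC layers), not the full protocol-accurate formats; the point is only to show each layer prepending its own header to the payload handed down from above.

```python
import struct

def add_transport_header(data: bytes, src_port: int, dst_port: int) -> bytes:
    # Simplified transport header: source and destination ports only
    return struct.pack("!HH", src_port, dst_port) + data

def add_network_header(segment: bytes, src_ip: bytes, dst_ip: bytes) -> bytes:
    # Simplified IP header: 4-byte source and destination addresses
    return src_ip + dst_ip + segment

def add_mac_header(packet: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    # Simplified MAC header: 6-byte destination then source address
    return dst_mac + src_mac + packet

message = b"hello"
segment = add_transport_header(message, 1025, 23)           # Transport (TCP)
packet = add_network_header(segment, b"\xc0\xa8\x00\x01",   # Network (IP)
                            b"\xc0\xa8\x00\x02")
frame = add_mac_header(packet, b"\x00" * 6, b"\xff" * 6)    # Data Link

# On the wire the frame is headers-first, application data last
assert frame.endswith(b"hello")
assert len(frame) == 6 + 6 + 4 + 4 + 2 + 2 + len(message)
```

On reception the peer strips each header in the reverse order, exactly as Figure 1.6 shows.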
The Data Link Layer is not strictly part of the TCP/IP protocol suite; however, since it underpins the IP layer, it is important to understand some of the key elements involved. As illustrated in Figure 1.5, the Data Link Layer is divided into two sublayers: the Medium Access Control (MAC) layer lies below a Logical Link Control (LLC) layer.
The MAC sublayer addresses the requirement for upper-layer insulation from various media types. For example, in an IEEE 802.3/Ethernet LAN environment, features such as error detection, framing, collision handling, and binary backoff are handled at this level. Source and destination station addresses (sometimes called physical addresses) are also contained within the MAC header. Many of the standards for local area networking at this level are standardized by the IEEE rather than the IETF.
The LLC sublayer defines services that enable multiple higher-level protocols such as IP, IPX, XNS, or NetBIOS to share a common data link. There are three classes of LLC, depending upon the quality of service required, as follows:
LLC Type 1—offers a simple best-effort datagram service. There is no error control, no sequencing, no flow control, and no buffering. LLC 1 merely provides source and destination LSAPs for multiplexing and demultiplexing higher-layer protocols.
LLC Type 2—offers a connection-oriented service and is a superset of LLC 1. LLC 2 features sequence numbering and acknowledgments, buffering, and separate data and control frames. LLC 2 is based on HDLC, designed to run over less reliable point-to-point links.
LLC Type 3—offers a semireliable service with fewer overheads than LLC 2.
LLC Type 1 is the most common implementation in the LAN environment, since higher-level protocols such as TCP are expected to provide guaranteed services, and the physical medium is generally highly reliable. LLC Type 2 is commonly used to support applications that do not offer complete reliability at the Transport Layer, such as older IBM-style services (it is even possible to run X.25 over LANs via LLC Type 2). LLC Type 3 is primarily used on Token Bus for process control applications.
LLC defines three one-byte fields: the Destination Service Access Point (DSAP), the Source Service Access Point (SSAP), and the Control (CTL) field. The two SAP fields are used to identify the next higher-layer protocol above LLC for peer devices. The CTL field indicates the frame type: Unnumbered Information (UI), Exchange Identification (XID), or TEST frames. Note that with LLC Type 2 the CTL field may be one or two bytes long, depending upon whether sequence numbers are required. For further information on the Data Link Layer the interested reader is referred to reference [7].
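Extracting the three LLC fields from a received frame is straightforward. The sketch below parses them from the front of an 802.2 payload; the example bytes are the familiar SNAP encoding (DSAP = SSAP = 0xAA, CTL = 0x03 for a UI frame), used here purely as illustrative input.

```python
import struct

def parse_llc(frame: bytes):
    # The LLC header is three one-byte fields at the start of the frame payload
    dsap, ssap, ctl = struct.unpack("!BBB", frame[:3])
    return dsap, ssap, ctl

dsap, ssap, ctl = parse_llc(b"\xaa\xaa\x03" + b"upper-layer data")
assert (dsap, ssap, ctl) == (0xAA, 0xAA, 0x03)   # SNAP, UI frame
```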
The Internet Layer corresponds to Layer 3 on the OSI model. It defines the datagram and handles the routing of those datagrams. IP is the most important protocol of the TCP/IP protocol suite, because it's used by all other TCP/IP protocols and all data must flow through it. IP is also considered the building block of the Internet. Some of the key components are described in the following text.
IP is a connectionless Network Layer protocol, which means that no end-to-end connection or state is required before data are transmitted (and there are no sequence and acknowledgment numbers to maintain). This is in contrast to a connection-oriented protocol that exchanges control information between hosts to establish a reliable connection before data are transmitted. IP does not guarantee reliable data delivery; packets could arrive at their destination out of order, duplicated, or not at all. IP relies on higher-level protocols, such as TCP, to provide reliability and connection control.
The basic format of the IPv4 datagram is illustrated in Figure 1.7(a). Each datagram, or packet of data, has a source and destination address. Routing of data is done at the datagram level. As a datagram is routed from one network to another, it may be necessary to break the packet into smaller pieces. This process is called fragmentation, and it's also the responsibility of the IP layer. Fragmentation is required on some internetworks because the many hardware components that make up the network have different maximum packet sizes. IP must also reassemble the packets on the receiving side so that the destination host receives the packet as it was sent.
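The fragmentation arithmetic can be illustrated with a short sketch. As in the real protocol, fragment offsets are expressed in 8-byte units, so every non-final fragment must carry a multiple of 8 data bytes; the header length and MTU values are typical but the function is a simplification, not a full IPv4 implementation.

```python
def fragment(payload: bytes, mtu: int, header_len: int = 20):
    """Split a datagram payload to fit an MTU, mimicking IPv4 fragmentation."""
    max_data = (mtu - header_len) // 8 * 8   # round down to an 8-byte multiple
    fragments = []
    offset = 0
    while offset < len(payload):
        chunk = payload[offset:offset + max_data]
        more = (offset + len(chunk)) < len(payload)   # More Fragments flag
        fragments.append({"offset_units": offset // 8, "mf": more, "data": chunk})
        offset += len(chunk)
    return fragments

frags = fragment(b"x" * 4000, mtu=1500)        # e.g., crossing an Ethernet link
assert [f["offset_units"] for f in frags] == [0, 185, 370]
assert b"".join(f["data"] for f in frags) == b"x" * 4000   # reassembly
assert frags[-1]["mf"] is False and all(f["mf"] for f in frags[:-1])
```

A 4,000-byte payload entering a 1,500-byte MTU link thus becomes fragments of 1,480, 1,480, and 1,040 data bytes, which the destination host reassembles.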
Figure 1.7: (a) IP version 4, (b) IP version 6 message formats.
A new version of IP is currently being introduced (IP version 6, or IPv6). This version extends the addressing fields of IP and also changes the way Type of Service (ToS) is implemented to assist with Quality of Service (QoS) requirements (see Figure 1.7[b]). For complete details the interested reader is referred to references [4, 6].
Because of the layering of the protocol stack, Physical Layer entities are insulated from Network Layer entities. This means that physical network hardware (such as the Ethernet adapter card in your PC) does not understand how to reach another network-attached system using the remote system's IP address, at least not without assistance. In Figure 1.6 the transmitting station on the left does not know initially how to reach the station on the right, since it only has a destination IP address to work with (i.e., there is no destination MAC address in the message because at this stage that is unresolved).
The Address Resolution Protocol (ARP) is used to create a dynamic map of the IP addresses associated with specific physical addresses used by network hardware. ARP operates by broadcasting a message onto the local network, asking for the owner of a certain IP address to respond with its hardware (MAC) address. If the host with the designated IP address is listening, it returns a message to the source, listing its physical MAC address. All other systems that receive the broadcast ignore it. Once the correct addressing details are received, they can be stored locally in an ARP cache, so that future messages can be sent without having to requery the network. Note that ARP operates within a broadcast domain, since broadcasts are not forwarded by routers (at least not by default). For complete details the interested reader is referred to references [4, 6].
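The cache lookup side of this process can be sketched as follows. This is a minimal illustration of an ARP cache (mappings learned from replies, consulted before any broadcast is needed); the addresses are invented for the example, and a real stack would also age entries out over time.

```python
# Minimal ARP cache sketch: IP-to-MAC mappings keyed by IP address string
arp_cache = {}

def learn(ip, mac):
    arp_cache[ip] = mac           # store a mapping taken from an ARP reply

def resolve(ip):
    if ip in arp_cache:
        return arp_cache[ip]      # cache hit: no broadcast required
    # A real stack would broadcast "who has <ip>?" here and await a reply
    return None

learn("192.168.0.2", "00:1b:44:11:3a:b7")
assert resolve("192.168.0.2") == "00:1b:44:11:3a:b7"
assert resolve("192.168.0.9") is None   # miss: would trigger an ARP broadcast
```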
The Internet Control Message Protocol (ICMP) is a low-level diagnostic protocol used primarily by the network to report failures or assist in resolving failures. ICMP runs over IP and is an integral part of IP operations. ICMP must, therefore, be implemented by every IP-enabled system. ICMP is widely used to perform flow control, error reporting, routing manipulation, and other key maintenance functions. Network engineers make extensive use of the ping utility, which uses ICMP's echo feature to probe remote IP systems for reachability and response times. A successful response from ping indicates that network routing is operational between the two nodes and that the remote node is alive. For complete details the interested reader is referred to references [4, 6].
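The echo message that ping sends is simple enough to build by hand. The sketch below constructs an ICMP echo request and computes the RFC 1071 Internet checksum over it; actually transmitting it requires a raw socket (and usually administrator privileges), so only the message construction is shown here.

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:
        data += b"\x00"                            # pad to a whole 16-bit word
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
    return ~total & 0xFFFF

def build_echo_request(ident: int, seq: int, payload: bytes) -> bytes:
    # ICMP header: type (8 = echo request), code, checksum, identifier, sequence
    header = struct.pack("!BBHHH", 8, 0, 0, ident, seq)
    csum = internet_checksum(header + payload)
    return struct.pack("!BBHHH", 8, 0, csum, ident, seq) + payload

msg = build_echo_request(0x1234, 1, b"ping")
# Running the checksum over a correctly checksummed message yields zero,
# which is exactly how a receiver validates it
assert internet_checksum(msg) == 0
```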
IP is responsible for getting datagrams from system to system. The Transport Layer is responsible for delivering those data to the appropriate program or process on the destination computer. The two most important protocols in the Transport Layer related to IP are User Datagram Protocol (UDP) and Transmission Control Protocol (TCP). UDP provides unreliable connectionless datagram delivery; TCP provides a reliable connection-oriented delivery service with end-to-end error detection and correction. To facilitate the delivery of data to the appropriate program on the host computer, the concept of a port is used. In both TCP and UDP a port is a 16-bit number that identifies an endpoint for communication within a program. An IP address and port combination taken together uniquely identify a network connection into a process (the socket paradigm developed by the University of California at Berkeley makes the use of IP addresses and ports more intuitive).
Services built on top of UDP or TCP typically listen on well-known ports. These are special reserved ports, which are publicly known and enable clients to access common services on servers without having to interrogate some form of directory service beforehand. For example, clients wishing to connect to a server using the Telnet service simply use the destination port 23.
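The client/server port model is easy to demonstrate over the loopback interface. Binding a true well-known port such as 23 normally requires administrator privileges, so the sketch below lets the operating system assign the server an ephemeral port instead; the trivial uppercase-echo "service" stands in for a real protocol such as Telnet.

```python
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]           # the port clients must connect to

def serve_once():
    conn, _ = server.accept()
    conn.sendall(conn.recv(1024).upper())   # trivial service: uppercase echo
    conn.close()

threading.Thread(target=serve_once).start()

# The client identifies the service purely by (address, port) — the socket
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"hello, server")
reply = client.recv(1024)
client.close()
server.close()
assert reply == b"HELLO, SERVER"
```

With a genuinely well-known port, the only change is that the client writes the fixed number (e.g., 23 for Telnet) instead of discovering it at run time.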
The User Datagram Protocol (UDP) allows data to be transferred over the network with a minimum of overhead. UDP provides unreliable data delivery; data may be lost, duplicated, or arrive out of order. UDP is, therefore, suitable for applications that either do not require a connection state or cannot guarantee it (such as SNMP). UDP is also very efficient for transaction-oriented applications, when error-handling code resides within the application. It may also be used for applications such as IP multicasting. Figure 1.8 shows the simple format of a UDP header. The header contains a 16-bit source and destination port. For complete details the interested reader is referred to references [4, 6].
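The minimal overhead is visible in the header itself: the entire UDP header is just four 16-bit fields. The sketch below packs one by hand; a checksum of zero means "not computed," which IPv4 permits, and the example ports are illustrative.

```python
import struct

def build_udp(src_port: int, dst_port: int, payload: bytes) -> bytes:
    # UDP header: source port, destination port, length (header + data), checksum
    length = 8 + len(payload)                 # the UDP header is always 8 bytes
    header = struct.pack("!HHHH", src_port, dst_port, length, 0)
    return header + payload

dgram = build_udp(5000, 161, b"snmp-query")   # 161 is SNMP's well-known port
src, dst, length, csum = struct.unpack("!HHHH", dgram[:8])
assert (src, dst, length, csum) == (5000, 161, 18, 0)
```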
Figure 1.8: UDP header format.
Transmission Control Protocol (TCP) is a reliable, byte-oriented, connection-oriented transport protocol, which enables data to be delivered in the order they were sent and intact. TCP is connection oriented, because the two entities communicating must first perform a handshake before data transmission can begin, and the connection state is maintained until explicitly terminated or timed out. The handshake phase is used by the transmitter to establish whether or not the receiver is able to accept data. The flag fields, illustrated in Figure 1.9, are key to connection establishment and release. A connection is initiated with a SYN (S) bit set, responded to with the SYN and ACK (A) bits set, and completed with an ACK bit; hence, the term three-way handshake. Connection release is achieved in the same way, except that the FIN (F) bit is used instead of SYN.
Figure 1.9: TCP message format.
Figure 1.9 shows the format of a TCP header. The header contains a 16-bit source and destination port (as in UDP); however, the header also includes sequence and acknowledgment fields (to ensure that packet ordering and dropped packets are identified). Reliable delivery is implemented through a combination of positive acknowledgments (acks) and retransmission timeouts. TCP also includes a checksum with each packet transmitted. On reception, a checksum is generated and compared with the checksum sent in the packet header. If the checksums do not match, the receiver does not acknowledge the packet, and the transmitter automatically retransmits the packet. TCP is an end-to-end protocol; therefore, the sender relies on feedback from the receiver to implement congestion control and flow control. For complete details the interested reader is referred to references [4, 6].
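The fields just described can be pulled out of a raw header with a short parser. The sketch below handles the fixed 20-byte portion of the TCP header and extracts the SYN, ACK, and FIN flag bits used in connection setup and release; the hand-built SYN segment is illustrative test input, not captured traffic.

```python
import struct

def parse_tcp(header: bytes) -> dict:
    # Fixed TCP header: ports, sequence, acknowledgment, offset/flags,
    # window, checksum, urgent pointer
    src, dst, seq, ack, off_flags, win, csum, urg = struct.unpack(
        "!HHIIHHHH", header[:20])
    flags = off_flags & 0x01FF               # low 9 bits carry the flags
    return {
        "src": src, "dst": dst, "seq": seq, "ack": ack,
        "SYN": bool(flags & 0x02), "ACK": bool(flags & 0x10),
        "FIN": bool(flags & 0x01),
    }

# A hand-built SYN segment: the first step of the three-way handshake
# (data offset 5 = a 20-byte header, SYN bit set)
syn = struct.pack("!HHIIHHHH", 1025, 23, 100, 0, (5 << 12) | 0x02, 8192, 0, 0)
parsed = parse_tcp(syn)
assert parsed["SYN"] and not parsed["ACK"] and not parsed["FIN"]
assert parsed["seq"] == 100 and parsed["dst"] == 23
```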
As we have seen, TCP and UDP support different types of service, and most applications are implemented to use only one or the other (some may sit directly on IP). TCP is used where a reliable stream delivery is required, especially if an application needs to run efficiently over long-haul circuits. UDP is best for datagram services, such as simple best-effort polling applications (SNMP) or multicast distribution from a news or trading system. If you need more reliability with UDP, then this must be built into the application running over UDP. UDP is also useful for applications requiring efficiency over fast networks with low latency. Many network applications are now supported over these transport systems. Some applications (such as Telnet and FTP) have existed since the start of Internet technology. Others (such as X Windows and SNMP) are relatively new. The following is a brief description of the more widely used applications.
Telnet is a widely used virtual terminal protocol, which allows users on a local host to remotely access another host as if they were locally attached. Telnet runs over TCP and is typically invoked by the command-line interface of the host operating system. For example, on the command line the user could type telnet mitch and receive a login prompt from the computer called mitch (alternatively, the user could type telnet 193.125.66.2, for example, if the IP address of the remote host is known). Implementations of Telnet are available on many operating systems, and interoperability is normally taken for granted. For instance, a Telnet client may be running on a DEC VAX/VMS and a Telnet server on BSD UNIX.
File Transfer Protocol (FTP) runs over TCP and is widely used. The basic operation and appearance are similar to Telnet, but with additional commands to move around directories and send or receive files. The user must be identified to the server with a user ID and a password before any data transfer can proceed.
Trivial File Transfer Protocol (TFTP) is a file transfer application implemented over the Internet UDP layer. It performs a disk-to-disk data transfer, as opposed to, for example, the VM SENDFILE command, which the TCP/IP world regards as a mailing function, where data are sent out to a mailbox (or reader in the case of VM). TFTP can only read/write a file to/from a server and, therefore, is primarily used to transfer files among personal computers. TFTP allows files to be sent and received but does not provide any password protection (or user authentication) or directory capability. TFTP was designed to be small enough to reside in ROM and is widely used in conjunction with BOOTP to download operating code and configuration data required to boot a diskless workstation or thin client.
Remote Execution Protocol (REXEC) is a protocol that allows users to issue remote commands to a destination host implementing the REXEC server. The server performs an automatic login on a local machine to run the command. It is important to note that the command issued cannot be an interactive one; it can only be a batch process with a string output. For remote login to interactive facilities, Telnet should be used.
Remote Procedure Call (RPC) is an API for developing distributed applications, allowing them to call subroutines that are executed at a remote host. It is, therefore, an easy and popular paradigm for implementing the client/server model of distributed computing. A request is sent to a remote system (RPC server) to execute a designated procedure, using arguments supplied, and the result returned to the caller (RPC client). There are many variations and subtleties, resulting in a variety of different RPC protocols.
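The marshalling step at the heart of RPC can be made visible without any network traffic using Python's standard xmlrpc module, one of the many RPC variants mentioned above. The method name "add" and its dispatch table are purely illustrative; in a real system the request would travel to a remote server rather than being decoded in the same process.

```python
import xmlrpc.client

# Client side: marshal the procedure name and arguments into a request
request = xmlrpc.client.dumps((2, 3), methodname="add")

# Server side: unmarshal, dispatch to a local procedure, marshal the result
params, method = xmlrpc.client.loads(request)
assert method == "add" and params == (2, 3)
result = {"add": lambda a, b: a + b}[method](*params)
response = xmlrpc.client.dumps((result,), methodresponse=True)

# Back at the caller: the returned value is unmarshalled transparently
assert xmlrpc.client.loads(response)[0] == (5,)
```

The caller never sees any of this plumbing; to the application, the remote call looks like an ordinary local subroutine call, which is precisely RPC's appeal.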
Remote shell ("r" series commands) is a family of remote UNIX commands, which includes the remote copy command, rcp; the remote shell command, rsh; the remote who command, rwho; and others. These commands are designed to work between trusted UNIX hosts, since little consideration is given to security. They are, however, convenient and easy to use. For example, to execute the cc myprog.c command on a remote computer called target, you would type rsh target cc myprog.c. To copy the myprog.c file to target, you would type rcp myprog.c target. To log in to target, you would type rlogin target.
Network File System (NFS) was developed by Sun Microsystems and uses Remote Procedure Calls (RPCs) to provide a distributed file system. The NFS client enables all applications and commands to use the NFS mounted disk as if it were a local disk. NFS runs over UDP and is useful for mounting UNIX file systems on multiple computers; it allows authorized users to readily access files located on remote systems. This enables thin clients or diskless workstations to access a server's hard disk as if the disk were local, and a single instance of a database on a mainframe may be used (mounted) by other mainframes. NFS can be problematic in internetworks, since it can add significant load to a network and is inefficient over slow WAN links.
Simple Mail Transfer Protocol (SMTP) provides a store-and-forward service for electronic mail messages (see [8] for details of these messages). Mail is sent from the local mail application (e.g., a Netscape mail client) to an SMTP server application running on a mail server (e.g., Microsoft Exchange mail server). The server stores the mail until successfully transmitted. In an Internet environment, mail is typically handled by a number of intermediate relay agents, starting at the Internet Service Provider (ISP).
Domain Name System (DNS) provides a dynamic mapping service between host names and network addresses and is used extensively on the Internet.
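Applications reach DNS through the resolver library rather than speaking the protocol directly. The sketch below uses "localhost" so the lookup succeeds without external DNS; a real query (say, for a public host name) goes through the configured DNS servers via exactly the same call.

```python
import socket

# Map a host name to its network addresses via the system resolver
infos = socket.getaddrinfo("localhost", None)
addresses = {info[4][0] for info in infos}

assert addresses, "resolver returned no addresses"
# localhost conventionally maps to the IPv4 and/or IPv6 loopback address
assert "127.0.0.1" in addresses or "::1" in addresses
```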
Simple Network Management Protocol (SNMP) is the de facto network management protocol for TCP/IP-based internetworks. SNMP typically runs over UDP and is used to communicate management information between a network management system (NMS) and network management agents running on remote network devices. The NMS may modify (set) or request (get) information from the Management Information Base (MIB) stored on each network device. Network devices can send alerts (traps) asynchronously to inform management applications about anomalous or serious events. SNMP is covered in detail in Chapter 9.
The X Windows system is a popular windowing system developed by Project Athena at the Massachusetts Institute of Technology (MIT) and is implemented on a number of workstations. The X Windows system uses the X Windows protocol on TCP to display graphical windows on a workstation bitmapped display. X Windows provides a powerful environment for designing the client user interface. It provides simultaneous views of local or remote processes and allows the application to run independently of terminal technology. For example, X Windows can run on OS/2, Windows, DOS, and X Windows terminals.
For further information on TCP/IP and its applications refer to [4, 6]. For detailed internal protocol information about TCP/IP and key applications refer to [5].