Chapter 6 - Supporting Popular WAN Technologies

  Objectives  
  This chapter will look at how Cisco routers are configured for the most popular WAN technologies used today. We will:  
    Look at how the underlying technology works.  
    Examine the Cisco router configuration commands available by looking at examples of how these commands can be used in real-world configurations.
Frame Relay  
Frame relay is a layer 2 WAN protocol and, as such, does not understand the concept of network numbers. In a LAN, layer 2 addressing is normally defined by MAC addresses. In frame relay, addressing is defined by Data Link Connection Identifiers (DLCI, pronounced "del-see" by those familiar with frame relay implementations). DLCI numbers are not used as end-to-end destination addresses in the way LAN-based MAC addresses are. A DLCI number has only local significance; it does not identify a device across the frame relay network.  
  Frame relay is a statistically multiplexed service. That means it allows several logical connections (often referred to as channels) to coexist over the same physical connection and allocates bandwidth dynamically among all users of that link. This is in contrast to a Time Division Multiplexed (TDM) service, which allocates a fixed amount of bandwidth to multiple channels on the same line. When frame relay was first introduced, many network engineers thought of it as a cut-down version of X.25. This was because frame relay is similar to X.25 in many ways, except it does not provide the same level of error correction. X.25 was designed as a packet switching network technology at a time when the wide area network links available were mainly analog lines that were prone to periods of poor transmission quality, and that introduced errors into packets transmitted across a network.  
  When frame relay was introduced, more and more digital circuits were available that provided better-quality transmission and, on average, higher reliability. Also, higher-level protocols such as TCP that performed error correction and flow control were becoming increasingly popular. The designers of frame relay decided, therefore, to cut out all the link-level acknowledgments and other overhead associated with X.25 connections that were there to deal with unreliable links.  
  Frame relay performs as much error checking as does Ethernet in that a Frame Check Sequence (FCS) derived from a Cyclic Redundancy Check (CRC) calculation is appended to each frame sent, which is checked by the receiving station. This FCS allows a receiving station to disregard a frame that has been altered in transit; however, neither frame relay nor Ethernet will re-request the damaged frame. Frame relay, therefore, offers better utilization of available bandwidth and is faster at transferring data than X.25, because it does not have to generate, receive, or process acknowledgments.  
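The FCS mechanism can be sketched in a few lines of Python. The CRC variant used below (CRC-16/CCITT via `binascii.crc_hqx`) is an illustrative stand-in rather than the exact frame relay polynomial handling; the point is the behavior described above: a receiver verifies the appended check value and silently discards a frame that fails, with no retransmission request.

```python
import binascii

def append_fcs(payload: bytes) -> bytes:
    # CRC-16/CCITT here is an assumption for demonstration; frame relay
    # specifies its own 16-bit CRC handling at the link layer.
    fcs = binascii.crc_hqx(payload, 0xFFFF)
    return payload + fcs.to_bytes(2, "big")

def fcs_ok(frame: bytes) -> bool:
    # Receiver recomputes the CRC over the payload and compares it
    # with the transmitted FCS. A mismatch means the frame is discarded.
    payload, fcs = frame[:-2], frame[-2:]
    return binascii.crc_hqx(payload, 0xFFFF).to_bytes(2, "big") == fcs

frame = append_fcs(b"user data over frame relay")
print(fcs_ok(frame))                                  # True: intact frame

damaged = bytes([frame[0] ^ 0x01]) + frame[1:]
print(fcs_ok(damaged))                                # False: frame is dropped
```

Note that, exactly as with Ethernet, the receiver's only action on failure is to drop the frame; recovery is left to higher layers.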
  Frame relay does rely on the applications communicating over a frame relay link to handle error recovery. If an application uses a connection-oriented protocol such as TCP for its Transport layer protocol, TCP will handle error recovery and flow control issues. If an application uses a connectionless protocol such as UDP for its Transport layer protocol, specific programming within the application must be coded to handle error recovery and flow control.  
  X.25 as a specification merely dealt with how a DTE device will communicate with a DCE device and made no mention of how packets would get routed through a network. That's the same with frame relay. Frame relay networks may use a variety of mechanisms to route traffic from source to destination, and each one is proprietary to a vendor or group of vendors. What this means to us is that, typically, a Cisco router will be configured to connect to a public frame relay service without regard as to how the packets that are sent over that network are routed.  
  Frame relay and X.25 both operate using the concepts of a permanent virtual circuit (PVC) and a switched virtual circuit (SVC). An SVC uses a Call Setup procedure to establish a connection from source to destination in the same way a telephone does. A PVC is permanently connected between source and destination and operates in much the same way as a leased line. Although frame relay originally was proposed as a standard for packet transmission over ISDN using SVCs, it has gained far wider acceptance as a WAN technology using PVCs on a shared public network.  
  Let me say first of all that I see little point in implementing frame relay on a private network. Frame relay makes the most sense when the same network is going to be used by many organizations and sharing of the bandwidth between locations is called for. I therefore will restrict the rest of this discussion with the assumption that we are talking about connecting Cisco routers at one or more locations to a public shared frame relay service.  
  Before we look into the details of frame relay, let's look at when frame relay is appropriate and identify some of its pitfalls. Frame relay is ideally suited to networks that fit the following criteria:  
    Many geographically dispersed locations need a permanent connection to a central site, but cannot cost-justify the provision of a leased circuit from each of the remote sites to the central location.  
    The traffic to be transported tends to come in bursts or spurts, rather than in a constant stream.  
    Applications accessed over the frame relay connection use a connection-oriented protocol to handle error recovery and flow control.  
    Unpredictable application response time is not a big issue.  
    Remote sites change location, or new sites are added on a regular basis.  
  If a network's goals match closely with the above, frame relay is a technology to consider. What makes frame relay attractive from the telephone company's point of view is that bandwidth can be allocated among many customers. When viewed statistically for a large population on the frame relay network, this makes sense. The probability that thousands of customers all want to use the network to the full extent at the same time is statistically quite small. Therefore, if everyone takes their proper turn at the bandwidth, the telephone company can sell more bandwidth than is available throughout the network. This enables the telephone company to offer a cheaper service via a shared frame relay network, but one that does not come with the same guarantees of throughput. This cheaper service often is billed at the same rate irrespective of location, and so is particularly attractive for connecting remote sites that may be thousands of miles away.  
  To counter user concerns over throughput guarantees, there is a frame relay feature called the Committed Information Rate (CIR). Prior to implementation of CIR, a user would get, for example, a 64 kbps line into a commercial frame relay service, but the throughput would vary, depending on other customer utilization of the frame relay network. Sometimes the throughput was unacceptable. The CIR guarantees a customer that the throughput on any given link would not drop below the CIR rate. In addition to the CIR, an agreement with a frame relay service provider generally allows a customer to have bursts of traffic up to another specified level, typically the speed of the connection into the frame relay service.  
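The CIR and burst agreement can be modeled as a simple policing decision per measurement interval. The sketch below is a simplification: real switches measure continuously in token-bucket fashion, and the committed burst (Bc) and excess burst (Be) sizes, here derived from an assumed excess rate `eir` and interval `tc`, are negotiated with the provider. Traffic above the committed burst is typically carried but marked discard eligible (DE); traffic beyond the excess burst may be dropped at ingress.

```python
def police(bits_in_interval: int, cir: int, eir: int, tc: float = 1.0) -> str:
    """Classify one interval's traffic against the frame relay contract.

    cir and eir are rates in bits per second; tc is the measurement
    interval in seconds. The single-interval model is an illustrative
    simplification, not a vendor's actual policing algorithm.
    """
    bc = cir * tc        # committed burst: bits guaranteed to be carried
    be = eir * tc        # excess burst: accepted, but marked discard-eligible
    if bits_in_interval <= bc:
        return "forwarded"
    if bits_in_interval <= bc + be:
        return "forwarded, marked DE (dropped first under congestion)"
    return "dropped at ingress"

# A 64 kbps CIR with bursts allowed up to a 128 kbps access line:
print(police(50_000, cir=64_000, eir=64_000))    # within the CIR
print(police(100_000, cir=64_000, eir=64_000))   # in the burst range
print(police(200_000, cir=64_000, eir=64_000))   # beyond Bc + Be
```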
  Shared frame relay services are certainly a cost-effective way to provide remote branches with occasional access to central e-mail, gateway, and application servers for internal users who can accept occasional slow response times. Frame relay is not appropriate for delivering mission-critical, bandwidth-intensive applications to external customers. If a frame relay service is to be used for an application that needs guaranteed throughput, the CIR must be set to the guaranteed bandwidth needed. This can be as expensive as getting dedicated bandwidth from the same telephone company.  
  There is a final concern I want to share with you before we move on to look at this technology in more detail. In Fig. 6-1, we see a typical frame relay implementation for a company with five remote branches.  
   
  Figure 6-1: Typical frame relay network configuration  
  Router 1 connects the central site with the servers to a commercial frame relay service. Routers 2 through 6 connect remote branches to the same frame relay service. Frame relay is not a broadcast medium, so any routing updates, or SAP updates for IPX traffic, need to be sent point-to-point to each location. This can tax the link from router 1 into the frame relay network as the number of branches grows. It is not uncommon to find 50 percent or more of a central site router's bandwidth into the frame relay service consumed with routing information packets. This situation can be avoided, as we shall see later, but requires careful design and a good knowledge of how the interconnected systems work. Frame relay should never be considered a plug-and-play technology.  
  Frame Relay Terms  
  The first frame relay term you need to understand, the DLCI, already has been introduced. A DLCI is supplied for each end of a PVC by the frame relay provider. A DLCI number is used to identify each PVC defined at a location. At the central location shown in Fig. 6-1, there are DLCI numbers 1, 4, 6, 8, and 10. It is best to think of each DLCI as identifying a pipe that leads to a specific protocol address. For example, DLCI 1 leads to the IP address of Serial 0 on router 2, DLCI 4 leads to the IP address of Serial 0 on router 3, and so forth.  
  If you are implementing a TCP/IP solution over frame relay, remember that frame relay is a layer 2 protocol. All routers connected together via the frame relay cloud are peers. The key difference between a frame relay cloud and a LAN is that the frame relay cloud does not support broadcasts to all destinations as would a LAN. Referring to Fig. 6-1, this implies that all the Serial 0 ports on the routers shown would be in the same IP subnet and all the Ethernet 0 ports shown would have their own subnets. We will look at this in more detail in the section on "Configuring Frame Relay Features."  
  As with most networking technologies, there are several configuration options, and here we will look at the most common.  
  Basic frame relay is depicted in Fig. 6-2 and merely connects two locations together over a PVC. This is not very different from connecting these sites together via a leased line. The only difference is the fact that the bandwidth between the two sites is shared with other users of the frame relay network.  
   
  Figure 6-2: Simple frame relay connectivity  
  In this figure, router 1 is connected to router 2 via a PVC, which is defined within the frame relay cloud as existing from DLCI 1 to DLCI 2. In basic frame relay, other locations within the frame relay cloud could use these DLCI numbers because a DLCI has only local significance. This type of frame relay use is limited and has been superseded by use of what is known as the LMI extensions.  
  LMI stands for Local Management Interface and provides additional frame relay features that include:  
    Use of Inverse ARP to automatically determine the protocol address of the device on the remote end of a DLCI.  
    Simple flow control to provide XON/XOFF-type flow control for the interface as a whole. This is intended for applications communicating over frame relay that cannot use the congestion flags sent by frame relay networks.  
    Multicasting to allow the sender to send one copy of a frame that will be delivered to multiple destinations. This is useful for routing update and address resolution protocol traffic.  
    Virtual Circuit Status messaging to allow LMI-enabled network connections to maintain information about the status of PVCs across the network, notifying other nodes if one node becomes unreachable.  
  Cisco joined with Northern Telecom, Stratacom, and DEC to define an LMI standard to deliver these benefits in anticipation of an open standard for LMI. The other commonly implemented LMI type is now ANSI, which was defined by that standards body some time after the four vendors above released theirs.  
  In addition to the LMI type, you can set the frame relay encapsulation to either Cisco or IETF if you are connecting to another vendor's equipment across a frame relay network. Cisco did not create these additional options to make an open standard proprietary or to make life difficult. Rather, Cisco created these standards in order to deliver benefits to users prior to standards bodies making a workable definition available to those wanting to deploy frame relay networks.  
  Configuring Frame Relay Features  
  As stated previously, frame relay is a technology that connects two end points across a network. These end points are identified within the network by DLCI numbers; the DLCI-to-DLCI connection is known as a PVC. We will examine configuration of a central Cisco router serial interface, first in a frame relay point-to-point connection, then in a multipoint connection, and finally with sub-interfaces. We also will explain how inverse ARP simplifies configuration and why the Split Horizon algorithm is not enabled on frame relay ports (unless sub-interfaces are used).  
  Basic Configuration.     The simplest example of frame relay is that shown in Fig. 6-2. In that case, the frame relay provider assigns a DLCI address of 1 to the location of router 1, and 2 to the location with router 2. Let's look at how we build the configuration for the serial port of router 1.  
  In interface configuration mode for the Serial 0 port of router 1, type the following:  
  Router1(config-int)#ip address 132.3.8.7 255.255.255.0  
  Router1(config-int)#encapsulation frame-relay  
  This defines frame relay encapsulation for the Serial 0 port. The only optional argument that can be supplied on this command is ietf, which would configure the interface to use the IETF rather than the Cisco encapsulation. Your frame relay provider will inform you if this argument is necessary.  
  The next configuration entry will be to define the LMI type. This command can have one of three values. The default is Cisco, while optional arguments can specify ANSI or q933a, the latter being the ITU standard. Again, your frame relay provider should let you know which to use.  
  Router1(config-int)#frame-relay lmi-type ansi  
  Next we have to tell the router which destination DLCI should be used to reach a given IP address. This discussion assumes that manual configuration of frame relay maps is necessary; in a subsequent section we will examine how inverse ARP makes this unnecessary. For router 1, the 132.3.8.0 subnet is reachable through the Serial 0 interface, so all packets for that subnet are sent out Serial 0. The frame relay supplier will have set up the PVC to send anything that originates at one end of the PVC out the other end.  
  The potential exists for many PVCs to be associated with one physical interface, so we must tell the serial port which DLCI number to use to get to which protocol address destination. Therefore, we have to tell it that to reach IP address 132.3.8.9, it will use DLCI 1. With this DLCI information, the frame relay network can deliver the packet to its desired destination. This configuration is achieved with the following command:  
  Router1(config-int)#frame-relay map ip 132.3.8.9 1 broadcast  
  The argument broadcast enables router 1 to send broadcast routing updates to router 2 through this PVC. If the Serial 0 port on router 2 were to be configured in the same way, the two routers could communicate via frame relay. With only one PVC, this level of configuration may seem like overkill (and it is), but the next example will show why it is necessary. Later in this chapter, when we use our three lab routers to configure a test frame relay network, we will see that a properly configured network and interface will remove the need to enter multiple frame-relay map commands.  
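Conceptually, the map statement builds a lookup table from next-hop protocol address to local DLCI. A minimal Python model of that table, using the addresses from the example above, makes the "pipe" idea concrete:

```python
# Next-hop IP address -> local DLCI, mirroring the command
# "frame-relay map ip 132.3.8.9 1 broadcast" on router 1.
frame_relay_map = {"132.3.8.9": 1}

def outbound_dlci(next_hop_ip: str) -> int:
    """Return the local DLCI (the "pipe") a packet must be placed in."""
    try:
        return frame_relay_map[next_hop_ip]
    except KeyError:
        # No PVC is mapped for this destination; the packet cannot
        # be delivered over this frame relay interface.
        raise LookupError(f"no PVC mapped for {next_hop_ip}") from None

print(outbound_dlci("132.3.8.9"))   # 1
```

Each additional map statement simply adds another entry to this table, which is why the multipoint configurations that follow grow by one line per remote site.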
  This is all well and good, but it is not taking advantage of one of the main benefits of frame relay, which is the ability to multiplex many links onto one. This feature tends to be useful for a central router that must connect to many remote sites, a situation shown in Fig. 6-1. To enable router 1 in Fig. 6-1 to reach routers 2 through 6, the configuration we have so far can be extended with additional map statements. The first step, however, will be to buy the additional PVCs from the frame relay carrier and obtain the local DLCI numbers that identify PVCs to all the remote locations. Assume we are delivered the following DLCI assignments and IP address allocations:  
  Router    Serial 0 IP Address    DLCI at router 1    Remote DLCI  
  1         132.3.8.7              -                   -  
  2         132.3.8.9              1                   3  
  3         132.3.8.8              4                   5  
  4         132.3.8.10             6                   7  
  5         132.3.8.11             8                   9  
  6         132.3.8.12             10                  11  
  To reach all remote routers, router 1 would need the configuration shown in Fig. 6-3. This shows that router 1 has 5 DLCIs configured, and tells it which DLCI to use to get the packets delivered to the appropriate IP address. At first it may seem that, by addressing packets to a DLCI number that is defined at the router 1 location, the packets are not really being sent anywhere. The best way to think of this, though, is to think of each DLCI as a pipe, and as long as router 1 puts the packet in the correct pipe, the frame relay network will deliver the packet to the correct destination.  
  Inverse ARP.     As you can see, configuring all these frame relay map statements can become a bore, especially if you have upwards of 20 locations. Fortunately there is a mechanism that will save us from all this manual effort, and that is Inverse ARP. Inverse ARP works in conjunction with the LMI to deliver enough information to routers attached to a frame relay network, so that no frame relay map statements need to be manually configured.  
     
     
   interface serial 0  
    ip address 132.3.8.7 255.255.255.0  
    encapsulation frame-relay  
    frame-relay lmi-type ansi  
    frame-relay map ip 132.3.8.9 1 broadcast  
    frame-relay map ip 132.3.8.8 4 broadcast  
    frame-relay map ip 132.3.8.10 6 broadcast  
    frame-relay map ip 132.3.8.11 8 broadcast  
    frame-relay map ip 132.3.8.12 10 broadcast  
  Figure 6-3: Configuration for router 1 in Figure 6-1  
  Upon startup, the LMI will announce to an attached router all the DLCI numbers that are configured on the physical link connecting the router to the network. The router will then send Inverse ARP requests out each DLCI to find out the protocol address configured on the other end of each DLCI's PVC. In this way, a router will generate its own list of what IP addresses are reachable through which DLCI number.  
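This LMI/Inverse ARP exchange can be simulated in a few lines. The callable standing in for the remote end's Inverse ARP reply is, of course, an artifact of the simulation; in reality each remote router answers the request sent down its PVC.

```python
def build_map(lmi_dlcis, inverse_arp_reply):
    """Build the DLCI -> remote protocol address map automatically.

    lmi_dlcis: the DLCIs the LMI announces as configured on the access link.
    inverse_arp_reply: stand-in for the remote end answering an Inverse ARP
    request on a given DLCI.
    """
    return {dlci: inverse_arp_reply(dlci) for dlci in lmi_dlcis}

# Simulated network from Fig. 6-1: the frame relay cloud "answers" each
# Inverse ARP request with the remote serial port's IP address.
remote_ends = {1: "132.3.8.9", 4: "132.3.8.8", 6: "132.3.8.10",
               8: "132.3.8.11", 10: "132.3.8.12"}

dynamic_map = build_map(lmi_dlcis=[1, 4, 6, 8, 10],
                        inverse_arp_reply=remote_ends.get)
print(dynamic_map[4])   # 132.3.8.8 -- no frame-relay map statements needed
```

The result is exactly the table the five manual map statements of Fig. 6-3 would have produced, which is why Inverse ARP removes that configuration burden entirely.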
  Fully Meshed Frame Relay Networks.     For IP implemented over frame relay networks, the Split Horizon rule is disabled. This allows a central router to readvertise routes learned from one remote location on a serial interface to other remote locations connected to the same serial interface. In Fig. 6-1, this means that all routers will end up with entries in their routing tables for net 1 through net 6.  
  Some other protocols, such as those used in Apple networking, will not allow Split Horizon to be turned off, so routes cannot be readvertised out of the interface from which they were learned. To provide full routing capability across a frame relay network with these protocols requires a separate link from each location to every other location. This type of connectivity is referred to as a fully meshed network (Fig. 6-4).  
   
  Figure 6-4: Fully meshed frame relay network  
  A fully meshed network gives each router a specific DLCI number with which to reach every other router. This scheme does allow complete communication between all locations, but requires a large number of PVCs to be bought from the frame relay provider. As the number of nodes on the network increases, the number of PVCs needed grows rapidly (a full mesh of n nodes requires n(n-1)/2 PVCs), which can make this technology uneconomic to deploy. Now that Cisco has made sub-interfaces available, however, we have a method to get around this for these Apple protocols.  
  Frame Relay Sub-Interfaces.     Sub-interfaces can be deployed as point-to-point or multipoint (the default). The simplest way to provide full remote-branch-to-remote-branch connectivity is to implement sub-interfaces in a point-to-point configuration.  
  A sub-interface allows you to effectively split one physical port into multiple logical ports. The advantage this gives is that if you configure the sub-interfaces as point-to-point links, each sub-interface is treated as if it were a separate connection by the layer 3 protocol and each sub-interface will appear as a separate entry in the routing table, with a different subnetwork ID associated with it. A sub-interface configured in multipoint mode behaves the same as the interfaces we have configured so far. Let's look at how a sub-interface configured in point-to-point mode allows us to configure a fully meshed network for protocols that cannot disable Split Horizon, without buying additional PVCs and DLCIs from a frame relay provider.  
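The routing-table effect can be illustrated with a small sketch using the addresses that appear in Fig. 6-6 (the /24 CIDR notation is shorthand for a 255.255.255.0 mask):

```python
# Each point-to-point sub-interface carries its own subnet, so the layer 3
# routing table gains one entry per PVC rather than one entry for the
# whole physical serial port.
subinterfaces = {
    "Serial0.1": {"subnet": "164.8.5.0/24", "dlci": 1},
    "Serial0.2": {"subnet": "164.8.6.0/24", "dlci": 2},
}

routing_table = {cfg["subnet"]: name for name, cfg in subinterfaces.items()}
print(routing_table["164.8.6.0/24"])   # Serial0.2
```

Because each subnet now arrives on what the layer 3 protocol regards as a distinct interface, Split Horizon no longer blocks readvertisement between PVCs.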
  What we want to achieve with a sub-interface is to assign a complete subnet to each PVC, so that a router with multiple PVCs terminating in its serial port (that is, the serial port has multiple DLCI numbers associated with it in the frame relay network) will assign a separate entry in its routing table for each PVC link (Fig. 6-5).  
   
  Figure 6-5: Sub-interfaces on a frame relay connection  
  If router 1 is appropriately configured with sub-interfaces, it will have a separate entry in its routing table for the PVC that goes from itself to router 2, and from itself to router 3. Let's take a look at the configuration of these three routers as shown in Fig. 6-6.  
  This configuration assumes that the default encapsulation and LMI type are in effect, and that Inverse ARP (enabled by default on a frame relay port) is not disabled. For router 1 we have sub-interfaces 0.1 and 0.2 on separate subnets.  
     
     
 
     
   Configuration for router 1:  
  
   interface serial 0  
    encapsulation frame-relay  
   !  
   interface serial 0.1 point-to-point  
    frame-relay interface-dlci 1 broadcast  
    ip address 164.8.5.1 255.255.255.0  
   !  
   interface serial 0.2 point-to-point  
    frame-relay interface-dlci 2 broadcast  
    ip address 164.8.6.1 255.255.255.0  
  
   Configuration for router 2:  
  
   interface serial 0  
    ip address 164.8.5.2 255.255.255.0  
    encapsulation frame-relay  
  
   Configuration for router 3:  
  
   interface serial 0  
    ip address 164.8.6.2 255.255.255.0  
    encapsulation frame-relay  
 
  Figure 6-6: Router configuration for frame relay sub-interfaces  
  Sub-interface 0.1 is configured for subnet 164.8.5.0 and is associated with DLCI 1. The Serial 0 port on router 2 is configured for an IP address on the same subnet and by the LMI/Inverse ARP process described earlier will map DLCI 14 to IP address 164.8.5.1.  
  Similarly, the sub-interface 0.2 has an IP address configured on the same subnet as the Serial 0 port on router 3. Router 3 also will use the LMI/inverse ARP process to map DLCI 13 to IP address 164.8.6.1.  
  With this configuration, router 1 will have separate entries in its routing table for the subnets used by router 2 and router 3. Router 1 will broadcast routing updates to router 2 and router 3, so that both routers get their routing tables updated with entries for the 164.8.5.0 and 164.8.6.0 subnetworks. This allows router 2 and router 3 to communicate with each other via router 1.  
  This type of configuration makes the serial interface on router 1 appear as multiple interfaces to the layer 3 protocols, but on a Data Link and Physical layer it is considered one interface, with multiple PVCs multiplexed on it by the frame relay network.  
  Configuring a Test Frame Relay Network  
  We will now reconfigure our three lab routers to have one perform the function of a frame relay switch. Two other routers will be configured as remote branches connecting into the frame relay switch. The configuration we will use is shown in Fig. 6-7.  
   
  Figure 6-7: Configuration for a test frame relay network  
  In this configuration, router 1 takes the place of a frame relay cloud. To build this lab environment, the first thing we must do is ensure that the DTE/DCE cables connecting router 1 to router 2 and router 3 are connected with the correct polarity. The goal here is to get both serial ports on router 1 to act as DCE rather than DTE, and the only way to do that in a lab environment using cables instead of CSU/DSU devices to connect routers is to connect the correct end of a DTE/DCE cable into the serial port. Use the show controller serial 0 command after you have connected the DTE/DCE cable to the Serial 0 port. If the output from this command indicates a DTE configuration, use the other end of the cable to connect to router 1. The same goes for the Serial 1 port.  
  The configuration shown in Fig. 6-7 uses the default Cisco frame relay encapsulation and LMI, as well as leaving Inverse ARP functioning. Router 1 is configured as a pure frame relay switch. The following explains all the frame relay entries in this configuration.  
    Global command frame-relay switching enables the frame relay switching process for this router and must be enabled before any interface configuration commands are entered.  
    Command encapsulation frame-relay sets frame relay encapsulation for this interface.  
    Command frame-relay intf-type dce sets the interface as a frame relay DCE device. The default is DTE; therefore routers 2 and 3, which need to be DTE, do not have a corresponding command.  
    Command frame-relay route 17 interface serial1 18 configures the static routing of the switch based on DLCI number. The command syntax configures the router so that any packets inbound on DLCI 17 will be routed out interface Serial 1, on DLCI 18 (note that DLCIs can have a value in the range 16 to 1007). The values shown are for the Serial 0 port; for the Serial 1 port, packets are inbound on DLCI 18 and routed to Serial 0 on DLCI 17.  
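The static routing of the switch amounts to a lookup keyed on the pair (input interface, input DLCI). A minimal model of the two frame-relay route statements in this lab configuration:

```python
# Static DLCI switching table for router 1 acting as the frame relay
# switch, mirroring "frame-relay route 17 interface serial1 18" and
# its mirror image on the other port.
switch_table = {
    ("Serial0", 17): ("Serial1", 18),
    ("Serial1", 18): ("Serial0", 17),
}

def switch(in_interface: str, in_dlci: int):
    """Return (out_interface, out_dlci) for a frame arriving on a PVC."""
    return switch_table[(in_interface, in_dlci)]

print(switch("Serial0", 17))   # ('Serial1', 18)
```

A commercial frame relay switch does essentially this per frame, which is why the show frame-relay route output later in Fig. 6-8 reads as exactly this table plus a status column.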
  The configuration for routers 2 and 3 should be self-explanatory by this stage; however, it is worth noting that the frame relay maps are generated through the LMI and Inverse ARP mechanism and require no explicit configuration of either router. Note that IGRP was configured for both router 2 and router 3, and because broadcasts are enabled by default, routing information traversed the frame relay switch, updating the routing tables of routers 2 and 3. Therefore, router 2 is able to ping the Ethernet port of router 3 and vice versa. Figure 6-8 shows the output of some interesting frame relay show commands that can be used to view the state of the frame relay network.  
     
     
   show frame-relay route command on router 1:  
  
   Input Intf    Input DLCI    Output Intf    Output DLCI    Status  
   Serial0       17            Serial1        18             active  
   Serial1       18            Serial0        17             active  
  
   show frame-relay pvc for router 1:  
  
   PVC Statistics for interface Serial0 (Frame Relay DCE)  
   DLCI = 17, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial0  
     output pkts 56            in bytes 4378  
     dropped pkts 0            in FECN pkts 0  
     out FECN pkts 0           out BECN pkts 0  
     out DE pkts 0  
     last time pvc status changed 1:00:29  
     Num Pkts Switched 53  
  
   PVC Statistics for interface Serial1 (Frame Relay DCE)  
   DLCI = 18, DLCI USAGE = SWITCHED, PVC STATUS = ACTIVE, INTERFACE = Serial1  
     output pkts 53            in bytes 4536  
     dropped pkts 0            in FECN pkts 0  
     out FECN pkts 0           out BECN pkts 0  
     out DE pkts 0  
     last time pvc status changed 1:03:01  
     Num Pkts Switched 56  
  
   show frame-relay pvc for router 2:  
  
   PVC Statistics for interface Serial0 (Frame Relay DTE)  
   DLCI = 17, DLCI USAGE = LOCAL, PVC STATUS = ACTIVE, INTERFACE = Serial0  
     output pkts 57            in bytes 4848  
     dropped pkts 1            in FECN pkts 0  
     out FECN pkts 0           out BECN pkts 0  
     out DE pkts 0  
     last time pvc status changed 1:05:24  
  
   show frame-relay lmi output for router 2:  
  
     Invalid Prot Disc 0  
     Invalid Msg Type 0  
     Invalid Lock Shift 0  
     Invalid Report IE Len 0  
     Invalid Keep IE Len 0  
     Num Status msgs Rcvd 409  
     Num Status Timeouts 1  
  
   show frame-relay map output for router 2:  
  
   Serial0 (up): ip 163.4.8.2 dlci 17(0x11,0x410), dynamic, broadcast, status defined, active  
 
  Figure 6-8: Useful show commands for the test network  
  The first output shown in Fig. 6-8 is for the show frame-relay route command, which is useful when trying to troubleshoot PVC problems. This command tells you which DLCIs are active and where they route from and to on the frame relay router interfaces. The other display for router 1 shows the output for the show frame-relay pvc command, which gives more detailed information on each DLCI operational in the switch. The number of dropped packets gives you an idea of how well the switch is performing its duties; clearly, the fewer drops, the better.  
  If the number of drops is high, but you notice the FECN and BECN packets increasing more rapidly than usual, it could be an early sign that the network is running out of bandwidth. FECN and BECN packets indicate congestion on the network and tell the attached devices to hold off sending traffic. As long as the packet buffers in switches can hold traffic peaks, packets do not have to be dropped. If too many packets for the available bandwidth or buffers are coming in, however, the switch will drop packets at some stage. The show frame-relay pvc command for router 2 has the same display as discussed above, but only for DLCI 17, the one associated with router 2 in this network.  
  The next two displays show, respectively, the LMI status and the frame relay maps for router 2. The frame relay map is as expected, i.e., router 2 knows that the DLCI presented to it by the switch (DLCI 17) will lead it to IP address 163.4.8.2, which is on router 3.
SMDS: Switched Multimegabit Data Service  
  SMDS, or Switched Multimegabit Data Service, has not yet gained significant market penetration, although it has begun to experience some growth. SMDS was viewed as a stepping stone to ATM, since some of the communications equipment and media are common to the two technologies. As SMDS is not available everywhere, and there is more interest in ATM, SMDS has had a hard time getting into the mainstream.  
  SMDS does, however, have some penetration; if your long-distance carrier is MCI, you may have cause to use this technology. The attraction of SMDS is that it has the potential to provide high-speed, link-level connections (initially in the 1 to 34 Mbps range) with the economy of a shared public network, and exhibits many of the qualities of a LAN.  
  In an SMDS network, each node has a unique 10-digit address. Each digit is in binary-coded decimal, with 4 bits used to represent values 0 through 9. Bellcore, the "keeper" of the SMDS standard, assigns a 64-bit address for SMDS, which has the following allocation:  
    The most significant 4 bits are either 1100 to indicate an individual address, or 1110 to indicate a group address.  
    The next 4 most significant bits are used for the country code, which is 0001 for the United States.  
    The next 40 bits are the binary-coded decimal bits representing the 10-decimal digit station address.  
    The final 16 bits are currently padded with ones.  
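  The bit layout above can be made concrete with a short Python sketch (a hypothetical helper, not Bellcore or Cisco code) that packs a 10-digit station address into the 64-bit format:

```python
def encode_smds_address(digits, group=False):
    """Pack a 10-digit SMDS station address into the 64-bit Bellcore
    layout: 4-bit address type, 4-bit country code (US), 40 bits of
    BCD digits, and 16 bits of ones padding."""
    if not (len(digits) == 10 and digits.isdigit()):
        raise ValueError("SMDS station addresses are 10 decimal digits")
    word = 0b1110 if group else 0b1100   # individual vs. group address
    word = (word << 4) | 0b0001          # country code for the United States
    for d in digits:                     # each digit becomes a 4-bit BCD nibble
        word = (word << 4) | int(d)
    return (word << 16) | 0xFFFF         # final 16 bits padded with ones
```

  Encoding the station address 2345678901, for example, yields the 64-bit value 0xC12345678901FFFF, where the leading C nibble marks an individual address.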
  To address a node on the SMDS network, all you need do is put the node's SMDS address in the destination field of the SMDS frame. In this way, SMDS behaves in a fashion similar to Ethernet or Token-Ring, which delivers frames according to MAC addresses. A key difference between SMDS and these LAN technologies, however, is the maximum frame size allowed. Ethernet allows just over 1500 bytes, and Token-Ring just over 4000 bytes, but SMDS allows up to 9188 bytes. These SMDS frames are segmented into ATM-sized 53-byte cells for transfer across the network. A large frame size gives SMDS the ability to encapsulate complete LAN frames, such as Ethernet, Token-Ring, and FDDI, for transportation over the SMDS network.  
  An SMDS network can accept full-bandwidth connections from DS0 (64 kbps) and DS1 (1.544 Mbps) circuits, or an access class, which is a bandwidth slice of a higher-speed link such as a DS3 (45 Mbps). These links terminate at what is known as the Subscriber Network Interface (SNI) and connect to the Customer Premises Equipment (CPE). The SNI typically is an SMDS CSU/DSU device and the CPE in this case will be a Cisco router. These elements are illustrated in Fig. 6-9.  
   
  Figure 6-9: SMDS network components  
  SMDS is based on the IEEE 802.6 standard for Metropolitan Area Networks (MAN), which defines a Distributed Queue Dual Bus (DQDB), and SMDS is a connectionless technology. The key difference between SMDS and 802.6 is that SMDS does not utilize a dual-bus architecture; instead, connections are centered on a hub and deployed in a star configuration.  
  SMDS Protocols  
  SMDS has its own layer 1 and layer 2 protocols that specify the physical connections and voltage levels used, along with the layer 2 addresses. These protocols are implemented in the SMDS Interface Protocol (SIP). SIP has three levels, with SIP levels 2 and 3 defining the Data Link layer functions.  
  SMDS carries IP information quite effectively. Let's say IP as a layer 3 (in OSI terms) protocol hands a frame down to the SIP level 3 protocol. SIP 3 will place a header and trailer on this frame to form a level 3 Protocol Data Unit. (This is "layer 3" in SMDS parlance, which is a part of the layer 2 protocol in OSI terms.) This expanded frame has SMDS source and destination addresses that include the aforementioned country codes and 10-digit decimal addresses.  
  The expanded frame is passed down to the SIP level 2 protocol, which cuts it into 44-byte chunks; a header and trailer are added to each chunk, enabling reassembly of the frame once the chunks have been received at the destination. These 44-byte chunks are termed level 2 Protocol Data Units. The SIP level 1 protocol provides appropriate framing and timing for the transmission medium in use. The relationship between OSI and SIP layers is illustrated in Fig. 6-10.  
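  The segmentation step can be sketched in Python (an illustrative simplification: real SIP level 2 PDUs also carry headers and trailers with sequence numbers, omitted here):

```python
def sip2_segment(l3_pdu, chunk=44):
    """Cut a SIP level 3 PDU into 44-byte payload chunks, zero-padding
    the final chunk, as SIP level 2 does before adding its own header
    and trailer to each chunk."""
    segments = []
    for i in range(0, len(l3_pdu), chunk):
        segments.append(l3_pdu[i:i + chunk].ljust(chunk, b"\x00"))
    return segments
```

  A maximum-sized 9188-byte SMDS frame yields 209 such chunks.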
   
  Figure 6-10: SMDS communication layers compared to OSI  
  In the example given above, SMDS is acting like a LAN as far as the IP level is concerned, with the only difference being that, instead of regular ARP and the ARP table, there must be a way of resolving IP addresses to SMDS 10-digit addresses.  
  In SMDS, IP ARP functionality is provided by address groups, which also are referred to as Multiple Logical IP Subnetworks by RFC 1209. An address group is a set of predefined SMDS hosts that listen to a specific multicast address as well as to their own SMDS node addresses. If an IP host needs to find an SMDS address for an IP address in order to send a frame, an ARP query is sent to the address group, and the SMDS host with the appropriate address responds with its SMDS address. Routing broadcast updates are distributed the same way across SMDS.  
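  The resolution step can be pictured with a toy model in Python (an illustration only; the class and method names are simplified assumptions, not SMDS code):

```python
class SmdsGroup:
    """Toy model of an SMDS address group: members share a multicast
    address, and an ARP query sent to the group is answered by
    whichever member owns the requested IP address."""

    def __init__(self, multicast_addr):
        self.multicast_addr = multicast_addr  # e.g. an "e..." group address
        self.members = {}                     # ip -> individual SMDS address

    def join(self, ip, smds_addr):
        # host starts listening on the group's multicast address
        self.members[ip] = smds_addr

    def arp_query(self, ip):
        # the query reaches every member; only the owner of the IP replies
        return self.members.get(ip)
```
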
  Typically, SIP levels 1 and 2 are implemented in the SMDS CSU/DSU, and SIP level 3 is implemented in the Cisco router.  
  Configuring SMDS  
  Because SMDS is not as widely deployed as the other network technologies presented in this chapter, we will examine only one simple configuration. The tasks to complete to configure an interface for connection to an SMDS service are as follows:  
    Define an interface to have SMDS encapsulation.  
    Associate an IP address with this interface to be mapped to the SMDS address supplied by the carrier company.  
    Define the SMDS broadcast address to be associated with the interface, so that ARP and routing broadcasts are passed between SMDS nodes as needed.  
    Enable ARP and routing on the interface.  
  Figure 6-11 shows the Cisco IOS configuration commands that could be appropriate for the serial 0 port of router 1 in Fig. 6-9.  
     
     
   interface serial 0  
   ip address 164.4.4.3 255.255.255.0  
   encapsulation smds  
   smds address c234.5678.9010.ffff  
   smds multicast ip e654.5678.3333.ffff  
   smds enable-arp  
 
  Figure 6-11: Sample router configurations for SMDS  
  The following discusses the SMDS-specific commands in the configuration for Serial 0:  
    Command encapsulation smds sets the encapsulation to SMDS for the interface. Note that although the SMDS standard allows a maximum frame size of 9188 bytes, the Cisco serial port has buffer constraints that limit it to a frame size (MTU) of 4500 bytes. If the MTU is set to a value greater than 4500 prior to the encapsulation smds command, performance problems may occur.  
    Command smds address c234.5678.9010.ffff defines the SMDS address for this interface supplied by the SMDS network vendor. All addresses must be entered in this notation, with dots separating each four digits. Individual node addresses start with the letter "c" and multicast addresses start with the letter "e."  
    Command smds multicast ip e654.5678.3333.ffff defines the multicast address for the access group to which this interface belongs. An optional argument can be used here to associate a secondary IP address with this port and a different SMDS multicast address, so that the same port can belong to two access groups simultaneously.  
    Command smds enable-arp enables the ARP function for SMDS address resolution across the SMDS network. The only restriction on this command is that the SMDS address must already be defined.
X.25  
  X.25 has never been as widely deployed in the United States as it has in Europe, and particularly in the last few years its popularity has declined. It is still an important protocol, however, and we will discuss the basic operation of the protocol and simple X.25 configurations for Cisco routers.  
  X.25 is a packet-switched technology that supports PVCs, SVCs, and statistical multiplexing in the same way that frame relay does. The X.25 standards also define error correction and flow control procedures, and guarantee correct sequencing for delivery of packets. The X.25 specifications really cover only DTE-to-DCE communication, leaving all X.25 routing functions to be defined by vendors of X.25 equipment.  
  The penalty to be paid for all this seemingly good stuff in terms of reliability is performance. The acknowledgments, buffering, and retransmission that happen within the X.25 protocols add latency (especially if there are many hops between source and destination), so X.25 provides poor performance for carrying TCP traffic, which already handles these functions itself. If your main interest is in networking TCP/IP protocols and transporting legacy protocols such as IPX, SNA, or NetBIOS over a TCP/IP network, it is unlikely you will deploy X.25; in such situations, X.25 is really a competing technology to TCP/IP. Bearing this in mind, in the example configurations we will look only at how two IP nodes can communicate across an X.25 network via encapsulation of IP data within an X.25 frame (termed tunneling). We'll also examine how to translate X.25 traffic to IP, so that a host using TCP/IP can communicate with an X.25 host.  
  X.25 Basics  
  The physical layer of X.25 is described by the X.21 standard, which specifies a 15-pin connector. X.21bis was defined to provide the same functions over a 25-pin connector, leveraging the large pool of 25-pin connectors already in the marketplace. The X.21bis standard specifies support of line speeds up to 48 kbps; however, just as the V.35 standard specifies only speeds up to 48 kbps but is commonly run at T-1 speeds (1.544 Mbps), X.21bis will also work at faster speeds as long as the cables used are not too long.  
  The X.25 layer 2 protocol is the Link Access Procedure Balanced (LAPB), which is in many ways similar to HDLC. This protocol is responsible for data transfer between a DTE and DCE, link status reporting and synchronization, and error recovery. The LAPB protocol handles all the reliability issues previously discussed. It is worth introducing the most common layer 2 header functions, and how they fit into establishing a DTE-to-DCE X.25 link.  
  Link establishment uses unnumbered frames; numbered frames are used once the call is established and data is being transferred. The following sequence of steps, as shown in Fig. 6-12, can be viewed on a protocol analyzer that can decode X.25 frames. It is assumed that the link is using LAPB rather than the older LAP protocol. LAP had more stages for establishing a call, and started communication with a Set Asynchronous Response Mode (SARM) instead of a Set Asynchronous Balanced Mode (SABM). You need to make sure that both ends of the link are using the same protocol at this level.  
   
  Figure 6-12: X.25 DTE-to-DCE flow control sequence  
  1.   The normal operation is for the DCE device to be sending out DISC (disconnect) frames at a time interval determined by its T-1 timer setting. This indicates that it is ready to receive a call.  
  2.   The DTE device will initialize the link with one command, the SABM, which initiates the DCE sending back an Unnumbered Acknowledgment.  
  3.   The DTE starts sending data using information frames, each of which is acknowledged by RR frames. RR frames are sent by the DCE and indicate that the receiver is ready for more information. If the DCE has information to send to the DTE, it can do this using the same RR frame that acknowledged receipt of DTE data. If the RR frame contains data, it is known as a piggyback acknowledgment.  
  4.   If the DTE sends enough information to fill the buffers of the DCE, the DCE will send back a Receiver Not Ready (RNR) frame. Upon receipt of an RNR frame, the DTE will remain idle for the length of its T-1 timer, then poll the DCE to see if it is ready.  
  5.   The DCE will respond to a poll with more RNR packets until it has space in its buffer to take more data, at which time it will send an RR frame to allow the DTE to start sending more Information frames.  
  6.   The link is cleared by the DTE sending a DISC (disconnect frame), which is responded to by an Unnumbered Acknowledgment.  
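  The RR/RNR flow control in steps 3 through 5 can be modeled with a toy receiver in Python (a sketch for illustration only, not a LAPB implementation):

```python
class LapbReceiver:
    """Toy model of the DCE side of the flow control above: answers RR
    while buffer space remains and RNR once the buffer is full."""

    def __init__(self, capacity):
        self.capacity = capacity  # how many I-frames we can buffer
        self.buffer = []

    def receive(self, frame):
        if len(self.buffer) >= self.capacity:
            return "RNR"          # Receiver Not Ready: DTE must hold off
        self.buffer.append(frame)
        return "RR"               # Receiver Ready: send more

    def drain(self, n=1):
        # buffered frames processed, freeing space for more I-frames
        del self.buffer[:n]
```
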
  The third layer protocol in X.25 is called the Packet Layer Procedure (PLP) and provides the procedures for the control of virtual circuits within the X.25 network. The lower two layers have local significance, whereas the packet level has end-to-end significance, as illustrated in Fig. 6-13.  
   
  Figure 6-13: X.25 level-three protocols have end-to-end significance  
  X.25 assigns Logical Channel Numbers (LCNs) to the multiple logical connections to a DTE that can exist over one physical link. In this respect, LCNs are similar to frame relay DLCI numbers when the frame relay interface is in point-to-point mode. LCNs do not have end-to-end significance; rather, they are used only between a specific DTE/DCE pair. This means that the same LCN can exist in many places on an X.25 network without causing problems. In theory, an LCN can have a value between 0 and 4095; rarely, however, is there the ability or the need to configure more than 255 LCNs on one physical link. LCNs must be assigned to a DTE with some care, however: a call collision would result if both the DTE and DCE tried to initiate calls on the same LCN. The LCN that most people refer to is actually made up of a Logical Channel Group Number (LCGN) and a Logical Channel Number. If an X.25 network provider allocates you a 12-bit LCN, it already contains both the LCGN and the LCN. If the provider gives you a 4-bit LCGN and an 8-bit LCN, combine the two with the LCGN at the front to get the full 12-bit LCN.  
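  Combining the LCGN and LCN is simple bit concatenation, which a short Python sketch (hypothetical helper) makes concrete:

```python
def full_lcn(lcgn, lcn):
    """Combine a 4-bit Logical Channel Group Number and an 8-bit
    Logical Channel Number, LCGN at the front, into the full
    12-bit LCN (values 0 through 4095)."""
    if not (0 <= lcgn < 16 and 0 <= lcn < 256):
        raise ValueError("LCGN is 4 bits, LCN is 8 bits")
    return (lcgn << 8) | lcn
```

  For example, LCGN 10 with LCN 47 gives the full 12-bit LCN 2607.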
  Figure 6-14 illustrates two hosts and a printer that have unique X.25 addresses having significance throughout the X.25 network. These addresses,  known as Network User Addresses (NUAs), conform to the X.121 recommendation for public data networks, which specifies an address length of 14 digits, with 12 being mandatory and 2 being optional. The first four digits are known as the data network identification number (DNIC); the first three identify the country, while the fourth identifies the network within the country. The next eight digits are the national number, and the last two are optional subaddressing numbers allocated by the user, not the network provider.  
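  The X.121 field layout can be expressed as a small parser. (The 12-digit address in the test below is hypothetical; the lab NUAs used later in this chapter are arbitrary test values and do not follow the full X.121 format.)

```python
def parse_x121(nua):
    """Split an X.121 Network User Address into its fields: a 4-digit
    DNIC (3-digit country code plus 1-digit network identifier), an
    8-digit national number, and an optional 2-digit subaddress."""
    if not (nua.isdigit() and len(nua) in (12, 14)):
        raise ValueError("X.121 addresses are 12 digits, plus 2 optional")
    return {
        "dnic": nua[:4],
        "country": nua[:3],
        "network": nua[3],
        "national": nua[4:12],
        "subaddress": nua[12:] or None,
    }
```
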
   
  Figure 6-14: Two hosts and one printer connected to an X.25 WAN  
  Configuring an X.25 Network Connection  
  In this section we will use the three lab routers used in the frame relay example. One router will be configured as an X.25 router to emulate an X.25 network. The other two will connect as X.25 DTE devices, and will encapsulate IP traffic within X.25 to transport IP traffic over the X.25 network. This configuration is illustrated in Fig. 6-15.  
   
  Figure 6-15: Configuration of a test X.25 network  
  The key configuration tasks are:  
  1.   Enable X.25 routing on router 1 and configure it to route packets between router 2 and router 3 based on X.25 address.  
  2.   Assign each serial interface connected via the X.25 network an IP address from the same IP network (or subnetwork) number.  
  3.   Assign X.25 NUAs to the serial ports of router 2 and router 3.  
  4.   Configure router 2 and router 3 for a routing protocol and enable routing protocol updates to traverse the X.25 network.  
  This configuration establishes between router 2 and router 3 a virtual circuit that allows IP traffic to be encapsulated within X.25, routed through the X.25 network, and delivered to the correct IP address. The configurations for these routers are shown in Fig. 6-16, and the outputs of the show commands to be examined are illustrated in Fig. 6-17.  
     
     
   Configuration for router 3  
   !  
   interface Ethernet0  
    ip address 200.2.2.1 255.255.255.0  
   !  
   interface Serial0  
    ip address 193.1.1.2 255.255.255.0  
    encapsulation x25  
    x25 address 21234567894  
    x25 map ip 193.1.1.1 21234554321 broadcast  
   !  
   interface Serial1  
    ip address 160.4.5.1 255.255.255.0  
    shutdown  
   !  
   router igrp 12  
    network 200.2.2.0  
    network 193.1.1.0  
   !  
   Configuration for router 2  
   !  
   interface Ethernet0  
    ip address 200.1.1.1 255.255.255.0  
   !  
   interface Serial0  
    ip address 193.1.1.1 255.255.255.0  
    encapsulation x25  
    x25 address 21234554321  
    x25 map ip 193.1.1.2 21234567894 broadcast  
   !  
   interface Serial1  
    ip address 160.4.5.2 255.255.255.0  
    shutdown  
    clockrate 64000  
   !  
   router igrp 12  
    network 200.1.1.0  
    network 193.1.1.0  
   !  
   Configuration for router 1  
   !  
   x25 routing  
   !  
   interface Ethernet0  
    ip address 200.1.1.1 255.255.255.0  
    shutdown  
   !  
   interface Serial0  
    no ip address  
    encapsulation x25 dce  
    clockrate 19200  
   !  
   interface Serial1  
    no ip address  
    encapsulation x25 dce  
    clockrate 19200  
   !  
   x25 route 21234554321 interface Serial0  
   x25 route 21234567894 interface Serial1  
   !  
 
  Figure 6-16: Router configurations for test X.25 network  
  Let's look at the configurations first. Router 3 and router 2 have similar configurations, so I will give a full explanation only of the configuration for the serial 0 port of router 3.  
    Command encapsulation x25 sets the frame encapsulation to X.25 in the default DTE mode. Modification to the X.25 encapsulation type can be specified with optional arguments to this command; for example, ietf could follow this command to specify the IETF type X.25 framing.  
    Command x25 address 21234554321 assigns this port the specified NUA. The NUA is used by the X.25 routers to route traffic from source to destination. This number normally will be assigned by the X.25 network provider.  
    Command x25 map ip 193.1.1.1 21234554321 broadcast tells router 3 that whenever it needs to reach IP address 193.1.1.1, it will send packets across the X.25 network to NUA 21234554321. The broadcast argument enables forwarding of broadcast routing updates.  
  The rest of this configuration should be self-explanatory by now. Router 1 is emulating a public X.25 network by performing routing based on X.25 NUAs. The pertinent commands that configure X.25 on router 1 are explained below:  
    Command x25 routing globally enables routing based on X.25 NUA on the router.  
    Command encapsulation x25 dce configures the serial ports to be X.25 DCE devices. At the physical level, the cable end plugged into the serial port configures the serial port to be a Physical layer DCE, as shown by the show controllers serial 0 command. X.25 requires an X.25 assignment of DTE and DCE also. Clearly, a DTE needs to connect to a DCE, because leaving both connected ports as the default DTE would not enable the line protocol to come up. You can view the X.25 DTE/DCE assignment with the show interface serial 0 command, as shown in Fig. 6-17.  
    Command x25 route 21234554321 interface serial 0 tells the router to send traffic destined for this NUA (21234554321) out of port Serial 0. Configuration commands such as these are used to build the X.25 routing table, as shown by the show x25 route command in Fig. 6-17.  
  The result of this configuration is that from router 2, you can enter the command ping 200.2.2.1 and successfully ping the Ethernet port of router 3 across the X.25 network. Note that IGRP updates the routing tables of all IP-enabled routers in this network.  
     
  Show commands for router 1  
  router1#sho x25 route  
  Number  X.121        CUD  Forward To  
  1       21234554321       Serial0, 1 uses  
  2       21234567894       Serial1, 0 uses  
  router1#sho x25 vc  
  SVC 1024, State: D1, Interface: Serial1  
  Started 1:13:09, last input 0:00:25, output 0:00:07  
  Connects 21234567894  21234554321 to Serial0 VC 1  
  Window size input: 2, output: 2  
  Packet size input: 128, output: 128  
  PS: 5 PR: 5 ACK: 5 Remote PR: 4 RCNT: 0 RNR: FALSE  
  Retransmits: 0 Timer (secs): 0 Reassembly (bytes): 0  
  Held Fragments/Packets: 0/0  
  Bytes 3076/3076 Packets 61/61 Resets 0/0 RNRs 0/0 REJs 0/0 INTs 0/0  
  SVC 1, State: D1, Interface: Serial0  
  Started 1:13:22, last input 0:00:20, output 0:00:38  
  Connects 21234567894  21234554321 from Serial1 VC 1024  
  Window size input: 2, output: 2  
  Packet size input: 128, output: 128  
  PS: 5 PR: 5 ACK: 4 Remote PR: 5 RCNT: 1 RNR: FALSE  
  Retransmits: 0 Timer (secs): 0 Reassembly (bytes): 0  
  Held Fragments/Packets: 0/0  
  Bytes 3076/3076 Packets 61/61 Resets 0/0 RNRs 0/0 REJs 0/0 INTs 0/0  
  Show commands for router 2  
  router2>show x25 vc  
  SVC 1, State: D1, Interface: Serial0  
  Started 1:17:03, last input 0:00:04, output 0:00:05  
  Connects 21234567894  
  ip 193.1.1.2  
  cisco cud pid, no Tx data PID  
  Window size input: 2, output: 2  
  Packet size input: 128, output: 128  
  PS: 0 PR: 0 ACK: 7 Remote PR: 0 RCNT: 1 RNR: FALSE  
  Retransmits: 0 Timer (secs): 0 Reassembly (bytes): 0  
  Held Fragments/Packets: 0/0  
  Bytes 3214/3214 Packets 64/64 Resets 0/0 RNRs 0/0 REJs 0/0 INTs 0/0  
  router2>sho x25 map  
  Serial0: X.121 21234567894  ip 193.1.1.2  
  PERMANENT, BROADCAST, 1 VC: 1*  
  Show command for router 3  
  router3#show x25 vc  
  SVC 1024, State: D1, Interface: Serial0  
  Started 1:19:38, last input 0:00:08, output 0:01:16  
  Connects 21234554321  
  ip 193.1.1.1  
  cisco cud pid, no Tx data PID  
  Window size input: 2, output: 2  
  Packet size input: 128, output: 128  
  PS: 1 PR: 2 ACK: 1 Remote PR: 1 RCNT: 1 RNR: FALSE  
  Retransmits: 0 Timer (secs): 0 Reassembly (bytes): 0  
  Held Fragments/Packets: 0/0  
  Bytes 3260/3306 Packets 65/66 Resets 0/0 RNRs 0/0 REJs 0/0 INTs 0/0  
  router3#show x25 map  
  Serial0: X.121 21234554321  ip 193.1.1.1  
  PERMANENT, BROADCAST, 1 VC: 1024*  
  router3#sho int s0  
  Serial0 is up, line protocol is up  
  Hardware is HD64570  
  Internet address is 193.1.1.2 255.255.255.0  
  MTU 1500 bytes, BW 1544 Kbit, DLY 20000 usec, rely 255/255, load 1/255  
  Encapsulation X25, loopback not set  
  LAPB DTE, modulo 8, k 7, N1 12056, N2 20  
  T1 3000, interface outage (partial T3) 0, T4 0  
  State CONNECT, VS 3, VR 3, Remote VR 3, Retransmissions 0  
  Queues: U/S frames 0, I frames 0, unack. 0, retx 0  
  IFRAMEs 91/91 RNRs 0/0 REJs 0/0 SABM/Es 179/1 FRMRs 0/0 DISCs 0/0  
  X25 DTE, address 21234567894, state R1, modulo 8, timer 0  
  Defaults: cisco encapsulation, idle 0, nvc 1  
  input/output window sizes 2/2, packet sizes 128/128  
  Timers: T20 180, T21 200, T22 180, T23 180, TH 0  
  Channels: Incoming-only none, Two-way 1-1024, Outgoing-only none  
  RESTARTs 1/1 CALLs 1+0/0+0/0+0 DIAGs 0/0  
  Last input 0:00:54, output 0:00:54, output hang never  
  Last clearing of "show interface" counters never  
  Output queue 0/40, 0 drops; input queue 0/75, 0 drops  
  5 minute input rate 0 bits/sec, 0 packets/sec  
  5 minute output rate 0 bits/sec, 0 packets/sec  
  2293 packets input, 45539 bytes, 0 no buffer  
  Received 0 broadcasts, 0 runts, 0 giants  
  208 input errors, 1 CRC, 0 frame, 3 overrun, 10 ignored, 1 abort  
  2254 packets output, 43027 bytes, 0 underruns  
  0 output errors, 0 collisions, 60 interface resets, 0 restarts  
  0 output buffer failures, 0 output buffers swapped out  
  156 carrier transitions  
  DCD=up DSR=up DTR=up RTS=up CTS=up  
  Figure 6-17: The show commands for the test X.25 network  
  Cisco routers also can perform protocol translation, and in this case we will look at translating between IP and X.25. This feature is useful if a third party provides an X.25 connection and sends you X.25-encapsulated data, but you want to deliver it to a host that uses only the TCP/IP communications protocol suite. This can be achieved by connecting a Cisco router between the TCP/IP and X.25 hosts. We configure one port to establish an X.25 connection with the X.25 network and a separate port to communicate using IP. The router is then enabled to route both IP and X.25 traffic. The key part of the configuration is to tell the router to translate between IP and X.25.  
  Let's say that an IP host needs to initiate communications with an X.25 host with an NUA of 1234567890. What we do is associate the IP address 195.1.1.1 with the X.25 address 1234567890 within the translation router. Then if the IP host needs to send data to a machine it thinks is at IP address 195.1.1.1, the translation router will establish an IP connection to the IP host and pretend to be 195.1.1.1, then establish an X.25 session to the X.25 host and deliver the message to it in X.25 format. The configuration to do this is given in Fig. 6-18 and the X.25 command is explained fully as follows.  
    Command x25 routing enables the router to perform routing based on X.25 address.  
    Command translate tcp 195.1.1.1 binary stream x25 1234567890 takes packets destined for IP address 195.1.1.1, takes the data out of them, repackages this data in X.25, and then sends the translated packets to host 1234567890.  
    Command x25 route 1234567890 serial0 tells the router which interface to use to get to the X.25 address 1234567890.  
  It is necessary to give the serial port of router 1 in Fig. 6-18 an IP address in the same network as the IP address that is being translated so that the router accepts the packet prior to it being translated.  
   
  Figure 6-18: Network configuration for X.25 IP translation  
  Viewing X.25 Connection Status  
  We will now examine the show commands illustrated in Fig. 6-17. The first command, show x25 route shows the X.25 routing table for router 1. This display numbers each entry starting at 1 in the first column, then associates each X.121 address (otherwise referred to as the NUA) with the appropriate serial interface. This display states, for example, that the X.121 address 21234554321 can be reached via port Serial 0.  
  The show x25 vc command shows information about the active virtual circuits on this router. In this case, two SVCs have been established, one with ID 1024 and one with ID 1. The normal state for this type of connection is as shown, D1. This command is useful for seeing statistics on the general status of the virtual circuit, such as the number of Receiver Not Ready and Frame Reject (REJ) packets counted.  
  The show x25 map command shows any X.25 mappings defined for the router. In this case, router 2 has the Serial 0 port mapped for the X.121 address 21234567894, for IP address 193.1.1.2. This map is identified as a permanent mapping, meaning that it was configured with the x25 map interface command and will forward broadcast datagrams destined for the IP host.  
  The show interface serial 0 command shows useful summary information on the link. Primarily this command shows whether the port is up (physical connection is okay) and if the protocol is up (the line is synchronized). Beyond that, the key elements of the display are as follows:  
    The LAPB DTE identifies this serial port as a DTE configuration, which will work only if the serial port of the corresponding device to which it is connected has a LAPB DCE configuration.  
    The modulo 8 indicates that this interface is not using extended addressing. If it were using extended addressing, this entry would show modulo 128. If needed, this parameter will be defined by the X.25 network provider.  
    The N1 12056 indicates the maximum layer 2 frame size. Under the X.25 DTE section, the packet size (at the X.25 layer three level) is given as 128.  
    The N2 20 indicates that the interface will allow 20 retries before timing out.  
    The T1 3000 indicates the T1 timer has a value of 3 seconds. When a command frame is sent, the sender starts this T1 timer and expects to receive an acknowledgment of the command frame before the timer expires. If the timer expires without an acknowledgment, the sender polls its destination to demand an immediate response. If a response is received, the unacknowledged packet is resent; if there is no response, the link is reset.  
    The X25 DTE entry shows the X.121 address and normal operational state of the interface, which is R1.  
    The Input/Output window size defines the number of packets that may be transmitted before an acknowledgment is received. This value can vary between 1 and 7, with a default of 2.  
  It is important that all these values match between the X.25 DTE and DCE devices. The following discusses how these values can be modified, if necessary.  
  Customizing X.25 Parameters  
  The first customization we will look at is assigning a range of numbers to be used by each type of virtual circuit. To recap, there are PVCs and SVCs, and communication between a DTE and DCE needs a unique virtual circuit number for each direction of communication. Between any DTE/DCE pair, the assignable values are from 1 to 4095. In the show x25 vc command output of Fig. 6-17, you can see that on router 1, two SVCs are active, one with LCN 1024 and one with LCN 1. SVC 1024 is associated with getting to router 3 and SVC 1 is associated with getting to router 2. These are two-way SVCs, taking the default values for the high and low LCN values. If two more SVCs were defined on this link, they would take the values 1023 and 2.  
  An X.25 network vendor may decide to allocate specific LCN ranges for different types of PVCs, and if this deviates from the default, you will have to alter your ranges. SVCs can be incoming, two-way, or outgoing, and a value can be assigned for the low and high value in each type's range by using the following commands in interface configuration mode:  
    x25 lic value    Defines the low incoming circuit number.  
    x25 hic value    Defines the high incoming circuit number.  
    x25 ltc value    Defines the low two-way circuit number.  
    x25 htc value    Defines the high two-way circuit number.  
    x25 loc value    Defines the low outgoing circuit number.  
    x25 hoc value    Defines the high outgoing circuit number.  
    x25 pvc value    Defines the PVC circuit number.  
  Note in the preceding commands that the word value is substituted with the desired circuit number for that command. If you have to specify circuit number ranges explicitly, you must adhere to the following numbering scheme:  
    PVCs must have the lowest range.  
    The next highest range is assigned to the incoming calls.  
    The next highest range is assigned to the two-way calls.  
    The highest range is assigned to the outgoing calls.  
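  The ordering rule above can be sketched as a validation function (a hypothetical helper, assuming each range must sit strictly above the previous one):

```python
def validate_lcn_ranges(pvc, lic, hic, ltc, htc, loc, hoc):
    """Check that circuit number ranges follow the required ordering:
    PVCs lowest, then incoming, then two-way, then outgoing calls,
    all within 1..4095."""
    ranges = [(pvc, pvc), (lic, hic), (ltc, htc), (loc, hoc)]
    last_high = 0
    for low, high in ranges:
        if not (last_high < low <= high <= 4095):
            return False
        last_high = high
    return True
```
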
  X.25 packet sizes also may be customized for both incoming and outgoing packets on an interface. The packet sizes may be changed with the following commands in interface configuration mode:  
    x25 ips bytes    Specifies the input packet size in bytes.  
    x25 ops bytes    Specifies the output packet size in bytes.  
  X.25 uses a windowing mechanism with a default window size of 2. With reliable links, this default value can be increased throughout the network to improve performance. This is achieved with the following commands:  
    x25 win value    Defines the number of packets that can be received without an acknowledgment.  
    x25 wout value    Defines the number of packets that can be sent without an acknowledgment.  
  Finally, as previously mentioned, the modulo can be regular (modulo 8), which allows virtual circuit window sizes up to 7, or enhanced (modulo 128), which allows window sizes up to 127 packets. This can be changed with the following command in interface configuration mode.  
    x25 modulo value    The value is either 8 or 128.  
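  The relationship between modulo and window size follows from sequence numbering: with sequence numbers running 0 through modulo minus 1, at most modulo minus 1 frames can be outstanding unacknowledged. A one-line sketch:

```python
def max_window(modulo):
    """Largest window size a sequence-number modulo permits:
    7 for modulo 8, 127 for modulo 128."""
    if modulo not in (8, 128):
        raise ValueError("X.25 modulo is either 8 or 128")
    return modulo - 1
```
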
  X.25 is rarely implemented in a private network these days; straight TCP/IP is far more popular. LAPB, however, does retain some popularity even when a TCP/IP network is implemented. Serial links typically use Cisco's default encapsulation of the Cisco proprietary HDLC. This is an efficient protocol but lacks error recovery. If a network link is experiencing noise interference and is carrying large amounts of UDP data, changing the encapsulation in the link from the default HDLC to LAPB will provide error recovery at the link layer, where before there was none.
Point-to-Point Protocols  
  In this section we look briefly at the older Serial Line Interface Protocol (SLIP), the newer Point-to-Point Protocol (PPP) that largely replaced it, and give an overview of HDLC. These protocols all provide connectivity in point-to-point connections; they do not provide multipoint communication, as does frame relay, or full routing functionality, as does X.25.  
  SLIP Communications  
  SLIP, as its name suggests, supports point-to-point communication for IP only. It was the first point-to-point protocol to do so for asynchronous connections and still has widespread support from many Unix vendors. The SLIP frame is very simple: There is no header, no error checking, and no option to define a protocol. (It is always assumed that IP is the higher-layer protocol.) SLIP is mainly used for asynchronous dial-up connections. Even for this limited application, SLIP is generally found wanting these days; there is no authentication process available, for example, which is a common requirement for security-minded organizations implementing a dial-up facility.  
  SLIP works only on asynchronous lines, and does not support address negotiation as part of the link startup procedure. The only time you are likely to need to use SLIP is when connecting to a third party's Unix machine via dial-up; in that case, you will be provided with an IP address by the third-party administrator to use for the dial-up connection. Any new dial-up facilities being implemented should use PPP. The following is a simple configuration for an asynchronous port using SLIP:  
  interface async 1  
  encapsulation slip  
  ip address 193.1.1.1 255.255.255.0  
  This configuration may need to be enhanced with modifications to default hold queue lengths, buffers, and packet switching mode; we will address these issues in the discussion of PPP.  
  PPP Communications  
  PPP is the more modern point-to-point protocol; key features are that it supports the simultaneous running of multiple protocols over one link, synchronous as well as asynchronous communications, dynamic address assignment, and authentication. PPP is a layer 2 (Data Link) protocol; there is, however, a whole link negotiation process that must be completed before two nodes can exchange information at the Data Link level. Let's examine this negotiation process as a set of phases.  
  The first phase comprises exchange of Link Control Protocol packets. This is like an initial introduction. The two ends agree on the general characteristics of the communication that is about to take place, such as the use of compression or the maximum frame size. An optional phase checks the line quality to see if it can be used to bring up the Network layer protocols. Once the link configuration is agreed upon, an optional authentication process may take place, via either the PAP or CHAP protocol.  
  The Password Authentication Protocol (PAP) was the first authentication protocol deployed for PPP. To explain the way it works, imagine you have a remote PC using a dial-up connection to connect to a central router. In this setup, you configure a PAP username and password in the remote PC, which matches a username and password configured in the central router. When the remote PC dials the central router, it will send its PAP username and password repeatedly until it either is granted a connection, or the connection is terminated. This is a step up from having no authentication, but is open to a break-in method known as modem playback. Using this method, an intruder hooks into the telephone line, records the modem negotiation transmissions, and plays them back later. In doing so, the intruder has captured the username and password for future use.  
  The Challenge Handshake Authentication Protocol (CHAP) is a stronger authentication process. With CHAP, the remote PC will initiate a connection, negotiate the LCP parameters, and then be met with a "challenge" from the central router. The challenge comes in the form of a random value that is unique for every connection made. The remote PC uses this challenge value together with its password to compute a one-way hash, then submits the hash and its username to the central router; the password itself never crosses the link. When the central router receives the response, it performs the same computation using the password it has configured for that username and compares the result against the hash it received. The connection is either authenticated and progresses, or is terminated at this stage.  
  This method defeats modem playback because the challenge is different each time; every connection produces a different response for exchange between remote PC and central router. Within CHAP, there are several different algorithms that may be used to compute the hash, the most popular being the MD5 algorithm. CHAP's authentication processes typically take place only at link startup time, but the specification allows for this challenge to be issued repeatedly during the lifetime of the connection. Implementation of this feature is up to individual vendors.  
  Once authentication is complete, the next stage involves defining the particular Network Control Protocols to be encapsulated within PPP. The only choices generally available in popular PC PPP stack implementations are IPCP, for encapsulating IP traffic, and IPXCP for encapsulating IPX traffic; however, a PPP session can support both simultaneously. Each NCP protocol will negotiate the specifics of communication for its Network layer protocol. Once the NCP negotiations are complete, the two endpoints can exchange data at the Data Link (layer 2) level for each of the Network (layer 3) protocols configured. Now that we know what PPP does, let's take a brief look at its frame format, as illustrated in Fig. 6-19.  
   
  Figure 6-19: The PPP frame format  
  This frame contains both useful and redundant fields. The redundant fields are the address and control fields, which always carry the same entries. The address field contains an all-ones byte, the layer 2 broadcast address. (Because PPP only connects two entities together, specific layer 2 addressing is not necessary and individual layer 2 addresses are not assigned.) The control field always contains the binary value 00000011, which defines the type of communication used. The useful fields are the flag, to indicate the start of a frame, the protocol field, which identifies the layer 3 protocol encapsulated in the frame, and the Frame Check Sequence to ensure no errors were introduced into the frame during transmission.  
  Asynchronous PPP Configurations.     There is much to be said for the configuration of efficient asynchronous communications, some of which can be considered more art than science. We will examine the most common commands and then consider how these commands can be put together in a sample configuration.  
  The first command, as always when specifying a link layer protocol, is to define the encapsulation for the chosen interface. This is achieved with the following commands:  
  Router1(config)#interface async 1  
  Router1(config-int)#encapsulation ppp  
  If we are specifying PPP encapsulation, this implies that the asynchronous port will be used for a network connection. We therefore should place the port in dedicated mode. The mode choice for an async port is either dedicated or interactive. Placing the port in interactive mode presents the user with a command prompt and allows the user to manually input usernames, passwords, and other connection-related information. For security reasons, I prefer to keep the async mode as dedicated, which is achieved with the following command:  
  Router1(config-int)#async mode dedicated  
  Next you will want to enable Van Jacobson header compression. In reality, compressing headers makes comparatively little difference in link bandwidth consumed by the protocol, but with asynchronous communications you should do everything possible to improve throughput. Header compression is turned on by default, but it does not hurt to enable it in case it had been previously disabled. This is achieved in interface configuration mode:  
  Router1(config-int)#ip tcp header-compression  
  The next issue is to assign IP addresses to the ports and to computers connecting to those ports. You have a choice, either to hard-code IP addresses into computers connecting to the async ports, or have the address assigned to the computer when it connects to the async port. If you choose the first option, you must ensure that the IP address assigned to the computer dialing in is in the same subnet as the address range assigned to the async ports themselves.  
  My preference is to have the IP address assigned to the computer by the async port upon connection. This makes life simpler and does not restrict a computer to being able to dial in to only one location. To have the async interface assign an IP address to a computer when it connects, three separate configurations need to take place. First the async port must be given an unnumbered address. (IP unnumbered is covered more fully in Chap. 7.) Next, the async port must be configured to deliver a specific IP address to the connecting computer. Finally, the connecting computer must have no IP address configured. The two entries in the router port configuration, to define IP unnumbered and 193.1.1.1 as the address assigned to a connecting computer, are as follows:  
  Router1(config)#interface async1  
  Router1(config-int)#ip unnumbered ethernet 0  
  Router1(config-int)#async default ip address 193.1.1.1  
  Next we discuss Asynchronous Control Character Maps (ACCMs). Flow control between asynchronous devices can either be of the hardware or the software variety. Hardware flow control relies on pin signaling, such as the state of the Data Set Ready (DSR) or Data Terminal Ready (DTR) pins to stop and start transmission. Software flow control uses special characters transmitted between asynchronous devices to stop and start transmission. When relying on characters transmitted between devices to signal modem actions, there is always a danger that strings within the data transmitted will match these special command strings and be inappropriately interpreted by the modems.  
  An ACCM can be configured to tell the port to ignore specified control characters within the data stream. The ACCM value that tells an async port to ignore XON/XOFF (software flow control) characters in the data transmitted is 000A0000 in hexadecimal. This is the default value; if a TCP stack on the computer connecting to the async port does not support ACCM negotiation, however, the port will be forced to use an ACCM of FFFFFFFF. In this case, it is useful to manually set the ACCM with the following command:  
  Router1(config-int)#ppp accm match 000a0000  
  Next, we want to enable CHAP authentication on the link. This is done in two stages; first the CHAP user name and password are set in global configuration, then CHAP is enabled on the desired interface. This is achieved through the following commands:  
  Router1(config)#username chris password lewis  
  Router1(config)#interface async 1  
  Router1(config-int)#ppp authentication chap  
  If an asynchronous router is being used to route traffic from a LAN to a dial-up or other slow link, it can be desirable to slow down the speed at which packets are switched from one interface to another. If packets are switched from an Ethernet port running at 10 Mbps directly to an async port running at 19.2 kbps, the async port can quickly get overwhelmed. Entering the no ip route-cache command as shown below disables fast switching on the interface. Effectively, this command, entered for each async interface in use, stops the router from caching destination addresses and forces a routing table lookup every time a packet needs to be routed, which slows the rate at which packets are delivered to the port.  
  Router1(config-int)#no ip route-cache  
  One aspect of asynchronous communication that causes endless confusion is the DTE rate configured for a port and its meaning in terms of data throughput on an async line. The receive and transmit DTE rate of async port 1 is set by the following commands, to 38,400 bits per second.  
  Router1(config)#line 1  
  Router1(config-line)#rxspeed 38400  
  Router1(config-line)#txspeed 38400  
  In asynchronous communications, the DTE rate as defined above dictates the speed at which each packet is sent from the router port to the modem. If the modem can only transfer data across a dial-up link at 14.4 kbps, it will use its flow control procedures to stop more packets from coming out of the router port than it can safely transfer across the link. Thus, over the course of 10 or 20 seconds, the amount of data transferred between the router port and the modem port will not be greater than an average of 14.4 kbps; however, each packet that the router does transmit will be received at a speed of 38.4 kbps from the device sending async characters.  
  These days most modems employ V.42bis compression, which will allow a modem to sustain effective throughputs that are higher than the modem-to-modem connection rate. V.42 compression is generally quoted at providing up to four times the data throughput that the connection rate would suggest. For example, with four-to-one compression, a 14.4 kbps link will support 57.6 kbps throughput. The effective compression ratio is determined by how compressible the data being transported is. Compressible data includes things such as ASCII text, although binary file transfers are not normally very compressible.  
  In brief, V.42bis compression looks for common sequences of bits and the modems agree to assign special characters to represent these often-repeated character sequences. By transmitting a special character, the modem may have to transfer only 1 byte of data, rather than the 4 bytes that both modems know it represents. Once a receiving modem receives a special character, it will send out the full associated character string on its DTE port.  
  Many newcomers to the world of asynchronous communications ask why, even if the DTE rate is set to 115,200 bps, communications across an async link are so slow, often slower than an ISDN link operating at 64 kbps. The answer is that you very rarely get sustained throughput of 115,200 on an async link. While each packet may be transferred between the router and modem at 115,200 bps, the modem flow control will stop the router port from transmitting continuously at that speed.  
  Chapter 8 gets into troubleshooting serial communication problems in more depth, but two configuration commands that help asynchronous communications are worth considering here. The first is the hold-queue command.  
  The hold queue of each interface has a specified size, which is the number of packets waiting to be transmitted that it can hold before the interface starts dropping packets. This value can be set for both incoming and outgoing packets. For asynchronous interfaces, it is worthwhile increasing the sizes of both the incoming and outgoing hold queues, which in the following example increases both values to 90.  
  Router1(config-int)#hold-queue 90 in  
  Router1(config-int)#hold-queue 90 out  
  If an interface (Async 1, for example) is exceeding its hold queue limits, an increased number of drops will be seen in the show interface async 1 command. Drops also can increase if the router buffers for given packet sizes are overflowing. The second command we will overview here is the one that sets the number of packet buffers available in the router. To view the state of packet buffer use, enter the show buffers command. The output will show you the number of small, medium, large, very large, and huge buffers used and available, and the number of occasions on which buffers of a type were needed but a packet was dropped because none were available (shown as failures).  
  A point to note is that packets can be dropped even if the maximum number of buffers has not been exceeded. This phenomenon occurs if several packets of one particular size arrive at the router very quickly and the router cannot create buffers fast enough. If you suspect this may be happening, you can set the number of buffers of a given size to be permanently available. The following is an extract from a router configuration that has had its medium-size buffer allocation altered from the default.  
  !  
  buffers medium initial 40  
  buffers medium min-free 20  
  buffers medium permanent 50  
  buffers medium max-free 40  
  The first entry defines the number of temporary buffers that are to be available after a reload, which is useful for high-traffic environments. The second statement forces the router to try to always have 20 medium buffers free; if a traffic surge reduces the number of free medium buffers to below 20, the router automatically will try to create more. The third entry defines 50 permanent buffers, which once created are not retrieved by the IOS for reuse of their memory allocation. Finally, the max-free 40 entry ensures that memory is not wasted on unused buffers by returning memory used by more than 40 free medium buffers to the router's general memory pool.  
  Synchronous PPP Configurations.     If a WAN is being built based on point-to-point links between router serial ports, the popular choices for the link-level encapsulation are the default Cisco version of HDLC and PPP operating in synchronous mode. PPP is a more modern protocol and offers better link-level authentication than Cisco's HDLC, but there is one compelling reason to build your network based on the Cisco HDLC. Consider Fig. 6-20.  
   
  Figure 6-20: Router serial interfaces connected via CSU/DSU devices  
  Suppose the line protocol on the connected Serial 0 ports will not come up, but everything looks good with the interface configurations, so you want to test the line connecting the two CSU/DSU devices. A good way to start troubleshooting this situation is to first put CSU/DSU 1 in loopback and see if the line protocol on the Serial 0 interface of router 1 comes up, as shown by the show interface serial 0 command. If this is okay, take CSU/DSU 1 out of loopback and put CSU/DSU 2 in loopback, then see if the Serial 0 interface comes up as a result. This is a simple yet powerful technique for locating a problem in a communication system, and works well if the encapsulation for both Serial 0 interfaces is set to default Cisco HDLC. With PPP encapsulation, an interface protocol will not reach an up state when connected to a CSU/DSU in loopback; you therefore do not have this type of troubleshooting as an option.  
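  If the link is already running PPP when you need to perform these loopback tests, one approach (a sketch; Serial 0 is assumed to be the port under test) is to temporarily return both Serial 0 interfaces to the default encapsulation for the duration of the test, then restore PPP afterward:  
  Router1(config)#interface serial 0  
  Router1(config-int)#encapsulation hdlc  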
  A situation in which you really do need to use synchronous PPP is one in which you decide to use the Cisco 1000 LAN Extender product. This product line was designed to link remote offices to a central location at low cost and with low maintenance. These LAN extenders are not real routers; they are layer 2 bridge devices. Figure 6-21 illustrates a typical configuration for a low-cost remote location installation. We will take some time now to explore configuring one of these devices.  
  In this configuration, the Cisco 1001 has two interfaces connected, a V.35 connector to the CSU/DSU and an Ethernet connection to the local PC. There is no configuration in the 1001; all configuration is kept in the central router, which makes installation and remote support of this device simple enough that even nontechnical staff can connect the device as needed. There are two parts to the configuration of the central router interface, one for the router 1 serial interface and one for the virtual LAN extender, which is configured as a LEX interface on router 1. The Serial 0 interface configuration for router 1 in Fig. 6-21 is given as follows:  
  !  
  interface serial 0  
  no ip address  
  encapsulation ppp  
  !  
  The configuration for the LEX port that also exists on router 1 is as follows:  
  !  
  interface lex 0  
  ip address 195.1.1.1 255.255.255.0  
  lex burned-in-address 0000.0c32.8165  
  !  
   
  Figure 6-21: LAN extender network connections  
  The lex burned-in-address is the MAC address of the Cisco 1001 being installed. When a LAN extender is installed, all the workstations connected to it are given addresses on the same subnet as the LEX port. With the configuration above, the PC in Fig. 6-21 would be addressed 195.1.1.2, while subsequent PCs connected to the 1001 device LAN would have addresses 195.1.1.3, 195.1.1.4, and so forth.  
  As you can see, all configuration that makes the connection unique is under the LEX configuration. This means that you can connect a remote LAN extender to any serial port that has no IP address and PPP encapsulation configured. When a LEX first connects to the central router, it will try to bind to whatever port it is connected to. To see the status of a LEX connection, issue the command show interface lex 0, which will tell you what port the LEX is bound to, the LEX MAC address, its IP address, and the Ethernet encapsulation type. Once you know to what port the LEX is bound, you can use the show interface serial 0 command (assuming the LEX is bound to Serial 0) to see the number of interface resets, carrier transitions, and CSU/DSU signals.  
  The price to pay for this relatively cheap and simple way of connecting remote devices is in the area of advanced troubleshooting information. To report problems, the LEX relies upon a set of front panel lights. Under normal conditions, these lights flicker to indicate traffic passing over the link. During error conditions, the lights blink solidly a number of times. The number of times the lights blink indicates the nature of the problem. It can be a challenge to get nontechnical users to notice the difference between a flicker and a solid blink and to correctly count the number of blinks generated by the LEX. The errors reported by the front panel lights are as follows:  
  One blink    The serial link is down.  
  Two blinks    No clock received from CSU/DSU.  
  Three blinks    Excessive CRC errors on the line.  
  Four blinks    Noisy line.  
  Five blinks    CSU/DSU in loopback.  
  Six blinks    PPP link negotiation failed.  
  SDLC  
  Many of the protocols we have discussed are based on the High-level Data Link Control protocol (HDLC) format, which is related to IBM's Synchronous Data Link Control (SDLC). It is worth briefly looking at how these protocols work and at their interrelationships.  
  SDLC was designed for serial link communications in SNA networks in the 1970s and is in use for that purpose today. SDLC is a bit-oriented protocol; previously, protocols like IBM's BiSync and DEC's DDCMP were byte-oriented. Bit-oriented protocols offer more flexibility, are more efficient, and have now completely replaced byte-oriented protocols for new product development. Once IBM had written SDLC, it was used as a basis for further development by several standards bodies to produce SDLC variants, which are listed as follows:  
    The ISO based its development of the HDLC protocol on SDLC.  
    LAPB was created by the CCITT (now called the ITU, the International Telecommunications Union), which used HDLC as its starting point.  
    The IEEE 802.2 committee used HDLC as a base for developing its link-level protocols for the LAN environment.  
  It is worth noting that the IBM QLLC protocol can be implemented at the Data Link layer when transporting SNA data over X.25 networks. In this scenario, QLLC and X.25 replace SDLC in the SNA protocol stack.  
  An IBM protocol, SDLC is geared toward everything being controlled by a central processor. SDLC defines a primary end and a secondary end for communications, with the primary end establishing and clearing links and polling the secondary ends to see if they want to communicate with the primary. An SDLC primary can communicate with one or more secondary devices via a point-to-point, multipoint (star), or loop (ring) topology. The SDLC frame format is shown in Fig. 6-22. The HDLC, LAPB, and 802.2 variants of SDLC do not define primary and secondary devices in communication.  
   
  Figure 6-22: SDLC frame format  
  SDLC frames are either Information frames for carrying user data and higher layer protocol information, Supervisory for status reporting, or Unnumbered for initializing an SDLC secondary.  
  HDLC is closely related to SDLC, but instead of one transfer mode, HDLC supports three. HDLC's Normal Response Mode is how SDLC operates, wherein a secondary cannot communicate until the primary gives it permission. The HDLC Asynchronous Response Mode allows a secondary to initiate communication, which is the method used by the X.25 LAP protocol. HDLC also supports Asynchronous Balanced Mode, which is used by the X.25 LAPB protocol and allows any node to communicate with any other without permission from a primary.  
  LAPB operates only in Asynchronous Balanced Mode, allowing either the DTE or DCE device to initiate calls. The device that initiates calls becomes the primary for the duration of that call.  
  The IEEE 802.2 committee split the ISO Data Link layer in two, the upper half of which is the Logical Link Control (LLC) sublayer, and the lower half being the Media Access Control (MAC) sublayer. The 802.2 LLC layer interfaces with layer 3 protocols via Service Access Points (SAPs) and different LAN media, such as 802.3 and 802.5 implemented at the MAC layer. IEEE 802.2 has a similar frame format to SDLC.  
  ISDN  
  ISDN stands for Integrated Services Digital Network; it is a synchronous dial-up service offered by most local telephone companies these days. Unlike many of the other networking technologies we have examined, ISDN is not a packet-switching technology; rather, it is a circuit-switched technology, similar to the plain old telephone service (POTS). My recommendation when implementing ISDN for a single PC is to not use one of the devices that connect to the serial COMx port on a PC. They may be easy to set up, but you will not get the full benefits of ISDN communications. These devices have to use asynchronous communications to exchange data with the PC serial port. You will get better performance if you connect ISDN synchronously all the way to your PC, as the serial port circuitry is not as efficient as a LAN card at getting data on and off your PC's bus.  
  Cisco offers a standard solution for this with the 1000-series LAN extenders. The 1004 LAN extender will take an ISDN BRI circuit in one side and an RJ-45 connector for Ethernet connectivity in the other side. If you then use a crossover RJ-45 cable, a twisted-pair Ethernet card in the PC can be connected directly to the Cisco 1004. This is shown in Fig. 6-23. In fact, the configuration shown in Fig. 6-23 could support an additional 22 remote PCs simultaneously establishing ISDN connections to the one PRI at the central site.  
   
  Figure 6-23: Communicating synchronously from a remote PC to a router via ISDN  
  ISDN Terminology.     With ISDN there is a slew of new terms that might seem daunting at first; in reality they are quite simple, so let's define them first.  
  Basic Rate ISDN (BRI).     A BRI consists of two B channels and one D channel. The B channels each have 64 kbps capacity, with the D channel being used for call signaling and having 16 kbps capacity. The two B (known as bearer) channels can be used for voice or data, and can be combined to provide 128 kbps throughput. BRI access is via the same copper telephone lines that now provide regular telephone service to homes. The attraction of this technology for telephone companies is that it allows higher-speed services to be delivered using the existing cabling. In many locations, however, the existing copper has had to be upgraded to support the encoding necessary to transmit this amount of data over a line.  
  Primary Rate ISDN (PRI).    PRI services are delivered by a T-1 circuit (1.544 Mbps) in the United States and an E-1 (2.048 Mbps) in Europe. In the United States, a T-1 delivering PRI is split into 24 channels of 64 kbps, one of which is used as the D channel for signaling, leaving 23 for voice, data, or bonding together to form a connection of higher than 64 kbps. When a T-1 circuit provides channelized service, 8 kbps is lost to the channelization process, giving an available bandwidth of 1.536 Mbps. A typical application for PRI would be to provide 23 central dial ports for remote branches to dial into on an as-needed basis. A PRI can be terminated in an RJ-45 type connector, which can be directly connected to a Cisco router device such as an AS-5200, or a PRI interface on a 7000-series router. This means that you can provide 23 dial connections on one physical interface without having to worry about individual modems and cabling.  
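  As a sketch of how a PRI might be configured on a channelized T-1 controller (the framing and line coding values shown are typical for the United States, but your carrier dictates the correct ones), the following commands define the PRI group; the D channel then appears as interface Serial 0:23:  
  Router1(config)#controller t1 0  
  Router1(config-controller)#framing esf  
  Router1(config-controller)#linecode b8zs  
  Router1(config-controller)#pri-group timeslots 1-24  
  Router1(config)#interface serial 0:23  
  Router1(config-int)#encapsulation ppp  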
  Link Access Protocol D Channel.     LAPD is a derivative of X.25's LAPB protocol and is used by the ISDN equipment at your site to communicate with the telephone company's central office switch for as long as the connection is active.  
  Terminal Endpoint Identifier.     TEI is a unique identifier for each of the up to eight devices that can hang off of an ISDN connection. Typically the TEI will be configured by the network via D channel exchanges, but it may be necessary to configure these identifiers by hand.  
  Q.931.     The D channel uses control messages that conform to the ITU Q.931 specification to establish, terminate, and report on status of B channel connections. It is worth confirming that these control signals pass through a separate D channel (termed out-of-band signaling); if out-of-band signaling is not available, an extra 8 kbps is taken out of each B channel, leaving only 56 kbps available for data transmission.  
  SAPI.     Frame relay or X.25 frames as well as control messages can traverse the D channel. It is the Service Access Point Identifier field in the LAPD frame that identifies which type of traffic is using the link.  
  SPIDs and DNs.     A SAPI/TEI pair identifies a particular device on a given ISDN link, but has only local significance. For a call to traverse an ISDN network, network-wide parameters need to be defined. In concept, a SAPI/TEI pair is analogous to an LCN in an X.25 network, in that the LCN is significant between a DTE and DCE, but an NUA is necessary to route traffic across an X.25 network. To establish an ISDN connection, you need a directory number (DN), which looks identical to a regular telephone number. With a DN, you know what number to use to call another ISDN user. That is not enough, however. We know that an ISDN connection may have many different types of devices attached, so to inform the ISDN switch in the telephone company's network what you have at your location, the telephone company will assign you one or more service profile IDs (SPIDs) that identify the equipment you are using on your ISDN connection.  
  Switch Types.     You must know the Central Office switch type used by your ISDN provider and configure that in the device connecting to the ISDN circuit. In the United States, the switch type might be AT&T 5ess or 4ess, or Northern Telecom DMS-100; in the United Kingdom, it might be Net3 or Net5; and in Germany, it is 1TR6.  
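  As an illustration (the switch type keyword and SPID values shown here are purely hypothetical; your telephone company supplies the real values), configuring an AT&T 5ess switch type and the two SPIDs for a BRI would look like this:  
  Router1(config)#isdn switch-type basic-5ess  
  Router1(config)#interface bri 0  
  Router1(config-int)#isdn spid1 21255501110101  
  Router1(config-int)#isdn spid2 21255501120101  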
  Network Termination 1 (NT1).     An NT1 is a device that converts ISDN digital line communications to the type used by the BRI interface. Outside of the United States, this is typically supplied by the telephone company, but inside the United States, subscribers generally have to supply their own NT1. In effect, it is the circuitry that interfaces your ISDN connection to the ISDN local loop network.  
  Terminal Endpoint Devices (TE1 and TE2).     A TE1 is a native ISDN interface on a piece of communications equipment, such as a BRI interface on a Cisco router. A TE2 device is one that requires a Terminal Adapter (TA) to generate BRI signals for it. The TA converts RS-232 or V.35 signals into BRI signals. A Cisco router serial interface needs to be connected to an NT1 via a TA device.  
  Let's see how a typical ISDN call is established by referring to Fig. 6-24, which shows how a TE1 is directly connected to the NT1 device, whereas a TE2 needs a TA device to connect to the ISDN network. Imagine we want to call a digital telephone that has an ISDN phone number of 123-4567. To do this, the originating equipment connects to the telephone company's Central Office switch. The CO switch knows what type of device the originating equipment is through its associated SPID. Next, the number 123-4567 is passed to the switch as the destination. The CO switch will then locate via SPID an appropriate device at the destination number that can accept the call. A D channel conversation takes place and a B channel is allocated from source to destination to transport data between the two ISDN devices.  
  Configuring ISDN BRI Services.     ISDN connections generally are configured as a type of Dial on Demand Routing (DDR). This provides for a dial backup service to a leased line link, a dial connection for occasional Internet access, or a remote site dialing a central location for occasional server access.  
   
  Figure 6-24: ISDN device interconnections  
  The first example is for a Cisco 2501 that has its Serial 0 interface attached to a leased line connected to a central site. The Serial 1 port is connected to a Terminal Adapter device that dials back to the central site if the primary leased line on Serial 0 fails. In this application, some of the configuration commands need to be entered into the Terminal Adapter and some into the router.  
  For the Terminal Adapter configuration, we need to identify the ISDN provider's switch type, the number to dial, and the SPID assigned for the connection. In the router, we need to identify what will cause a dial connection to be made. Typically for this type of application, the router serial port connected to the Terminal Adapter will raise the DTR signal when a connection is needed; the TA will sense this and, having all the information necessary to make the desired connection, will initiate a call. This is the relevant configuration for the Serial 0 and Serial 1 ports:  
  interface serial 0  
  backup delay 10 45  
  backup interface serial 1  
  !  
  interface serial 1  
  ip unnumbered serial 0  
  The command backup delay 10 45 for Serial 0 configures the router to attempt to establish a backup connection if the primary connection loses a carrier signal for more than 10 seconds, and will maintain that backup connection until the primary carrier signal has been constantly up for at least 45 seconds.  
  The command backup interface serial 1 tells the router that Serial 1 is to be used as the backup interface. This command keeps DTR low until the port is activated to establish a connection.
  The only necessary command for the Serial 1 port is the one giving it an IP address, which in this case is an unnumbered IP address, linked to the address of Serial 0. (IP unnumbered will be covered in Chap. 7.) There typically will be an ISDN circuit ready at the central site to receive this call. Full connectivity is restored to the remote location when the ISDN circuit is established and the routing table on the central site router has adjusted so that it knows to get to the remote router via a new interface. (The routing protocol implemented, such as IGRP, will make these adjustments for you.)  
  The second example is for a router that has a BRI port and therefore does not need a TA device. The goal is to provide connectivity to a set of centrally located servers on an as-needed basis. The access needed from this router to the servers is sporadic, so the ISDN solution is implemented because a leased line connection is not cost-justified. This setup is illustrated in Fig. 6-25, along with pertinent extracts from the router configurations.  
   
  Figure 6-25: A dial-on-demand routing solution using a router BRI interface  
  In this application, the ISDN link is made only when there is traffic that the DDR interface on router 1 deems "interesting." We have to tell the router what traffic is worth establishing a link for. For example, you probably will not want the ISDN link to be brought up every time a router wants to send a regular broadcast IGRP update. These types of links are usually set up with static routes to reduce ISDN utilization. You would therefore not enable a routing protocol process on router 1. The process to define a link can be summarized as follows:  
    Configure global ISDN parameters.  
    Identify the packet types that will initiate an ISDN connection.  
    Define and configure the interface over which these identified packets will be sent out.  
    Set the call parameters to be used for establishing the DDR link, such as the number to call.  
  The configurations illustrated in Fig. 6-25 are similar enough for the remote and central router BRI ports that we will explain only the remote router BRI port configuration in detail.  
  Global command isdn switch-type basic-dms100 defines the switch type used by the ISDN provider company.  
  Global command dialer-list 5 protocol ip permit is paired with the dialer-group 5 interface command. The dialer-list command here states that IP packets destined for, or routed through, the interface tagged as belonging to dialer-group 5 are "interesting" and will trigger a DDR connection. More complex permissions can be defined through the use of access lists. To use an access list, one must first be defined in global configuration. If access list 101 is created, it can then be associated with the dialer list via the command dialer-list 5 list 101. Appropriately specifying access list 101 will allow you to define specific IP addresses or protocols (such as Telnet or FTP) that may or may not cause a DDR connection to be made.
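  For instance, access list 101 could be written so that only Telnet traffic bound for the central servers' subnet counts as interesting. The subnet below is the one used later in this example; the specific rule (Telnet only) is an illustrative assumption, not taken from the figure:

```
access-list 101 permit tcp any 164.3.6.0 0.0.0.255 eq telnet
!
dialer-list 5 list 101
```

  With this in place, routine traffic such as routing updates or pings would not bring up the ISDN link; only Telnet sessions to the server subnet would.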
  The commands encapsulation ppp, ip address, and ip route are as explained elsewhere.  
  Command dialer-group 5 identifies the BRI 0 interface as belonging to this group, essentially marking BRI 0 as the port that will respond to the interesting packets defined by dialer-list 5.  
  Command isdn spid1 0987654321 sets the SPID number for this ISDN connection as supplied by the ISDN connection provider.  
  Command dialer idle-timeout 200 specifies that if there is no traffic on the connection for 200 seconds, the link will be dropped.  
  Command dialer map ip 164.3.8.2 name router2 1234567 maps the telephone number 1234567 to the IP address 164.3.8.2. If router 1 wants to send a packet to the 164.3.6.0 subnet, its routing table will identify 164.3.8.2 as the address to which to forward the packet, setting off a DDR connection and causing router 1 to dial 1234567. This command can be modified if you know that the number you are dialing is a 56 kbps rather than 64 kbps ISDN connection. In this case, the command would be dialer map ip 164.3.8.2 name router2 speed 56 1234567. If we want to make router 2 a receive-only connection, we can input this command with no phone number argument. You must give the hostname of the router being called in this command, which must be the same as that defined in the username command (in this case, router2).
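  Side by side, the three forms of the command just described are:

```
dialer map ip 164.3.8.2 name router2 1234567
dialer map ip 164.3.8.2 name router2 speed 56 1234567
dialer map ip 164.3.8.2 name router2
```

  The first is the standard 64 kbps form, the second forces a 56 kbps call, and the third (no phone number) makes the mapping receive-only.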
  Command ppp authentication chap identifies that CHAP authentication will be enforced prior to the granting of a connection.  
  Command username router2 password aaa generically identifies a username and password pair that is authenticated to connect to this router. If a workstation were connecting, it would have to use "router2" and "aaa" to successfully negotiate CHAP. We now, however, have two routers that are connecting together and negotiating authentication. The username that each router will use is its hostname, and the password that it will use is the password set in this command. What all this means is that the username configured on router 1 must be the same as the hostname of router 2, and that both routers must have the same password set.
  Command ip route 164.3.6.0 255.255.255.0 164.3.8.2 defines a static route in the routing table that identifies that the BRI interface on router 2 is the next hop for subnet 164.3.6.0.  
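  Pulling the commands just discussed into one place, the remote router's configuration would look something like the sketch below. The BRI interface address 164.3.8.1 is an assumption made for illustration (the actual addressing is carried in Fig. 6-25, not the text); everything else comes directly from the commands described above:

```
isdn switch-type basic-dms100
username router2 password aaa
dialer-list 5 protocol ip permit
!
interface bri 0
 ! 164.3.8.1 is assumed here; the peer BRI is 164.3.8.2
 ip address 164.3.8.1 255.255.255.0
 encapsulation ppp
 ppp authentication chap
 dialer-group 5
 dialer idle-timeout 200
 dialer map ip 164.3.8.2 name router2 1234567
 isdn spid1 0987654321
!
ip route 164.3.6.0 255.255.255.0 164.3.8.2
```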
  The third example is a user who wants to establish an ISDN connection from home to a central site when a network resource located at a central site is required. This type of requirement is well-suited to the Cisco 1004 configuration illustrated in Fig. 6-23. The 1004 is used in the United States because it has a built-in NT1, whereas the 1003 does not have an NT1 and is used elsewhere in the world; the NT1 in that case is usually supplied by the ISDN provider. The Cisco 1004 is very different from the 1001 discussed earlier. The 1004 has a PCMCIA slot for a memory card that holds the configuration. The 1004 is a router, running the full Cisco IOS, whereas the 1001 is a bridge.  
  The configuration for a 1004 is therefore the same as the remote router in the second example discussed above. A 1004 memory card can be configured at a central location and sent out to a remote site for a nontechnical person to insert in the PCMCIA slot.  
  Configuring ISDN PRI Services.     In this section we will discuss how a PRI could be configured for multiple sites to dial into. A PRI is effectively 23 B channels and one D channel, each with 64 kbps bandwidth, all on one physical link. When setting up a central location with many interfaces available, it is more efficient to set up the interfaces so that a remote site can dial into any interface. This gives you more efficient use of the interfaces and you do not have to dedicate any to particular sites. Working on the theory that not all remote sites are connected at the same time, setting up the PRI to allow any site to dial any interface allows you to set up fewer interfaces than there are remote sites, yet provide service to all locations as they need it.  
  There are two methods for allowing remote sites to dial into any interface in a pool. With the first, the remote site gets its address from the interface it dials into at connection time. In the second method, the PRI is configured as some type of rotary group and has dynamic routing enabled so that all interfaces act as one pool of interfaces. The first option is best suited to single workstations dialing in, typically on asynchronous communications. The reason for this is that a single workstation does not need an IP address when it is operating in a standalone mode, and gets one when it needs one for networking. The second option is better suited for providing central backup for remote LANs. IP addresses will be assigned to the workstations on these LANs prior to establishment of the backup link, so some type of dynamic routing is necessary for the rotary to operate effectively.  
  Now we will examine the second option. There are some additional steps involved in setting up a PRI that are not necessary when setting up a BRI. A PRI is delivered on a T-1 in the United States and Japan, and on an E-1 elsewhere in the world. The settings shown here are for a T-1 circuit. To connect to a PRI, you need a CT1 Network Processor Module for a 4x00-series router, or a Multichannel Interface Processor (MIP) for a 7x00-series router. Let's assume we are using a 4700 router for the commands that follow.
   
  Figure 6-26: PRI configuration for remote routers to connect to via ISDN  
  Figure 6-26 illustrates a typical configuration for setting up a PRI to act as a rotary pool of interfaces for incoming connections that already have a configured IP address. The figure shows only one remote site (router1), but the configuration presented should support 50 or more remote sites occasionally dialing in. The key difference between the setup for a single dial connection and this multiconnection setup is that the username/password pair for each remote router must be configured on the central 4700. In addition, a dialer map statement must be present in the 4700 dialer1 interface section for each remote router. On subsequent versions of the Cisco IOS, dialer map statements are not necessary, and the router uses Inverse ARP in the same manner as it does in a frame relay network to map IP addresses to router names and telephone numbers. Let's examine each command in this configuration.  
    Command isdn switch-type dms-100 defines the switch type used by the telephone company as a Northern Telecom DMS-100.  
    Command username router1 password aaa defines the valid username and password pairs for all remote routers dialing in.  
    Command controller t1 0 sets the necessary configuration for the T1 controller card in the 4700 slot 0.  
    Command framing esf configures the T-1 controller in slot 0 for Extended Super Frame (ESF) framing, the framing type used in the United States.
    Command linecode b8zs defines the line-code type as Bipolar with 8-Zero Substitution (B8ZS). This is the value required for T-1 PRIs in the United States.
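  Taken together, the controller section described in these steps would look like the following sketch. The pri-group command, which assigns the T-1's timeslots to the PRI, is implied by the configuration in Fig. 6-26 rather than quoted in the text, so treat that line as an assumption:

```
controller t1 0
 framing esf
 linecode b8zs
 pri-group timeslots 1-24
```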
  Command interface dialer1 is there to avoid having to enter the same thing into the router configuration multiple times. (There is no physical dialer interface.) This command enables you to specify a configuration that can be applied to multiple interfaces. To give a physical interface the configuration that is specified in the interface dialer section (in this case, dialer 1), put a single command in the physical interface to make it a member of the specific rotary group (in this case, dialer rotary group 1).  
  Command ip unnumbered Ethernet0 is covered more fully in Chap. 7, but in this instance it lets the dialer 1 interface "borrow" an IP address from the Ethernet port. The BRI port at the remote end also will have an IP unnumbered configuration. IP unnumbered enables point-to-point connections to route packets between the networks at each end of the link, without dedicating a specific subnet number to the point-to-point link. Without this feature, it would not be possible to pool the central PRI connections. If each PRI channel had its own subnet, it could only directly communicate with other IP devices on that same subnet.  
  The remaining commands of interest are as follows:  
    Command encapsulation ppp sets the encapsulation to PPP.  
    Command no ip route-cache disables fast switching on this interface, forcing packets to be process-switched; this throttles the rate at which packets arriving from a fast link are switched onto this slower link.
    Command dialer idle-timeout 300 drops the connection after an idle time of 300 seconds.  
    Command dialer-group 1 identifies the traffic that is to pass over the link as specified in the dialer list command.  
    Command ppp authentication chap was discussed in the previous section on BRI.  
    Command dialer map ip 164.8.4.1 name router1 was discussed in the previous section.  
    Command dialer list 1 protocol ip permit was discussed in the previous section.  
    Command interface serial 0:23 identifies the 24th channel, the D channel, because the channel numbers start at zero. The commands in this section configure all the B channel interface parameters.  
    Command dialer rotary-group 1 applies all the configuration commands specified in the interface dialer 1 section to the B channels.  
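  Assembled from the commands just walked through (Fig. 6-26 carries the authoritative version; this sketch simply orders them as they would appear in a configuration, with one username and one dialer map shown where in practice there would be one pair per remote router):

```
isdn switch-type dms-100
username router1 password aaa
!
controller t1 0
 framing esf
 linecode b8zs
!
interface dialer1
 ip unnumbered Ethernet0
 encapsulation ppp
 no ip route-cache
 dialer idle-timeout 300
 dialer-group 1
 ppp authentication chap
 dialer map ip 164.8.4.1 name router1
!
interface serial 0:23
 dialer rotary-group 1
!
dialer-list 1 protocol ip permit
```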
  Simple ISDN Monitoring.     There are two basic commands for viewing the status of ISDN BRI connections: the show controller bri and show interface bri commands.  
  Command show controller bri displays status regarding the physical connection of the ISDN D and B channels. The most useful part of this display is that it states for each B and D channel whether the layer 1 (Physical layer) connection is activated. This is useful when performing remote diagnostics. If you can Telnet to the router, this command will tell you if a functioning ISDN line has been connected to the BRI port. The equivalent command for a PRI connection is the show controller t1 command.  
  Command show interface bri0 1 2 shows the connection status of the two B channels. If the numbers 1 and 2 are omitted, this command displays D channel information. The sort of information this command gives you includes whether PPP protocols such as LCP and IPCP negotiated successfully, the input and output data rates, the number of packets dropped, and the state of the hold queues.
  Miscellaneous ISDN.     There are a few ISDN commands that may prove useful in special situations and they are presented here in no particular order.  
  Use command isdn caller 212654xxxx if you want to accept calls only from a specified number. The format shown here is useful if you know that calls may be originating from several extensions in the one location. The calls accepted start with 212654, but may have any value for the last four digits. If you want to accept calls from only one number, you may specify the whole number, also. Care should be taken with this command, because if the ISDN switch to which you are connected does not support this feature, using this command will block all incoming calls.  
  Command isdn not-end-to-end 56 sets the line speed for incoming calls to 56 kbps. This is useful if you know that the originating caller has a 56 kbps connection, but your receiver has a 64 kbps connection. If calls originate at one speed and are delivered at another, problems could occur with data corruption.  
  Command dialer in-band, if applied to an asynchronous interface, means that modem chat scripts will be used. If applied to a synchronous interface, it means that V.25bis commands will be used. Basically, this command configures the interface to use the bearer (B) channel for call setup and teardown on an ISDN connection, rather than the usual D channel.  
  Command dialer load-threshold 191 configures the interface to start using the second B channel when the first has reached, in this case, 75 percent capacity. The figure 191 is 75 percent of 255; the valid numerical range for this command is 1-255.  
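  The load-threshold argument is expressed as a fraction of 255 rather than as a percentage, so a quick conversion is needed to turn a desired utilization percentage into the command argument. A small sketch of that arithmetic (the function name is mine, for illustration):

```python
def load_threshold(percent):
    """Convert a desired utilization percentage (1-100) into the
    1-255 value expected by the dialer load-threshold command."""
    value = round(255 * percent / 100)
    # Clamp to the valid numerical range for the command.
    return max(1, min(255, value))

print(load_threshold(75))  # 191, as used in the example above
```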
  Command dialer string 1234567 is a simpler command to use than the dialer map command if the interface being configured is going to dial only one location. Essentially this command specifies the number that will be dialed whenever a connection is initiated.  
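  As a sketch of where these miscellaneous commands live, here they are gathered on a single BRI interface. They address unrelated situations and would not normally all appear together; the phone numbers are the placeholders used above:

```
interface bri 0
 isdn caller 212654xxxx
 isdn not-end-to-end 56
 dialer load-threshold 191
 dialer string 1234567
```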
  Command backup load x y can be used for router setups where a dialer group is not defined and you want to have an ISDN connection established to add bandwidth to a link that is getting congested. The example shown below configures a secondary line to activate (in physical terms, this means that the serial interface will raise the DTR signal) once the traffic threshold on the primary interface exceeds 80 percent. Additionally, when the combined bandwidth utilization of the primary plus secondary falls below 50 percent of the primary link's bandwidth, the secondary link will disconnect the ISDN connection.
  interface serial 0
  backup interface serial 1
  backup load 80 50
  Of course, this assumes that an external ISDN terminal adapter is used to connect the router serial interface to the ISDN line and that the terminal adapter is configured to dial on DTR and has all the necessary information to dial (destination number, switch type, etc).  
Additional WAN Technologies  
  To finish this chapter, we'll discuss some general terms, technologies, and devices used in WAN implementations. First, let's take a closer look at the Cisco serial ports that interface to WAN connections.  
  Cisco Serial Interfaces  
  Cisco serial interfaces are either built in, as with the fixed-configuration routers such as the 2500-series, or installed, as with the Fast Serial Interface Processor (FSIP) card, for modular routers such as the 7500-series. Either way, the interface itself is basically the same device that will support speeds up to T-1 (1.544 Mbps) in the United States, and E-1 (2.048 Mbps) elsewhere. The only restriction here is that the FSIP card has two 4-port modules, with each module capable of supporting four T-1 connections; only three E-1 connections can be supported simultaneously because the aggregate throughput on this card cannot exceed 8 Mbps.  
  The Cisco serial port uses a proprietary 60-pin connector and configures itself according to the cable and equipment that are connected to it. As discussed in Chap. 3, a Cisco serial port will configure itself as DTE or DCE depending on the cable end connected to it. Cisco serial port cables will terminate in EIA-232, EIA-449, X.21, or V.35 connectors. These standards have differing capabilities regarding the distance over which they can transport communications at varying speeds. Table 6.1 shows the distances over which it is safe to deliver data at varying speeds for these interface standards. These are not necessarily hard-and-fast rules; it is possible to use longer cables but if you do so, you start running the risk of frame errors being introduced by interference on the cable.  
  Table 6-1: Serial Communications Distance Limitations

  Rate (bps)      EIA-232 Distance (feet)      EIA-449, X.21, V.35 Distance (feet)
  9600            50                           1,025
  56,000          8.5                          102
  1,536,000       N/A                          50

  EIA-232 cannot carry data at a given speed over the same distance as the other interface standards because it uses unbalanced signals. The terms balanced and unbalanced are often used in data communications, so we will explain the difference between these two types of communication here.
  First we need to understand a little about what voltage is. Volts measure what is known as the electrical potential difference between two objects. Measuring a difference requires a common reference point against which to measure, in order to make the number meaningful. It's like measuring the height of a mountain with reference to sea level rather than against the next closest mountain. It's the same with volts. They normally are quoted as so many volts above what is known as ground, which is taken as 0 volts, i.e., the electrical potential of the planet Earth.
  Just as there is nothing inherently dangerous about standing on top of a tall building, there is nothing inherently dangerous about being at a certain voltage level. The danger comes if you fall off the building and come in to contact with the ground at high velocity. The same is true if you are at a given voltage level and come into contact with something at a different voltage level. When two objects at different voltage levels come into contact with each other, they will try to equalize their voltage by having current (amperes) flow between them.  
  In data communications, signals that represent data are measured in terms of a particular voltage level. In unbalanced communications, there is only one wire carrying the data signal, and its voltage level is measured against ground (i.e., the Earth's voltage level). With balanced communications, two wires are used to transmit the signal, and the signal voltage level is measured between these wires. Using two wires for data transmission significantly improves a cable's ability to deal with electrical interference.  
  The idea behind why two wires are better at dealing with electrical interference is simple. If a two-wire system is transferring data and is subject to electrical interference, both wires are similarly affected. Therefore, the interference should make no difference to the circuitry receiving the two-wire transmission, which determines the voltage level, and hence the signal, as the difference in electrical potential between the two wires. With unbalanced single-wire transmission, any electrical interference on the line directly affects the voltage potential of that wire with respect to ground. Therefore, unbalanced data transmission is less able than balanced transmission to deliver uncorrupted signals in the presence of electrical interference.  
  Serial Interface Configuration.     If a serial interface is acting as a DCE and is generating a clock signal with the clockrate command, the normal operation is for the attached DTE device to return this clock signal to the serial port. If the attached DTE device does not return the clock signal to the serial port, you should configure the port with the transmit-clock-internal command.  
  Encoding of binary 1 and 0 is normally done according to the Non-Return to Zero (NRZ) standard. With EIA-232 connections in IBM environments, you may need to change the encoding to the Non-Return to Zero Inverted (NRZI) standard as follows.
  Router1(config)#interface serial 0  
  Router1(config-int)#nrzi-encoding  
  Cisco serial ports use a 16-bit Cyclic Redundancy Check (CRC) Frame Check Sequence (FCS). If a Cisco serial interface is communicating directly with another Cisco serial interface, better performance might be obtained by using a 32-bit CRC, as fewer retransmissions occur with a 32-bit CRC. To enable 32-bit CRCs, perform the following in interface configuration mode:  
  Router1(config-int)#crc32  
  High-Speed Serial Interface.     I have mentioned technologies in this book that make use of speeds higher than T-1 speeds, for example, SMDS or frame relay delivered on a T-3 circuit. To accommodate these higher transmission rates, a modular Cisco router uses a High-Speed Serial Interface (HSSI). A single HSSI is delivered on an HSSI Interface Processor (HIP) card. The HIP card is inserted in any slot on a 7x00-series router and interfaces directly to the router bus architecture, the Cisco eXtended bus, or cxbus.
  The HSSI is now a recognized standard, known as EIA-612/613 and operating at speeds of up to 52 Mbps. This enables the interface to be used for SONET (51.84 Mbps), T-3 (45 Mbps), and E-3 (34 Mbps) services. The HSSI on the HIP is different from other Cisco serial ports in that it uses a 50-pin Centronics connector and requires a special cable to connect to the incoming line's DSU device. In other words, a regular SCSI cable will not do. Once installed and connected, the HSSI can be configured and monitored just as any other serial port. The following example applies an IP address of 193.1.1.1 to an HSSI on a HIP inserted in slot 2 of a 7x00-series router.
  Router1(config)#interface hssi 2/0  
  Router1(config-int)#ip address 193.1.1.1 255.255.255.0  
  All Cisco serial ports are numbered starting at 0, and as there is only one HSSI on a HIP, it will always be port 0. Any other special encoding necessary, such as framing type or linecode, will be specified by the supplier of the high-speed line.  
  Line Types  
  For more than 25 years, telephone companies have been converting the networks that carry voice traffic from analog to digital transmission. Nearly all of the bulk communications between telephone company switching points is digital. It is only over the last mile or so, from a CO to a home or small business, that the communication is analog.
  You may have wondered why so much of data communications is based upon 64 kbps circuits or multiples thereof. The reason is that, in digital terms, it used to take 64 kbps to cleanly transmit a voice signal. This single voice channel is referred to as a DS0. Data communications came along after the digitization of voice traffic, and therefore "piggybacked" on the voice technology in place. Let's take a brief look at the basic unit of digital transmission lines, the 64 kbps circuit.  
  Dataphone Digital Service (DDS).     Dataphone Digital Service, or DDS as it is commonly referred to, actually gives you only 56 kbps throughput. The additional 8 kbps is not available for data transfer and is used to ensure synchronization between the two ends of the DDS circuit. This is a function of the Alternate Mark Inversion (AMI) data encoding technique. DDS is essentially supplied over the same pairs of copper wires used for regular analog telephone connections, and two pairs, for a total of four wires, are needed for the service.  
  In the United States, the telephone company will install the line up to what is known as the "demarc," a point of demarcation with respect to troubleshooting responsibility. The demarc is an RJ-48 connector, to which you must connect a CSU/DSU device. In Europe and other parts of the world, the CSU/DSU is typically supplied with the circuit. The CSU/DSU is really two devices in one; the Channel Service Unit (CSU) interfaces to the telephone company's network for DDS and T-1 services, whereas the Data Service Unit (DSU) interfaces to your equipment, in this case a router.
  In many locations, 64 kbps lines are now available through the use of B8ZS (Bipolar with 8 Zero Substitution) encoding that replaces AMI. This gives you back the full 64 kbps by use of a more intelligent line coding mechanism. This 64 kbps service is known as Clear Channel Capability or Clear 64. All you need to make sure of in your CSU configuration is that it has AMI encoding for 56 kbps services and B8ZS for 64 kbps service.  
  T-1, Fractional T-1, and T-3 Services.     A T-1 is a collection of 24 DS0 circuits, and often is referred to as a DS1. T-1 service may be delivered as either channelized or unchannelized. The unchannelized option is the easiest to understand. In effect, the unchannelized T-1 acts as a 1.536 Mbps pipe that connects to one serial interface.  
  In its channelized form, the T-1 can be used to deliver 24 time slots, each having 64 kbps throughput, or the PRI service previously discussed in the section on ISDN. The channelized T-1 has two typical applications. The first is to supply 24 DS0 connections over one physical link. A T-1 configured this way can be directly connected to one of the two ports on a MIP card inserted in a 7x00-series router, or the one available port on the CT1 card inserted in a 4700-series router. Once connected, the T-1 can be addressed as 24 separate serial interfaces. The telephone company can "groom" each of the separate DS0 channels to any location on its network that is serviced by an individual DS0 circuit. This is done via a device termed the Digital Access Cross Connect (DACC).  
  Using a T-1 in this way simplifies central site equipment management considerably. Consider a head office that needs to be connected to (conveniently) 23 remote branches. All that is necessary in the head office is one T-1 connected to a MIP or CT1 card; no additional CSU/DSU or cabling requirements are necessary.  
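  On the router side, each DS0 of a channelized T-1 is broken out with a channel-group statement under the controller. The text does not show this step, so the following is a sketch for the first two channels only; the timeslot-to-channel assignments are an assumption made for illustration:

```
controller t1 0
 framing esf
 linecode b8zs
 channel-group 0 timeslots 1 speed 64
 channel-group 1 timeslots 2 speed 64
```

  Each channel group then appears as its own serial interface (serial 0:0, serial 0:1, and so on) that can be configured individually.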
  The following is an example of how to get into configuration mode for channel 22 of a channelized T-1 (the 23rd DS0 channel, since channel numbers start at zero) connected to port 0 on a MIP card that is inserted in slot 1 of a 7x00-series router.
  Router1(config)#interface serial 1/0:22  
  Router1(config-int)#  
  From here, an IP address can be assigned, encapsulation defined, and any other configuration entered, just as for any other physical serial port.  
  A T-1 also can be used as an efficient way to deliver analog phone services, but to do this, an additional piece of equipment, a channel bank, is necessary to convert the digital T-1 signals to 24 analog telephone lines. This can be useful if you need to configure many centrally located dial-up ports, into which roving users will dial with analog modems. The Cisco AS-5200 has a built-in channel bank and modems so that simply by connecting a single T-1 connector to it, you can have up to 24 modem calls answered simultaneously. In fact, the AS-5200 is even a little more clever than that. The AS-5200 also has a built-in T-1 multiplexer, giving it hybrid functionality for call answering. This means that if you connect a T-1 configured as a PRI to an AS-5200, the AS-5200 will autodetect whether the incoming call is from a digital ISDN or analog caller and will answer the call with the appropriate equipment.
  As we discussed previously for DS0 channels, a T-1 can use either AMI or B8ZS encoding and either the Extended Super Frame (ESF) or D4 framing format. As long as your T-1 equipment (I'm assuming it's a router) is configured to be the same as that used by the telephone company, you should be okay.  
  In Europe and elsewhere, multiple DS0 services are delivered on an E-1, which comprises 32 DS0 channels, giving 2.048 Mbps throughput. The E-1 uses High Density Bipolar 3 (HDB3) encoding. If you order a 256 kbps or 384 kbps circuit from your carrier, you will get a T-1 installed and be allocated only the appropriate number of DS0 channels needed to give you the desired throughput. This service is known as fractional T-1 and is a good idea. The one constant in data networking is the increased need for bandwidth, so it makes sense to install capacity that is easily upgraded, as it probably will be needed at a later stage anyway.  
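  The relationship between an ordered fractional rate and the number of DS0 channels the carrier allocates is simple arithmetic; a sketch (function name mine):

```python
DS0_KBPS = 64  # each DS0 time slot carries 64 kbps

def ds0_channels(rate_kbps):
    """Number of 64 kbps DS0 time slots needed for a fractional T-1 rate."""
    channels, remainder = divmod(rate_kbps, DS0_KBPS)
    if remainder:
        raise ValueError("fractional T-1 rates are multiples of 64 kbps")
    return channels

print(ds0_channels(256))  # 4 time slots
print(ds0_channels(384))  # 6 time slots
```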
  A T-3 or DS3 connection is a collection of 672 DS0 circuits of 64 kbps each, which gives a total throughput of 43,008 kbps. (DS3 is the term used for this speed communication over any medium, whereas T-3 is specific to transmission over copper wires.) The actual circuit speed is somewhat faster than that, but some effective bandwidth is lost to synchronization traffic. An HSSI is the only interface that a Cisco router can use to connect to a T-3.  
  The hierarchy of these circuits just discussed is as follows: a single DS3 comprises seven DS2 channels, which break out to 28 DS1 channels, generating a total of 672 DS0 channels.  
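  The hierarchy just described can be checked with a little arithmetic. The following sketch (the variable names are ours, purely for illustration) reproduces the channel counts and throughput figures from the text:

```python
# Illustrative arithmetic for the North American digital hierarchy
# described above; the names here are ours, not any vendor API.

DS0_KBPS = 64                      # one DS0 channel

ds1_channels = 24                  # DS1 (T-1) = 24 x DS0
ds2_channels = 4 * ds1_channels    # DS2 = 4 x DS1 = 96 x DS0
ds3_channels = 7 * ds2_channels    # DS3 (T-3) = 7 x DS2 = 672 x DS0

print(ds3_channels)                # 672 DS0 channels in a DS3
print(ds3_channels * DS0_KBPS)     # 43008 kbps total DS3 payload

# Fractional T-1: a 384 kbps service is simply six DS0s on a full T-1.
print(384 // DS0_KBPS)             # 6 channels
```

Running the sketch confirms the figures quoted earlier: 672 DS0 channels and 43,008 kbps of DS3 payload.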
  All this potential for faster and faster throughput does not always materialize. Imagine that a company needs to perform a mission-critical file transfer across the Atlantic five times a day. The file transfer uses TCP as the layer 4 protocol to guarantee correct sequencing of packets and to guarantee delivery. The file that is being transferred is getting bigger and taking longer to transfer, so the company is prepared to spend money to upgrade the link to speed up the file transfer. The transatlantic link is currently running at 384 kbps, and the plan is to upgrade the link to 512 kbps. Will this speed up the transfer? It depends.  
  TCP works on the basis of requiring from the receiving device an acknowledgment confirming that the packets sent have arrived safely before it will send more packets. The number of packets TCP will send before it stops and waits for an acknowledgment is defined by the Window size.  
  Let's say the Window size is set initially at 7500 bytes. Now, if you measure the round-trip delay across the Atlantic, it normally will come to around 160 milliseconds (0.16 seconds). So we have to ask ourselves, "How long does it take to clock 7500 bytes onto the link?" If it takes less than 160 milliseconds, the sending device stops transmitting and waits for an acknowledgment from the receiver before it will send any more packets. If this occurs, clearly the full bandwidth is not being utilized. So let's work out what will happen.  
  Multiplying 7500 by 8 bits per byte yields 60,000 bits. A 384,000 bps link will take 0.156 seconds (60,000/384,000) to clock this number of bits onto the line. You can see that if the round-trip time is 0.16 seconds, the transmitter already will have been idle for about 0.004 seconds before it transfers any more data. Increasing the speed of the link to anything above 384 kbps means that the sending device will just spend more time idle, waiting for acknowledgments, and the speed of the file transfer will not improve.  
  The only way to improve the effective throughput if a higher-speed line is installed is to increase the Window size. If this value is changed on one machine, it must be changed on all other machines with which the machine communicates, which might be something of an implementation challenge.
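  The reasoning above is the classic bandwidth-delay product calculation, and it can be sketched in a few lines. The figures (7500-byte window, 0.16-second round trip) come from the example in the text; the function names are ours, purely for illustration:

```python
# Sketch of the TCP window / bandwidth-delay arithmetic from the text.

def serialization_time(window_bytes, link_bps):
    """Seconds needed to clock the whole window onto the line."""
    return window_bytes * 8 / link_bps

def window_to_fill(link_bps, rtt_seconds):
    """Window (bytes) needed to keep the link busy for one round trip."""
    return link_bps * rtt_seconds / 8

rtt = 0.16      # transatlantic round-trip delay, seconds
window = 7500   # TCP window, bytes

# At 384 kbps the window takes 0.15625 s to send, so the sender
# idles roughly 4 ms per window waiting for the acknowledgment.
print(serialization_time(window, 384_000))

# To keep a 512 kbps link busy, the window would have to grow
# to 10,240 bytes (512,000 x 0.16 / 8).
print(window_to_fill(512_000, rtt))
```

The second calculation shows why the link upgrade alone buys nothing: unless the window grows from 7500 to roughly 10,240 bytes, the 512 kbps line sits idle for even more of each round trip.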
Summary  
  This chapter examined the underlying technological operation of popular WAN protocols and discussed how to configure these protocols on  
  Cisco router interfaces. The public data network technologies examined were frame relay, SMDS, and X.25. The point-to-point protocols illustrated were SLIP, asynchronous and synchronous PPP, SDLC, and ISDN. The chapter concluded with a brief discussion of additional WAN technologies, such as Cisco serial interfaces, and the different digital line types that are commonly available.  
Cisco TCP/IP Routing Professional Reference
ISBN: 0072125578
Year: 2005