CISCO NETWORK DESIGNS


Designing networks is largely a matter of making choices. Most of the choices have to do with selecting the right technologies and products for the job. Even design elements over which you have no control may still leave choices to make. For example, if the company's art department uses AppleTalk and has no intention of changing, you must decide whether to run it over multiprotocol links or break it off into one or more AppleTalk-only LAN segments.

Once the present and future needs of the enterprise have been researched and documented, the next step is to choose technologies for various functional areas:

  • Backbone technology selection A variety of backbone LAN technologies exist, chosen mostly based on the size of the internetwork and its traffic characteristics.

  • Protocol selection It's assumed here that IP is the network protocol, but choices remain as to which routing and other network control protocols to use.

  • Access technology selection A mix of hubs and switches is usually configured to best fit the needs of a workgroup or even a particular host.

After the underlying technologies are chosen, specific products must be configured to run them. After that step, more design work must be done to implement the configuration. For example, an IP addressing model must be configured, a name services subsystem must be set up, routing metrics must be tuned, security parameters must be set, and so on.

Internetwork design takes place at two levels: the campus and the enterprise. Campus designs cover the enterprise's main local network, from the desktop up to the high-speed backbone to the outside. The enterprise level encompasses multiple campus networks and focuses on WAN configurations-whether a private leased-line WAN or a VPN tunneled through the Internet.

Logical Network Design

An internetwork design is defined by both a physical and a logical configuration. The physical part deals with topology layout, hardware devices, networking software, transmission media, and other pieces. Logical configuration must closely match the physical design in three areas:

  • IP addressing A plan to allocate addresses in a rational way that can conserve address space and accommodate growth

  • Name services A plan to allow hosts and domains to be addressed by symbolic names instead of dotted-decimal IP addresses

  • Protocol selection Choosing which protocols to use, especially routing protocols

Internetwork design should always start with the access layer, because higher-level needs cannot be addressed until the device and user population are known. For example, estimating capacity is virtually impossible until all hosts, applications, and LAN segments have been identified and quantified, and most of these elements reside in the access layer.

From a practical standpoint, the three logical design elements of addressing, naming, and routing are good first steps in nailing down how the physical hardware should be laid out. Each of the three requires forethought and planning.

IP Addressing Strategies

The number of available addresses is called address space. Enterprises use various addressing schemes to maximize address space within the block of IP addresses assigned to them by their ISPs. Various addressing strategies have been devised, not only to maximize address space, but also to enhance security and manageability.

Private IP Address Blocks An enterprise receives its public IP address from the Internet Assigned Numbers Authority (IANA). The IANA usually only assigns public addresses to ISPs and large enterprises, and then as a range of numbers, not as a single IP address. In actual practice, the majority of enterprises receive their public IP addresses from their ISP. When designing IP, the IETF reserved three IP address ranges for use as private addresses:

  • 10.0.0.0 through 10.255.255.255

  • 172.16.0.0 through 172.31.255.255

  • 192.168.0.0 through 192.168.255.255

These three IP address blocks were reserved to avoid confusion. You may use addresses within any of these reserved blocks without fear of one of your routers being confused when it encounters the same address from the outside, because these are private addresses that never appear on the Internet.

Private IP addresses are assigned by the network team to internal devices. Because they'll never be used outside the autonomous system, private addresses can be assigned at will as long as they stay within the assigned range. No clearance from the IETF or any other coordinating body is required to use private addresses, which are used for these reasons:

  • Address space conservation Few enterprises are assigned a sufficient number of public IP addresses to accommodate all nodes (hosts and devices) within their internetwork.

  • Security Private addresses are translated through PAT or NAT to the outside; a brief configuration sketch of this appears at the end of this discussion. Not knowing the private address makes it tougher for hackers to crack into an autonomous system by pretending to be an internal node.

  • Flexibility An enterprise can change ISPs without having to change any of the private addresses. Usually, only the addresses of the routers or firewalls performing address translation need to be changed.

  • Smaller routing tables Having most enterprises advertise just one or perhaps a few IP addresses helps minimize the size of routing tables in Internet routers, thereby enhancing performance.

This last item perhaps explains why IP's designers settled on a 32-bit address instead of a 64-bit design. Doling out vastly greater address space would discourage the use of private addresses. The use of global IP addresses would be rampant, engorging routing tables in the process. This would create the need for routers to have faster CPUs and lots more memory. Back when IP was designed, during the 1970s, network devices were in their infancy and were quite slow and underconfigured by today's standards.
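Returning to the address translation mentioned under the Security point above, here is a minimal Cisco IOS sketch of PAT (NAT overload) on a border router. The interface names, the 10.1.1.0/24 inside subnet, and the 203.0.113.x public addresses are hypothetical stand-ins, not values taken from any example network in this chapter.

    ! Inside interface carries private addresses (hypothetical numbering)
    interface FastEthernet0/0
     ip address 10.1.1.1 255.255.255.0
     ip nat inside
    !
    ! Outside interface holds the single ISP-assigned public address
    interface Serial0/0
     ip address 203.0.113.2 255.255.255.252
     ip nat outside
    !
    ! Translate any 10.1.1.0/24 source to the serial interface's address,
    ! using port numbers to keep sessions apart (the "overload" keyword)
    access-list 1 permit 10.1.1.0 0.0.0.255
    ip nat inside source list 1 interface Serial0/0 overload

Every host on the private segment reaches the Internet through that one public address, and the private numbering never appears in packets on the outside.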

Obtaining Public IP Addresses Registered IP addresses must be allocated through the nonprofit IANA, which is responsible for ensuring that no two enterprises are assigned duplicate IP addresses. But few enterprises obtain their IP addresses directly from the IANA; most get them indirectly through their ISP.

For example, Tier 1 ISPs, such as UUNET or Sprint, secure large blocks of IP addresses from the IANA. They, in turn, dole them out to Tier 2 ISPs (there are probably dozens in your town alone), who, in turn, assign them to end-user enterprises. Most large companies deal directly with Tier 1 ISPs. IP addresses are doled out in blocks. The bigger your enterprise, the larger the range of IP addresses you should obtain.

Dynamic Addressing Dynamic addressing is a technique whereby end-system hosts are assigned IP addresses at login. Novell NetWare and AppleTalk have had built-in dynamic addressing capabilities from the beginning. That's not the case with IP, however. Remember, desktop protocols such as NetWare IPX were designed with the client-server model in mind, while IP was originally designed to connect a worldwide system: the Internet. IP dynamic addressing came to the fore only in the mid-1980s to accommodate diskless workstations that had nowhere to store permanent IP addresses.

A couple of earlier dynamic IP address assignment protocols led to the development of the Dynamic Host Configuration Protocol (DHCP). DHCP, now the de facto standard, uses a client-server model in which a server keeps a running list of available addresses and assigns them as requested. DHCP can also be used as a configuration tool. It supports automatic permanent allocation of IP addresses to a new host and is even used for manual address assignments as a way to communicate the new address to the client host. Figure 14-7 depicts DHCP's processes.

Figure 14-7: DHCP can dynamically assign IP addresses to end-system hosts

Dynamic allocation is popular because it's easy to configure and conserves address space. DHCP works by allocating addresses for a period of time called a lease, guaranteeing not to allocate the IP address to another host as long as it's out on lease. When the host logs off the network, the DHCP server is notified and restores the address to its available pool.

To assure service, multiple DHCP servers are often configured. When the host logs in, it broadcasts a DHCP discover message across the network. A DHCP server responds with a DHCP offer message or passes the discover request to a backup server. The client accepts the offer by sending the server a DHCP request message, and the server confirms the assignment with a DHCP ACK message.

If the offer is accepted, the server locks the lease in the available address pool by holding the assignment in persistent memory until the lease is terminated. Lease termination occurs when the client logs off the network (accomplished usually by the user turning off his or her PC at day's end). If the identified DHCP server is down or refuses the request, after a preset timeout period, the client can be configured to send a discover request to a backup DHCP server. If the server isn't on the same subnet, a router can be configured as a DHCP relay agent to steer the request message to the LAN segment on which the server resides.
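As a rough illustration of the mechanics just described, the following Cisco IOS sketch defines a DHCP pool on a router acting as the server, plus a helper address on an interface acting as the relay agent. The pool name, subnets, lease period, and server address are hypothetical.

    ! Server side: lease addresses from a private /24 (hypothetical values)
    ip dhcp excluded-address 10.2.2.1 10.2.2.10
    ip dhcp pool FLOOR2-POOL
     network 10.2.2.0 255.255.255.0
     default-router 10.2.2.1
     dns-server 10.1.1.53
     lease 0 8                        ! 0 days, 8 hours
    !
    ! Relay side: an interface on a segment with no local DHCP server
    ! forwards client broadcasts to the server's segment
    interface FastEthernet0/1
     ip address 10.2.3.1 255.255.255.0
     ip helper-address 10.2.2.5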

Domain Name System

As discussed in our review of internetworking fundamentals in Chapter 2, people almost always reach network nodes by name, not by address. Think about it-how many times have you typed a dotted-decimal IP address into the address field in your browser? In most cases, you type a URL instead or simply click one sitting beneath a hypertext link on a Web page.

The service used to map names on the Internet is called the Domain Name System (DNS). A DNS name has two parts: host name and domain name. Taking toby.velte.com as an example, toby is the host (in this case, a person's PC), and velte.com is the domain. Up until 2000, domain names had to be registered with InterNIC (which stands for Internet Network Information Center), a registry operated under U.S. government contract. However, that responsibility was transferred to several private companies, taking the government out of the URL business.

The IETF has specified that domain name suffixes be assigned based on the type of organization the autonomous system is, as listed in Table 14-2.

Table 14-2: IETF Domain Name Suffixes

  Domain     Autonomous System Type
  .com       Commercial company
  .edu       Educational institution
  .gov       Governmental agency
  .org       Nonprofit organization
  .net       Network provider
  .biz       Restricted to businesses

There are also geographical top-level domains defined by country-for example, .fr for France, .ca for Canada, .de for Germany (as in Deutschland), and so on. For domain names to work, they must, at some point, be mapped to IP addresses so that routers can recognize them. This mapping is called name resolution-a task performed by name servers. The Domain Name System distributes its database across many servers in order to satisfy resolution requests. A large enterprise would distribute its DNS database throughout its internetwork topology. People can click their way around the Internet because DNS databases are distributed worldwide. Figure 14-8 depicts the name services process.

Figure 14-8: Domain names must be resolved by a name server

When a client needs to send a packet, it must map the destination's symbolic name to its IP address. The client must have what's called resolver software configured in order to do this. The client's resolver software sends a query to a local DNS server, receives the resolution back, writes the IP address into the packet's header, and transmits. The name-to-IP mapping is then cached in the client for a preset period of time. As long as the client has the mapping for a name in cache, it bypasses the query process altogether.
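Routers themselves act as resolver clients when they need to look up names. The following minimal IOS sketch points a router at two name servers; the domain and server addresses are assumptions.

    ! Enable DNS lookups and point the resolver at two name servers
    ip domain-lookup
    ip domain-name velte.com
    ip name-server 10.1.1.53
    ip name-server 10.1.2.53

End-system hosts carry the equivalent settings in their TCP/IP stack configuration, usually delivered by DHCP.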

Name Server Configuration Many internetworks have multiple DNS servers for speed and redundancy, especially larger autonomous systems on which hosts frequently come and go. Usually, name services are handled from the central server within the internetwork. For example, Windows 2000/2003 networks have so-called domain controller (DC) servers, which are responsible for various housekeeping duties, including logon requests.

Besides DNS, the other two major naming services are the Windows Internet Name Service (WINS) and Sun Microsystems' Network Information Service (NIS). While DNS is optimized for Internet mappings, WINS and NIS manage name services at the internetwork level. WINS servers use DHCP to field requests, because DNS doesn't lend itself to handling dynamic names. (It wants them stored permanently.) NIS performs a similar duty among UNIX hosts.

DNS is an important standard. You're able to click between hosts throughout the world because there are hundreds of thousands of DNS servers across the globe, exchanging and caching name mappings across routing domains so that it takes you a minimum of time to connect to a new Web site.

Campus Network Designs

The term campus network is a bit of a misnomer. What's meant is any local internetwork with a high-speed backbone. For example, the local network of a company's headquarters located entirely in a skyscraper qualifies as a campus network. The term has had such heavy use in computer marketing that it's stuck as the term for medium-to-large local networks. Whatever it's called, several models have been developed for how to configure campus networks. We'll review them here.

The Switch-Router Configuration

The so-called switch-router configuration covers the access and distribution layers of the three-layer hierarchical network design model. The two layers are considered together because the distribution routers are generally located in the same building as host devices being given network access.

Access-Layer Configuration The access layer of the three-layer hierarchical model is largely a function of the so-called switch-router configuration. The major exception to this is remote access, covered later in this chapter (see the section titled "Connecting Remote Sites"). Configuring the access layer is mostly a matter of wiring together hosts in a department or floor of a building. As for physical media, we'll assume that Category 6 unshielded twisted pair (UTP) cable is used to wire hosts into access devices.

An important consideration when configuring the access and distribution layers is what type of network you'll be deploying. Most access-layer LAN segments being designed today run over the Fast Ethernet specification, the 100-Mbps variant of the Ethernet standard.

These decisions dictate what Cisco products to configure and, to some extent, how to lay out your topology.

Selecting Access-Layer Technology Nowadays, if you have a choice, it's pretty much a given that you'll use Fast Ethernet or even Gigabit Ethernet for the access layer. It's fast, cheap, and the talent pool of network administrators knows this LAN specification best. If you have 10-Mbps Ethernet, for instance, your choices are somewhat more limited. You also must be careful that any existing cable plant meets the physical requirements specified by the LAN technology.

Exactly how you lay out the access layer is a little more complicated. If you've gathered the needs-analysis information discussed earlier, that data will go a long way toward telling you two important things:

  • Workgroup hierarchy Large homogeneous workgroups lend themselves to flat switched networks. For example, large customer service departments or help desks tend to connect to a fairly consistent set of hosts to run a limited set of applications. These shops are great candidates for flat (non-VLAN) switched networks.

  • Traffic loads If traffic volumes will be heavy and QoS policies stringent, you might want to look at a VLAN switched network, or at least high-bandwidth routed network configurations. (A brief port-level sketch follows this discussion.)

As you answer these two questions for various areas across the topology, the configuration begins to take shape. Quite often, this process is iterated floor by floor and building by building. This shouldn't surprise you. After all, networking isn't the only field of endeavor that is geographical in nature. So is operations management; it usually makes sense for managers to group certain types of workers and/or certain types of work tasks into one physical location.
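If the VLAN route is taken, the per-port work on an access switch is small. The sketch below, in Catalyst IOS syntax, creates one VLAN and binds a host-facing port to it; the VLAN number, name, and interface are hypothetical.

    ! Define the VLAN, then assign a host port to it (hypothetical values)
    vlan 110
     name CUST-SERVICE
    !
    interface FastEthernet0/12
     switchport mode access
     switchport access vlan 110
     spanning-tree portfast           ! host port: skip normal STP delays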

Physical Layout The classical access-layer topology is the data closet-MDF layout. A data closet (also called a wiring closet or phone closet) is a small room housing patch panels connecting hosts to hubs or access switch ports. The patch panel is where networks start. A patch panel is a passive device with rows of RJ-45 jacks similar to the RJ-11 jacks for telephones. The host device's unshielded twisted pair (UTP) cable plugs into one jack, and a cable from another jack plugs into the switch port. This modular arrangement gives flexibility in moving devices between ports.

Signals go through the switch by going out its uplink port to connect to the building's riser. Riser refers to the bundle of individual cables running from each floor down to a termination point. An uplink connects the switch "up" in the logical sense, in that the riser is headed toward a larger piece of equipment-usually a router or a LAN switch.

MDF stands for main distribution facility-usually a room in a secure location on the building's first floor. The MDF serves as the termination point for the wiring emanating from the data closets, often equipment for both voice and data. The trend has been to locate the MDF in the enterprise's computer room, if there's one in the building. Depending on the building's setup, the backbone travels either through holes punched through the floors or through the elevator shaft.

A riser's medium is almost always fiber-optic cable in larger buildings. The main reason for using fiber is that it can carry data more than 100 meters and is unaffected by electrical noise in buildings.

The Switch Configuration Various rules of thumb are applied when configuring the access layer. UTP cable can span up to 100 meters from the data closet. This is almost always more than enough on the horizontal plane. (Few work areas are wider than a football field is long.) If the data closet is located at the center of a floor, the effective span would be 200 meters. As shown in Figure 14-9, not all buildings are vertical. Many are large horizontal structures of one or two floors, such as manufacturing plants and warehouses. For very large floors, the practice is to place data closets on either side.

Figure 14-9: The classical router-switch configuration employs smaller switches connected to a large switch, which is then connected to a router

From a logical standpoint, it doesn't make sense to place a router on every floor. Doing so would be prohibitively expensive. The strategy, then, is to minimize the number of router interfaces servicing a given number of hosts. Switches fulfill this. Figure 14-9 shows a configuration for a medium-sized company holding a few hundred employees in a building. To connect users on each floor, at least one Cisco Catalyst 3750 is placed in each data closet. Catalyst 3750 models connect 12, 24, or 48 hosts per unit and are stackable up to nine units per stack. The size of the stack depends on the number of employees on the floor. Which particular Catalyst 3750 model you use depends on the population density: some models support 12 ports for a stack density of 108 ports, and others support 48 ports for a density of 432 ports. If the population goes beyond 432, simply put another stack in the data closet to increase the number of ports.
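One convenience of the Catalyst 3750 stack is that its interface names encode the stack member (member/module/port), so a single configuration session covers every unit in the closet. The following minimal sketch, with hypothetical port and VLAN numbers, brings up the same port on two different stack members.

    ! Port 5 on stack member 1
    interface FastEthernet1/0/5
     switchport mode access
     switchport access vlan 20        ! hypothetical floor VLAN
    !
    ! Port 5 on stack member 3
    interface FastEthernet3/0/5
     switchport mode access
     switchport access vlan 20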

The bottom-left area of Figure 14-9 shows how a riser is not a backbone in the proper sense of the term. The uplink running out of the Catalyst 3750 on the ground floor expands the riser bundle to a total of five fiber cables, which are essentially long wires used to avoid having to put a terminating device on each floor. The backbone is defined logically, so you can think of riser cables as "feeder wires" instead of as a backbone.

The LAN technology throughout the example building is Fast Ethernet, which runs at 100 Mbps. That speed is plenty for connecting most individual host devices. In practical terms, this means our riser is heading into the MDF with 500 Mbps raw bandwidth, so a fast device is needed to handle the connections. We've configured a Cisco Catalyst 6506 switch for the job. With a 32-Gbps backplane, the Catalyst 6506 has plenty of horsepower to maintain satisfactory throughput for traffic from the five LAN segments. This box has six module slots; the supervisor engine occupies one, and a single 12-port 100FX card in another will handle all five LAN segments, leaving plenty of room for growth. A second line-card slot is used to connect to the outside, leaving three open slots.

Figure 14-9 draws out the inherent advantages of LAN switching. You'll remember from Chapter 5 that a switch is roughly ten times quicker when it has a MAC address in its switching table. Our example company has 300 employees, and the Catalyst 6506 has more than adequate memory and backplane speed to handle a switching table of that size. (It can handle thousands.) Because the switch is talking to all 300 hosts, it has their MAC addresses readily available.

The Access Switch Configuration We've mentioned that hubs have given way to switches. What we're talking about here is access switching, as opposed to the LAN switching example in Figure 14-9. An access switch connects hosts to a LAN segment. This extends switched bandwidth all the way out to the desktop or server. Figure 14-10 shows a typical access switch configuration.

Figure 14-10: Access switches replace hub ports to connect bandwidth-hungry hosts

It wouldn't be practical to run a fiber-optic cable all the way down to the MDF for every switched host. As Figure 14-10 shows, an interim step can be configured using an access switch such as the Cisco Catalyst 2820, able to connect up to 24 devices. A lower-end switch isn't used here because it doesn't have an FX port for connecting to a fiber-optic riser.

It should be pointed out that in high-density environments, users are faced with either configuring high-end Catalyst switches in the data closet or running riser cables to the MDF.

The Switch-Router Configuration To be able to internetwork, users need to be routed at some point. The standard practice is to configure a local router in the MDF room. That way, users inside the enterprise are connected to the enterprise internetwork for intramural communications, as well as to the firewall to access the Internet.

Figure 14-11 zooms in on our example company's MDF room. A Cisco 3845 router is configured in this situation because it has eight module slots that can accommodate LAN or WAN modules. One slot is filled with a one-port 100BaseTX LAN module to connect the Catalyst 6500 switch; the other houses a one-port T1 WAN module connecting the building to the outside world.

Figure 14-11: Switched networks need routers to talk to the outside world
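A bare-bones IOS sketch of the two modules just described might look like the following. The interface numbers, addresses, and next hop are hypothetical, and the exact serial setup depends on the T1 module actually installed.

    ! LAN side: 100BaseTX link to the building's Catalyst 6500
    interface FastEthernet1/0
     description Link to Catalyst 6500
     ip address 10.10.1.1 255.255.255.0
    !
    ! WAN side: T1 serial link to the provider
    interface Serial2/0
     description T1 to ISP
     ip address 203.0.113.10 255.255.255.252
    !
    ! Send traffic for unknown destinations toward the provider
    ip route 0.0.0.0 0.0.0.0 203.0.113.9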

If you're thinking that with all the bandwidth floating around the building, a mere 1.544-Mbps pipe to the outside might not provide sufficient capacity, you're catching on. A T1 link indeed might not be enough, depending on how much of the local traffic load flows to the outside.

The New 80/20 Rule Remember the 80/20 rule discussed earlier in the chapter (see the section "The Three-Layer Hierarchical Design Model")? The traditional dictum has been that only 20 percent of the traffic goes to the outside. But things have changed. Now the gurus are talking about the "new 80/20 rule," also known as the 20/80 rule, where as much as 80 percent of traffic can go to the outside as users reach into the Internet to download files, talk to other parts of the enterprise intranet, or even deal with trading partners through an extranet.

The single biggest driver turning the 80/20 rule on its head is e-commerce, where networked computers are taking over traditionally human-based sales transactions. Web sites such as Amazon.com and E*TRADE are famous for cutting out the middleman, but electronic business-to-business trading-called electronic data interchange (EDI)-is generating more IP traffic with each passing day.

Assuming the new 80/20 rule holds for our example enterprise, the MDF room might be configured along the lines of Figure 14-12, where a much fatter pipe is extended to the outside in the form of a T3 line-a 45-Mbps, leased-line, digital WAN link medium.

Figure 14-12: Heavy Internet use is driving enterprises to install bigger edge routers

Now the router is bigger and the switch is smaller. If the users are talking to the outside 80 percent of the time, there's less need to switch traffic within the building. We've configured a Cisco Catalyst 2900 switch instead of the Catalyst 6500 because there's less LAN switching work to do. The high-end Cisco 7206 router is configured for greater throughput capacity, with a faster processor and a six-slot chassis.

At 45 Mbps, T3 runs nearly 30 times faster than a T1 line. More and more enterprises are turning to T3 to make the point-to-point connection to their ISPs. Few, however, need all that capacity, so most ISPs resell a portion of the bandwidth to individual customers according to their needs, a practice called fractionalizing, in which the customer signs up for only a fraction of the link's capacity.

Choosing a High-Speed Backbone

Backbones are used to connect major peer network nodes. A backbone link connects two particular nodes, but the term backbone often is used to refer to a series of backbone links. For example, a campus backbone might extend over several links.

Backbone links move data between backbone devices only. They don't handle traffic between LAN segments within a site. That's done at the distribution layer of the three-layer hierarchical model by LAN switches and routers. Backbones concentrate on moving traffic at very high speeds over land.

Campus backbones obviously cover a short distance, usually through underground fiber-optic cabling. WAN backbones-used by big enterprises and ISPs-move traffic between cities. Most WAN backbone links are operated by so-called Internet backbone providers, although many large enterprises operate their own high-speed long-distance links. WAN links run over high-speed fiber-optic cable links strung underground, on electrical pylons, and even under oceans. Satellite links are also becoming common. Regardless of transport medium and whether it's a campus or WAN backbone, they share the following characteristics:

  • Minimal packet manipulation Such processing as access control list enforcement and firewall filtering is kept out of the backbone to speed throughput. For this reason, most backbone links are switched, not routed.

  • High-speed devices A relatively slow device like a Cisco 4500 would not be configured onto a high-speed backbone. The two ends of a backbone link are generally terminated by devices in the Catalyst 6500 class or faster.

  • Fast transport Most high-speed backbones are built atop transport technology of 1 Gbps or higher.

The two main backbone technologies now are ATM and Gigabit Ethernet. FDDI is widely installed, but with a total capacity of only 100 Mbps, few new FDDI installations are going into high-speed backbones.

ATM Backbones Asynchronous Transfer Mode (ATM) uses a fixed-length cell format instead of the variable-length packets Ethernet uses. The fixed-length format lends itself to high-speed throughput, because the hardware always knows exactly where each cell begins. For this reason, ATM has a favorable ratio of payload to network control overhead traffic. This architecture also lends itself to QoS-a big plus for operating critical backbone links.

Figure 14-13 shows a campus backbone built over ATM. The configuration uses Catalyst 6509 LAN switches for the outlying building and a high-end Catalyst 8500 Multiservice switch router to handle traffic hitting the enterprise's central server farm.

Figure 14-13: An ATM campus backbone can connect central resources

A blade is an industry term for a large printed circuit board that is basically an entire networking device on a single module. Blades plug into chassis slots. For a Catalyst switch to talk in ATM, the appropriate adapter blade must be configured into the chassis. The LightStream 1010 blade is used here because it's designed for short-haul traffic-there are other "edge" ATM blades for WAN traffic. One reason for this is to allow Cisco to support different technologies in a single product.

Cisco ATM devices use the LAN emulation (LANE) adapter technology to integrate with campus Ethernet networks.

Switched WAN backbones run over very high-speed fiber-optic trunks running the SONET specification. Most new trunks being pulled are OC-48, which run at about 2.5 Gbps. OC stands for Optical Carrier, and SONET stands for Synchronous Optical Network. This is a standard developed by Bell Communications Research for very high-speed networks over fiber-optic cable. The slowest SONET specification, OC-1, runs at 52 Mbps-about the same speed as T3. OC SONET is an important technology, because it represents the higher-speed infrastructure "pipe" the Internet needs to continue expanding. We mention this here, because ATM and Gigabit Ethernet R&D efforts are carried out with the SONET specification in mind, and it's the presumed WAN link transport.

Gigabit Ethernet Backbone Although Gigabit Ethernet is a much newer technology than ATM, many network managers are turning to it for their backbone needs instead of ATM. Figure 14-14 shows that a Gigabit Ethernet backbone can be configured using the same Catalyst platforms as for ATM. This is done by configuring Gigabit Ethernet blades instead of ATM blades. Note, also, that the same fiber-optic cabling can be used for Gigabit Ethernet, but the adapters must be changed to those designed to support Gigabit Ethernet instead of ATM.

Figure 14-14: Gigabit Ethernet can also be run in high-end Catalyst switches

As you might imagine, the technology-specific blades and adapters represent the different electronics needed to process either variable-length Ethernet packets or fixed-length ATM cells.
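From the configuration side, a Gigabit Ethernet backbone port on an IOS-based Catalyst is simply a trunk on a gigabit interface, as in the minimal sketch below; the interface and VLAN numbers are hypothetical.

    ! Backbone-facing gigabit port carrying several VLANs between buildings
    interface GigabitEthernet1/1
     description Campus backbone link to Building B
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk allowed vlan 10,20,30   ! hypothetical VLAN list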

Connecting Remote Sites

There are two kinds of remote locations: the branch office and the small office/home office (SOHO). The defining difference between the two is the type of connection. Because they have only one or two users online at any given moment, SOHO sites use dial-in connections, while branch sites use some form of a dedicated circuit. Three major remote connection technologies are configured here. You might want to flip back to Chapter 2 to review how these respective technologies work.

Frame Relay Frame Relay is ideal for "bursty" WAN traffic. Dedicated leased lines such as T1 or T3, by contrast, only make economic sense if they're continually used. Frame Relay solves that problem by letting users share WAN infrastructure with other enterprises. Frame Relay can do this because it's a packet-switched data network (PSDN) in which end-to-end connections are virtual. You only need to buy a local phone circuit between your remote site and a nearby Frame Relay drop point. After that point, your packets intermix with those from hundreds of other enterprises.

Normally, a device called a FRAD is needed to talk to a Frame Relay network. FRAD stands for Frame Relay Assembler/Disassembler, which parses data streams into the proper Frame Relay packet format. But using a mere FRAD only gets you connected and offers little in the way of remote management, security, and QoS. Cisco has built Frame Relay capability into many of its routers to provide more intelligence over Frame Relay connections. Figure 14-15 shows a typical Frame Relay configuration using Cisco gear.

Figure 14-15: Frame Relay-capable routers are superior to FRADs for managing links

Because Frame Relay uses normal serial line connections, no special interfaces need be installed in a router to make it Frame Relay–compatible. The Cisco 2600 router is a cost-effective solution for the stores in the example in Figure 14-15, because they have sufficient throughput capacity to handle the traffic loads these remote locations are likely to generate.
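In IOS, attaching a store router's serial port to the Frame Relay cloud takes only a few lines, along the lines of the hypothetical sketch below; the real DLCI and addressing would come from the carrier and the network plan.

    ! Serial link into the local Frame Relay drop point
    interface Serial0/0
     encapsulation frame-relay
    !
    ! Point-to-point subinterface for the PVC back to headquarters
    interface Serial0/0.16 point-to-point
     ip address 10.200.16.2 255.255.255.252
     frame-relay interface-dlci 16    ! DLCI assigned by the carrier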

Integrated Services Digital Network Integrated Services Digital Network (ISDN) can be used for either dial-in or dedicated remote connections. It provides much more bandwidth than normal analog modem telephone connections, but must be available from a local carrier to be used on your premises. ISDN has channel options called BRI and PRI. BRI has two so-called B-channels to deliver 128-Kbps bandwidth, and is usually used for dial-in connections from home or small offices, or for backup in case the main connection fails. PRI packages 23 B-channels (plus a signaling D-channel), for about 1.5-Mbps bandwidth, and is generally used for full-time multiuser connections; it is carried over a T1 circuit.

Figure 14-16 shows a typical Cisco ISDN configuration. The Cisco 800 Series routers are targeted to connect ISDN users. The 836 has four ports, and the 801 has one port. Since the 836 is able to handle VPN needs, it is also deployed as the router in the lower-right area of Figure 14-16. In this case, the router is able to form a secure tunnel through the Internet.

Figure 14-16: ISDN supports both dial-in and dedicated circuit connections
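A dial-on-demand BRI configuration on one of these small routers typically looks something like the sketch below. The switch type, dialed number, and addressing are assumptions that the local carrier and the headquarters site would actually dictate.

    ! ISDN BRI interface that dials headquarters on demand (hypothetical values)
    isdn switch-type basic-ni
    !
    interface BRI0
     ip address 10.50.1.2 255.255.255.252
     encapsulation ppp
     dialer string 5551234            ! headquarters access number
     dialer-group 1
     ppp authentication chap
    !
    ! Define which traffic is "interesting" enough to bring the line up
    dialer-list 1 protocol ip permit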

Digital Subscriber Line Digital Subscriber Line (DSL) is competing with ISDN for the small office/home office market. To use DSL, you must be serviced by a local telephone switch office that supports DSL and be within a certain distance of it-usually a few miles.

There are different types of DSL, including:

  • ADSL Characterized by asymmetrical data rates, where more data comes down from the phone company than the user can send back up. This means that ADSL should be selectively used where traffic characteristics match this constraint-in other words, where the user does a lot of downloading but not a lot of uploading. This is the case with most Internet users, though, and ADSL has become quite popular where the phone companies offer it.

  • SDSL Characterized by symmetrical data rates, where the same amount of data goes either way. This is usually a more expensive solution than ADSL.

  • IDSL A slower symmetrical solution for locations that are not close enough to the local telephone switch office, it offers a maximum throughput of 144 Kbps in both directions.

The configuration in Figure 14-17 shows a Cisco 837 ADSL (Asymmetric Digital Subscriber Line) router. The Cisco 837 looks like a cable TV decoder, but has an Ethernet interface on the back to connect local users.

Figure 14-17: The Cisco 837 ADSL router is ideal for DSL connections
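On an 837-class router, the ADSL line usually terminates in a PPPoE (or PPPoA) dialer. The sketch below is a hypothetical PPPoE variant; the VPI/VCI pair and the login credentials would come from the DSL provider.

    ! ATM interface riding the DSL line; the PVC numbers come from the provider
    interface ATM0
     no ip address
     pvc 8/35
      pppoe-client dial-pool-number 1
    !
    ! Dialer interface carries the PPP session and the negotiated public address
    interface Dialer1
     ip address negotiated
     encapsulation ppp
     dialer pool 1
     ppp chap hostname user@isp.example    ! hypothetical account
     ppp chap password 0 secret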



