Case Study: Analyzing Service Requirements for Acme, Inc.


To analyze the service requirements further, Acme has decided to study, as a reference case, an enterprise that has already migrated from Layer 2 to Layer 3.

This case study evaluates MPLS network service providers and the issues faced by an enterprise customer in transitioning from a circuit- and Layer 2-based network to a service provider MPLS network. It is based on the experiences of a large enterprise customer in the aerospace and defense industries.

The purpose of the case study is to provide a template for other enterprise customers facing similar evaluations and to provide insight into implementation issues that came to light during the transition phase.

A secondary goal is to allow service providers to better understand the criteria that enterprise customers typically use when evaluating an MPLS network.

DefenseCo Inc. has a large national and international network in place, consisting of more than 300 sites in more than 25 countries. The core of the network is a combination of Layer 2 technologies. Within the U.S., the majority of the network is Switched Multimegabit Data Service (SMDS)-based, with smaller segments based on ATM and Frame Relay.

DefenseCo distributed a request for proposal (RFP) to 15 network service providers. The responses to the RFP were evaluated, and three service providers were selected to be tested during the technical evaluation. DefenseCo's primary goal is to replace the SMDS network, which will expire in 18 months. Secondary goals include moving video from ISDN to the MPLS network, reducing network costs, and improving the network return on investment (ROI).

Layer 2 Description

DefenseCo's current network is composed of Cisco routers, including 2600s, 3600s, 7200s, and 7500s in various configurations. Major hubs within the U.S. are in a full-mesh configuration. SMDS is provided by the current service provider between hub nodes. Branch offices are connected to hubs by Frame Relay in a hub-and-spoke configuration. Certain sections of the network have an ATM backbone, resulting from the integration of acquisitions, and are used for other special requirements. The current network provides data transport between all sites but does not provide voice or video capabilities.

Existing Customer Characteristics That Are Required in the New Network

DefenseCo's backbone is essentially a fully meshed backbone between all major hubs. Because of the nature of SMDS, routing in the backbone can be influenced by configuring paths between hubs. SMDS constructs a Layer 2 network that determines the neighbors at Layer 3. In a number of instances, DefenseCo uses this capability to segment parts of the network or to provide special routing in areas of the network. Layer 3 routing within the DefenseCo backbone is based on Enhanced Interior Gateway Routing Protocol (EIGRP) and uses private addressing. One instance requiring special routing is Internet access. DefenseCo's backbone has three Internet access points in three different regions.

Because of the fully meshed nature of the SMDS backbone, remote sites are always within one hop of the nearest Internet access point. A specific default route in the remote site configuration points to the nearest Internet access point. Internet access points in the DefenseCo backbone translate from private to public addressing and provide firewall protection between the backbone and the public Internet. The same Internet access points are retained with the MPLS backbone. However, in the MPLS network, routing distances change. In some cases, the default route points to the wrong Internet point of presence (PoP) for a remote site. The capabilities to direct traffic to a specific Internet PoP and to provide redundancy in the event of an Internet PoP failure are required in the new MPLS backbone.

Acquisitions also require special routing. DefenseCo has acquired a number of other companies, each of which has its own network and access to the Internet. In a number of cases, the IP addressing of the acquired company overlaps the address space of the DefenseCo corporate backbone. DefenseCo is in the process of integrating these network segments into the corporate backbone. Using SMDS, these network segments retain their private addressing until they are incorporated into the corporate backbone. The same capability for retaining private addressing space until segments can be incorporated into the corporate backbone must be retained.

DefenseCo's Backbone Is a Single Autonomous System

The same single autonomous system (AS) is required in the new MPLS backbone. DefenseCo's backbone exhibits less than 100 ms of round-trip delay between any two sites and minimal jitter, thanks to the SMDS backbone. In the MPLS backbone, both the delay and jitter characteristics must be maintained or improved for the real-time traffic class. For the other traffic classes, the maximum delay parameters must be maintained. DefenseCo's requirements are as follows:

  • Access to the Internet at three major points, with a need to maintain ingress and egress through these same points

  • Private addressing, potentially overlapping due to acquisitions

  • Single autonomous system across the MPLS backbone

  • Special routing due to acquisitions

  • Maximum of 100-ms round-trip delay, with minimum jitter for the real-time traffic class

  • Maximum of 100-ms round-trip delay for all other traffic classes

Reasons for Migrating to MPLS

The first priority for DefenseCo was to find a replacement for the SMDS backbone.

The SMDS technology was nearing the end of the product cycle for the service provider, and DefenseCo received notice that the service would be terminated within 18 months. In evaluating replacement technologies, DefenseCo looked at several critical factors.

The SMDS backbone economically provided a full-mesh configuration. With full routing updates exchanged between DefenseCo's CE routers and the MPLS service provider backbone, DefenseCo was able to economically retain a fully meshed network. Furthermore, by categorizing traffic into four levels of service, DefenseCo was able to take advantage of the benefits of QoS and at the same time gain maximum benefit by dynamically allocating the total bandwidth. The equivalent solution in an ATM or Frame Relay network would have incurred higher costs for a full mesh of PVCs and might have required multiple PVCs between sites for differing levels of QoS. Bandwidth between sites would have been limited to a static allocation configured for each of the PVCs, thus further reducing network efficiency. Ongoing operational expenses for adding and deleting sites, and keeping bandwidth allocations in balance with traffic demands, would also have increased. The total cost of ownership for the network would have increased significantly.

Finally, almost all of DefenseCo's sites are equipped for videoconferencing. In the legacy network, videoconferences were set up using ISDN and a videoconference bridge. DefenseCo plans to move to H.323 videoconferencing using QoS across the MPLS backbone. Although some router upgrades are required, the cost savings of converging video with data on the MPLS backbone, compared to the current videoconferencing costs, will drive a significant ROI benefit from the network.

Evaluation Testing Phase

DefenseCo's goal in evaluating service providers was to understand, in as much detail as possible, how the service provider network would support the projected traffic load and how the network would react under various adverse conditions of failure and congestion. Each of the three service providers on the short list provided a network for evaluation. The test networks were constructed using Los Angeles, New York, and Dallas as hub cities. Each hub had a T1 connection from the service provider edge router to the CE router.

To evaluate network capabilities, DefenseCo profiled the projected traffic mix and then simulated as closely as possible real network traffic conditions. A baseline network performance was established, and then various congestion and failure conditions were simulated to determine deviation from the baseline and time to recovery after service was restored. Details on the packet size and traffic mix are included in the "Evaluation Tools" section later in this chapter.

In evaluating service provider networks, DefenseCo found two models for the MPLS backbone. In the first model, used by most of the MPLS service providers, DefenseCo provided the customer premises equipment (CPE) router that served as the CE router. In this model, DefenseCo installed, configured, and maintained the CE router and was responsible for the first level of troubleshooting within the network. The service providers called this model the unmanaged service because DefenseCo was responsible for managing the CE router. Sample configurations with location-specific data were provided to DefenseCo to coordinate the configurations between the PE and CE routers.

In the second model, the service provider installed and maintained the CE router, and DefenseCo's interface to the network was an Ethernet port. In this model, the service provider was responsible for all network installation, configuration, and troubleshooting. The service providers called this model the managed service because the service providers were responsible for managing both the PE and CE routers.

DefenseCo preferred the first model, in which it supplied the CE router. The benefits of lower cost and greater visibility into the network, along with the ability to quickly resolve problems through DefenseCo's own IT staff, were viewed as critical factors. DefenseCo required all the sites within the network to function as a single AS.

Routing information is exchanged between the DefenseCo CE router and the service provider PE router using external BGP (eBGP). To allow DefenseCo to use a single AS for all sites, the service provider BGP configuration required the neighbor as-override command to be enabled. This command causes the PE router to replace the customer AS number with its own in the AS_PATH of outgoing updates, so that CE routers do not reject routes originated in their own AS through eBGP loop detection. Without the neighbor as-override command in the PE router, these routes would be dropped. This worked well in the model in which DefenseCo supplied the CE router. However, internal BGP (iBGP) and additional configuration were required in the model in which the service provider supplied the CE router. The following sections provide details on the tests DefenseCo performed, an overview of the results, and some insight into DefenseCo's final vendor selection.
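A minimal sketch of the PE-side configuration described above might look as follows (the AS numbers, VRF name, and addresses are hypothetical, not DefenseCo's or the providers' actual values):

```
! PE router: eBGP session to the DefenseCo CE router
! All names, addresses, and AS numbers are illustrative
router bgp 65000
 address-family ipv4 vrf DefenseCo
  neighbor 192.0.2.2 remote-as 64512
  neighbor 192.0.2.2 activate
  ! Rewrite the customer AS (64512) with the provider AS in outgoing
  ! updates so that CE routers in the same AS at other sites do not
  ! reject the routes through eBGP loop detection
  neighbor 192.0.2.2 as-override
```

Without the as-override line, a CE router at one DefenseCo site would see its own AS number in the AS_PATH of routes originated by other DefenseCo sites and discard them.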

The results presented in this section are typical of the actual results DefenseCo encountered in the test networks. The actual test sets were run over an extended period of time and provided a more extensive set of data, including some anomalous results that required further investigation. The test results included in this section are typical for their respective tests and are derived from the actual DefenseCo data. The anomalous results have not been included. The tests were conducted over a period of 60 days. In response time tests, and other similar tests, data was collected for the entire period. Tests run under specific test conditions were run multiple times for each service provider.

Routing Convergence

DefenseCo tested convergence times to determine the expected convergence in the event of a link failure. Convergence tests were performed by examining routing tables in the CE router. One of the three sites in the test network was then disabled by disconnecting the circuit and disabling the port. The routing tables were again examined to ensure that the disabled site had been dropped from the routing table. The site was then reenabled, and the time for the route to reappear in the routing table was measured. Routing convergence tests were run under both congested and uncongested conditions.

The differences between the managed and unmanaged service models caused disparities in conducting the convergence tests. In the unmanaged service model, DefenseCo supplied the CE router, and eBGP was used to exchange routing information between the CE and PE routers. In the managed service model, the service provider supplied the CE router, and iBGP was used to exchange routing information between the DefenseCo network edge router and the CE router managed by the service provider. Under uncongested, best-case conditions, convergence times ranged from less than a minute to slightly less than 2 minutes. Under congested, worst-case conditions, all three service provider networks converged in less than 2 minutes. The results were within acceptable limits for all three service providers; no further comparisons were made because of the disparity in the service models.
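The CE-side checks described above might be performed with standard IOS commands such as the following (the prefix shown is hypothetical):

```
! On a DefenseCo CE router, verify the eBGP session and baseline routes
show ip bgp summary            ! session to the PE should be Established
show ip route 10.20.0.0        ! remote site's prefix should be present
!
! After the remote site is disabled, the same lookup should return
! "% Network not in table"; after reenabling, time how long it takes
! for the prefix to reappear
show ip route 10.20.0.0
```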

Jitter and Delay

The first step in the delay tests involved a simple ping run between sites with the network in a quiescent state. A packet size of 64 bytes was used to keep packetization and serialization delays to a minimum and to expose any queuing or congestion delays in the service provider networks. In the existing SMDS network, the delay between Los Angeles and New York was on the order of 100 ms round trip (r/t). The delay between New York and Dallas averaged 70 ms r/t, and the delay between Dallas and Los Angeles averaged less than 60 ms r/t. These numbers were used as the maximum permissible baselines for the service provider networks. The users of the current network experience these delays in interactive transactions and in batch transactions such as system backups. A small change in delay could significantly affect the time required for backing up a system and extend the backup beyond the specified window.
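The quiescent-state baseline described above can be gathered with an extended ping from a CE router (the address is hypothetical):

```
! 64-byte pings minimize packetization and serialization delay,
! exposing queuing or congestion delay in the provider network
ping 10.1.2.1 size 64 repeat 100
! Compare the reported min/avg/max round-trip times against the SMDS
! baselines (e.g., roughly 100 ms r/t between Los Angeles and New York)
```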

After establishing the baseline response times, DefenseCo started a more complex set of tests that required the Cisco IP SLA to generate traffic streams simulating voice and video over IP and to measure the delay and jitter of those streams under various network conditions. The packet sizes and traffic mixes for these streams, and more details on the Cisco IP SLA, are provided in the "Evaluation Tools" section. The remainder of this section examines the testing methodology and the results for the vendor networks under test. The goal of this objective set of tests was to determine whether network performance would meet the requirements for voice and video quality under heavily congested conditions.

Three test scenarios were developed to ensure that the objectives were met. The Cisco IP SLA (formerly known as the Service Assurance Agent, or SAA) generates traffic in both directions: source to destination and destination to source.

In the first scenario, the IP SLA was used to establish a baseline with a typical mix of voice and video streams in an uncongested network and without QoS enabled. The IP SLA measured both delay and jitter in each direction.

In the second scenario, an additional traffic load was added to the network such that the load exceeded the capacity in the source-to-destination direction, causing congestion at the egress port. (The speed of the egress port was reduced to exacerbate this condition.) The New York site was the destination under test. Although one network appeared to perform better than the others, none of the networks provided acceptable results under congested conditions without QoS enabled.

In the third scenario, the networks again were tested with a baseline voice and video traffic load and an additional traffic load so that the network capacity was exceeded. The traffic load was identical in all respects to the preceding scenario. The difference in this scenario was that QoS was enabled in the networks.

Congestion, QoS, and Load Testing

QoS and load tests were designed to objectively test the service providers' networks under oversubscribed conditions. The goals of the tests were to measure the benefits of enabling QoS capabilities and to provide objective metrics for comparing the service provider networks. Test Transmission Control Protocol (TTCP) generated and measured the data streams for the objective load tests. (See the "Evaluation Tools" section for more details on TTCP.) It was established during the previous test phases that all three of the service provider networks functioned with essentially zero packet loss when tested within normal subscription parameters.

Most service provider networks offer three or four classes of service (CoS). The CoS within the service provider network is determined by the amount of service a given queue receives relative to other, similar queues. For example, traffic in the silver CoS is in a higher-priority queue and receives more service than traffic in the bronze CoS, a lower-priority queue. If three CoSs are offered, they are frequently called the silver, bronze, and best-effort classes. If a fourth class is offered, the gold class, it is usually reserved for real-time traffic such as voice and video. The gold class, or real-time class, requires low-latency queuing (LLQ), in which the queue is serviced exhaustively (within configurable limits) before other queues are serviced. The silver, bronze, and best-effort queues are serviced using weighted fair queuing (WFQ) methods.
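On Cisco routers, a four-class scheme of the kind described above is typically built with the modular QoS CLI; the sketch below shows the general shape (class names, DSCP values, and percentages are illustrative, not the providers' actual settings):

```
class-map match-any GOLD
 match ip dscp ef              ! real-time voice and video
class-map match-any SILVER
 match ip dscp af31
class-map match-any BRONZE
 match ip dscp af21
!
policy-map WAN-EDGE
 class GOLD
  priority percent 30          ! LLQ: serviced exhaustively, policed at 30%
 class SILVER
  bandwidth percent 40         ! WFQ-based bandwidth guarantee
 class BRONZE
  bandwidth percent 20
 class class-default
  fair-queue                   ! best-effort
!
interface Serial0/0
 service-policy output WAN-EDGE
```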

Queuing and the use of CoS to differentiate between traffic priorities are meaningful only when the network is operating under congested conditions.

First Scenario

DefenseCo's first test scenario was used to establish a baseline without QoS in an uncongested network for each of the service providers. Three TTCP streams were generated between Los Angeles and New York. As expected, none of the streams received priority queuing, and all three streams finished with a very minimal difference in transfer time. All three service provider networks performed similarly.

Second Scenario

The second test scenario tested the benefit of QoS with traffic differentiated into two CoSs between two locations. Two TTCP streams were enabled between Los Angeles and New York. QoS was enabled in the service provider networks to give one of the TTCP streams a higher priority. As expected, the TTCP stream with the higher priority finished significantly ahead of the lower-priority stream.

Third Scenario

The third test scenario introduced additional congestion at the network's egress port. To accomplish this task, the second test scenario was rerun with an additional TTCP stream generated between Dallas and New York. The total bandwidth of the three TTCP streams (two from Los Angeles to New York and one from Dallas to New York) exceeded the egress bandwidth in New York. The TTCP stream from Dallas to New York was configured as best-effort and was generated primarily to ensure that the egress port in New York would be tested under highly congested conditions and with the best-effort CoS in the traffic mix. One of the TTCP streams from Los Angeles to New York was configured to be in the silver priority queue. The second stream was configured for the bronze priority queue. Although the results varied between the service provider networks, all three networks clearly demonstrated the benefits of QoS for the two streams from Los Angeles to New York.

Subjective Measures

The objective tests described previously were designed to test the service provider networks under specific failure, congestion, and other adverse conditions. In some cases, the traffic mix was selected to resemble a typical mix of voice and video traffic; in other cases, it was selected to stress the QoS environment. Traffic-generation tools were used to ensure that the tests were consistent across service providers, to allow the tests to be set up and performed quickly, and to provide accurate measurements that could be compared between the service provider networks. The subjective tests had several purposes. Simulations of real traffic are not a perfect representation, and subtle differences may cause undetected flaws to surface. Voice and video equipment requires signaling, not present in load simulations, to set up and tear down connections. Finally, ensuring interoperability of the voice and video equipment with the service provider networks was a critical concern.

The subjective tests used actual voice and video equipment in configurations as close as possible to the anticipated production environment. IP telephony tests were implemented with a variety of equipment, including gateways and telephones, all using the G.711 codec. Video testing was based on the H.323 standard, using one Polycom Viewstation codec at each of the test locations. The video bandwidth stream was limited to 128 kbps. Each of the service provider networks was again subjected to three test scenarios.

First, the network was tested without QoS enabled and with no test traffic other than the voice and video. This test established a baseline for voice and video performance in the service provider network. In this test scenario, two of the three service provider networks performed well, with video jitter measured in the 60-ms range. The third service provider network experienced problems resulting in unacceptable video quality.

In the second test scenario, QoS was not enabled, and in addition to the voice and video traffic, the network was subjected to a heavy simulated traffic load. This test ensured that the offered load would congest the service provider network. As expected, this test scenario resulted in unacceptable video quality in all three service provider networks. Although audio quality was acceptable in one of the service provider networks, it was less than business-quality in the remaining networks. Video jitter appeared to remain constant at approximately 60 ms, but all networks experienced heavy packet losses resulting in poor quality for the Real-Time Transport Protocol (RTP) streams.

In the third test scenario, QoS was enabled, and the service provider networks were subjected to both the voice and video traffic and a heavy simulated traffic load.

This test compared the performance of voice and video in a congested network with QoS enabled to the performance established in the baseline tests. In the final test scenario, all three service provider networks demonstrated the benefits of QoS. However, there were notable differences. One service provider network performed very close to the baseline established in the uncongested network tests. Another service provider displayed a slight degradation in quality but was within acceptable levels. The final service provider network appeared to perform well but failed on multiple occasions. Without further investigation, these differences were attributed to the tuning of QoS parameters in the service provider networks and differences in the internal design of the service provider networks.

Vendor Knowledge and Technical Performance

In addition to the evaluation tests and subjective tests, DefenseCo also rated each service provider's technical performance. The rating was based on the service provider's ability to make changes in its network when moving from a non-QoS configuration to one that provided QoS, as well as its ability to resolve problems and provide technical information when problems occurred.

In rating the three service providers, DefenseCo found that one clearly exhibited superior knowledge and had more experience in working with MPLS network design. Although it was more knowledgeable and experienced, this particular service provider's backbone performance was less robust than the other providers' and did not fully satisfy DefenseCo's requirements. Additionally, this service provider's business model was based on providing an end-to-end managed service and did not meet DefenseCo's needs for cost-and-service flexibility.

Of the two remaining service providers, one demonstrated a slightly higher level of technical expertise and was also significantly more proactive in meeting DefenseCo's requirements. Both in troubleshooting network problems and in explaining possible network solutions, this service provider was more responsive to DefenseCo's requests. In addition, this service provider's network provided an additional CoS (see the earlier section "Congestion, QoS, and Load Testing" for more details) and performed slightly better for voice and video when QoS was enabled. This service provider was selected to enter contract negotiations.

Evaluation Tools

This section describes the tools DefenseCo used in evaluating the test networks. The test results presented in the previous sections were based largely on the tools described here.

The Cisco IP SLA is an integral part of Cisco IOS. The IP SLA can be configured to monitor performance between two routers by sending synthesized packets and measuring jitter and delay. Synthesized packets of specified sizes are sent at measured intervals, and the receiving end detects variations from the specified interval to determine the delay variation introduced by the network. Some tests also require the clocks of the two routers to be tightly synchronized using the Network Time Protocol (NTP); NTP synchronization is required for calculating one-way performance metrics. The IP SLA must be enabled in the routers on both ends of the network under test.

To use the IP SLA for testing, DefenseCo first profiled the actual network traffic. Three specific streams were considered: VoIP, the audio stream from an H.323 session, and the video stream from an H.323 session.

To simulate these streams using the IP SLA, DefenseCo first monitored the actual RTP streams for voice and video over IP. After some calculations to smooth the streams and determine an average, DefenseCo determined that the following packet sizes and intervals were the most appropriate for simulating its traffic loads:

  • VoIP: 64-byte packets at 20-ms intervals

  • H.323 voice stream: 524-byte packets at 60-ms intervals

  • H.323 video stream: 1000-byte packets at 40-ms intervals
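The three profiled streams listed above might be expressed as IP SLA UDP jitter operations along the following lines (addresses, ports, and operation numbers are hypothetical, and the exact command syntax varies by IOS release):

```
! Remote router: answer IP SLA probes
ip sla responder
!
! Sending router: one udp-jitter operation per profiled stream
ip sla 10
 udp-jitter 10.1.2.1 16384 num-packets 100 interval 20
 request-data-size 64           ! VoIP: 64-byte packets at 20 ms
ip sla 20
 udp-jitter 10.1.2.1 16385 num-packets 100 interval 60
 request-data-size 524          ! H.323 voice: 524-byte packets at 60 ms
ip sla 30
 udp-jitter 10.1.2.1 16386 num-packets 100 interval 40
 request-data-size 1000         ! H.323 video: 1000-byte packets at 40 ms
!
ip sla schedule 10 life forever start-time now
ip sla schedule 20 life forever start-time now
ip sla schedule 30 life forever start-time now
```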

The IP SLA synthesized streams were used to produce audio and video test streams and to collect measurements for the objective delay and jitter test phases outlined in this chapter. Actual audio and video streams were used for the subjective test phases. After selecting a network service provider, DefenseCo continued to use the Cisco IP SLA at strategic locations in the network to monitor network performance and to ensure that the service provider met the SLAs. The continued IP SLA monitoring also provides the IT staff with a troubleshooting tool for rapidly isolating problems within the enterprise area of the network.

TTCP

TTCP is a downloadable utility originally written by Mike Muuss and Terry Slattery. Muuss also authored the first implementation of ping. The TTCP program, originally developed for UNIX systems, has since been ported to Microsoft Windows as well. TTCP is a command-line sockets-based benchmarking tool for measuring TCP and User Datagram Protocol (UDP) performance between two systems. For more information on the TTCP utility, see http://www.pcausa.com/Utilities/pcattcp.htm. TTCP was used to generate TCP streams for QoS differentiation testing. Although it is recognized that under normal operating conditions QoS would frequently be applied to UDP streams, transferring the same file using multiple concurrent TTCP streams at differing levels of QoS provided a measurable way of testing and verifying QoS operation in the service provider's network.

The TTCP tests are essentially a "horse race." Multiple streams are started concurrently. Without QoS, they should finish in a fairly even "race." When QoS is introduced, the "horses" are handicapped based on the QoS parameters applied to the stream. If QoS is functioning properly, the "horses" should finish in "win," "place," and "show" order, with the "margin of victory" based on the parameter weights for each of the selected streams. DefenseCo used TTCP to first establish a baseline for the vendor network without QoS enabled. The vendor was then asked to enable QoS, and the baseline tests were rerun and the differences measured.

Lessons Learned

At the conclusion of the testing, the primary lessons learned were in the areas of "engineering for the other 10 percent" and handling the transition and implementation. "Engineering for the other 10 percent" refers to the fact that all large networks carry a small percentage of traffic that requires special handling. In the DefenseCo network, this special traffic fell into two main categories:

  • Traffic routed based on the default route (Internet destinations)

  • Extranet traffic (from acquisitions and vendor network interconnections)

In the SMDS network, these routing problems were resolved with Layer 2 solutions. Although the traffic requiring special handling was a small percentage of the overall traffic, an economical, manageable, low-maintenance solution was required.

Default route traffic was a problem because DefenseCo had multiple connections to the Internet through different Internet service providers (ISPs).

Multiple Internet connections were required to ensure redundancy to the Internet and to maintain adequate response time for users. The specific problem was how multiple default routes could be advertised into the MPLS network. The PE router would install only one default route in its routing table, and the selection would be unpredictable; traffic from a given source might be routed to the most expensive egress to the Internet.

Extranet traffic presented a different issue. Extranet routes are advertised in the DefenseCo intranet. However, because of certain contractual restrictions, not all sites within the DefenseCo intranet can use the extranet routes to reach the extranet site. DefenseCo sites with these restrictions are required to route to extranet sites through the Internet. A deterministic capability for different sites to take different routes to the same destination was required.

To resolve these special routing issues, DefenseCo solicited input from the selected service provider, Cisco, and its own engineering staff. After considering and testing several potential solutions, DefenseCo decided on a solution using one-way generic routing encapsulation (GRE) tunnels.

The Internet hub sites in the DefenseCo network act as a CE to the MPLS network and provide firewall and address translation protection for the connection to the Internet. The solution uses one-way GRE tunnels, configured at each remote site and pointing to one of the Internet hub-site CE routers. The tunnels are initiated at the remote-site CE routers and terminate at the hub-site CE routers. Remote sites have static default routes pointing into the tunnel. The tunneled traffic is decapsulated at the hub-site CE router and is routed to the Internet based on the hub site's default route. Return-path traffic (from the Internet) takes advantage of asymmetric routing: it is routed directly onto the MPLS backbone, not tunneled, and follows the normal routing path to the destination IP address.
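The one-way tunnel arrangement described above might be sketched as follows (interface names and addresses are hypothetical):

```
! Remote-site CE router: one-way GRE tunnel toward an Internet hub site
interface Tunnel0
 ip unnumbered FastEthernet0/0
 tunnel source FastEthernet0/0
 tunnel destination 10.100.1.1   ! Internet hub-site CE router
!
! Static default route sends Internet-bound traffic into the tunnel
ip route 0.0.0.0 0.0.0.0 Tunnel0
```

The hub-site CE decapsulates this traffic and forwards it to the Internet via its own default route; return traffic from the Internet is routed natively across the MPLS backbone, making the path deliberately asymmetric.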

This solution has several advantages for DefenseCo. It has minimal cost: the GRE tunnels through the MPLS backbone add very little cost to the backbone. The CE routers and the Internet hub-site routers are configured and maintained by DefenseCo, so routes can be changed or modified without contacting the service provider (as multiple MPLS VPNs would require). Finally, performance through the GRE tunnels is predictable, based on the three centralized Internet hub sites. The major disadvantage of this solution is that the GRE tunnel interface does not go down when the hub site becomes unreachable. Route flap can occur if the route to the Internet hub site is lost, and switchover to a backup tunnel requires manual intervention. Although this solution efficiently solves DefenseCo's problem with default routing, it is not ideal. DefenseCo would have preferred a native MPLS solution to building GRE tunnels through the MPLS network, but GRE tunnels appeared to be the best compromise. DefenseCo is actively reviewing alternatives and other special routing requirements.

Transition and Implementation Concerns and Issues

DefenseCo signed a contract with the service provider selected through the evaluation process and executed an aggressive transition schedule. Within four months, DefenseCo transitioned approximately 25 to 30 percent of its U.S. sites from the SMDS network to the MPLS network.

During this period of aggressive installations, most problems resulted not from the MPLS network but from the lower layers. Missed circuit installation dates, faulty circuit installations, and Layer 2 problems where Frame Relay or ATM was used for access all contributed to delays during this critical period. DefenseCo also found that the service provider's operational support was much less responsive than the engineering team assigned during the evaluation process.

Post-Transition Results

Thus far, the MPLS network has met DefenseCo's expectations, and DefenseCo is moving forward with plans to migrate and expand videoconferencing and other services on the MPLS network. In retrospect, QoS absorbed a major share of the attention during the evaluation phase, while other areas deserved more scrutiny.

Given the experience of working with the service providers, going through the evaluation, and understanding the results, the DefenseCo engineers felt that if they faced a similar evaluation today, they would allocate more time to the following areas:

  • The transition from a Layer 2 network to a Layer 3 network severely limited DefenseCo's control over routing and its ability to influence routing neighbors. During the transition, rapid solutions were devised for "oddball" routing situations. DefenseCo believes that more of these situations should have been identified during the evaluation period, when more optimal solutions might have been designed.

  • As an enterprise, DefenseCo had limited exposure to BGP before the transition to and implementation of the MPLS network. During the evaluation, DefenseCo determined that BGP was superior to static routes and Routing Information Protocol (RIP) in its ability to control routing. Understanding how to influence Layer 3 routing using BGP metrics while transitioning to the MPLS network introduced additional complexity during this critical period. DefenseCo believes that closer examination of this area during the evaluation phase would have been beneficial.

  • The MPLS network has capabilities that exceed DefenseCo's requirements. Understanding these capabilities may lead to better-optimized and lower-cost solutions. DefenseCo believes that more focus on these areas during the evaluation may have been beneficial.
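The BGP learning curve described above largely concerns attribute manipulation on the CE-PE session. As a rough Cisco IOS illustration only (AS numbers, prefixes, and values are all hypothetical, not DefenseCo's configuration), a CE might influence path selection like this:

```
! CE router peering with the provider PE (illustrative values)
router bgp 65001
 network 10.20.0.0 mask 255.255.0.0
 neighbor 192.0.2.5 remote-as 65000    ! PE router
 neighbor 192.0.2.5 route-map FROM-PE in
 neighbor 192.0.2.5 route-map TO-PE out
!
route-map FROM-PE permit 10
 set local-preference 200    ! prefer routes learned over this session
!
route-map TO-PE permit 10
 set metric 50               ! MED: suggest a return path to the provider
```

Local preference steers the CE's own outbound path choice, whereas MED (like AS-path prepending) is only a hint to the provider about return traffic, one the provider's policy may override.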

DefenseCo gained unexpected benefits from the transition in several areas:

  • In the SMDS network, the core backbone was almost fully meshed, and DefenseCo ran EIGRP across it. Given the scale of the network, circuit failures and other outages caused frequent changes in the backbone, and convergence events rippled across the full mesh. Migrating to a core/distribution architecture to contain these events would have been cost-prohibitive for DefenseCo; in the MPLS network, the service provider supplies the core routers instead. A key lesson is to minimize IGP convergence issues in the backbone as part of the service migration strategy.

    This factor requires that the service provider understand the customer network. In the MPLS network, each CE router has only the PE router as a neighbor, and BGP peering has greatly improved routing stability. The customer truly benefited from migrating to an MPLS-based solution.

  • The underlying transport for the MPLS network is a high-speed all-optical backbone. Through some network tuning during the transition phase, the service provider significantly reduced response time and jitter, improving service well beyond the capabilities of the SMDS network.




Selecting MPLS VPN Services
ISBN: 1587051915
Year: 2004
Pages: 136