Ethernet


As you have seen for storage and DWDM, Ethernet is an advanced technology. Before launching into a discussion of its use over MSPP, let's take a look at its brief history. One point to clarify at the outset is that Ethernet has been around for several decades. Ethernet itself is not a "new" technology, but its use over MSPP is an emerging technique for delivering Ethernet transport.

A Brief History of Ethernet

Personal computers hadn't proliferated in any significant way when researchers and developers started trialing what would later turn out to be the next phase of the PC revolution: connecting these devices to a network. The year 1977 is widely recognized as the PC's big arrival; however, Ethernet, the technology that today attaches millions of PCs to LANs, was invented four years earlier, in the spring of 1973.

The source of this forethought was Xerox Corporation's Palo Alto Research Center (PARC). In 1972, PARC researchers were working on both a prototype of the Alto computer (a personal workstation with a graphical user interface) and a page-per-second laser printer. The plan was for all PARC employees to have computers and to tie all the computers to the laser printer.

The task of creating the network fell to Bob Metcalfe, an MIT graduate who had joined Xerox that year. As Metcalfe says, the two novel requirements of this network were that it had to be very fast to accommodate the laser printer and that it had to connect hundreds of computers.

By the end of 1972, Metcalfe and other PARC experts had completed an experimental 3-Mbps PC LAN. The following year, Metcalfe defined the general principles of what became the first PC LAN backbone. Additionally, this team developed the first PC LAN board that could be installed inside a PC to create a network.

Metcalfe eventually named this PC LAN backbone Ethernet, based on the idea of the "luminiferous ether," the medium that scientists once thought carried electromagnetic waves through space.

Ethernet defines the wire and chip specifications of PC networking, along with the software specifications regarding how data is transmitted. One of its pillars is its system of collision detection and recovery, called carrier sense multiple access with collision detection (CSMA/CD), which we discuss later in this chapter.

Metcalfe worked feverishly to get Intel Corp., Digital, and Xerox to agree on Ethernet as the standard way of sending packets in a PC network. He then went on to found 3Com Corporation (short for "computer, communication, compatibility"). 3Com introduced its first product, EtherLink (the first PC Ethernet network interface card), in 1982. Early 3Com customers included Trans-America Corp. and the White House.

Ethernet gained popularity in 1983 and was soon adopted as an international standard by the Institute of Electrical and Electronics Engineers, Inc. (IEEE). However, one major computer force did not get on board: IBM, which developed a very different LAN mechanism called Token Ring. Despite IBM's resistance, Ethernet went on to become the most widely installed technology for creating LANs. Today it is common to have Fast Ethernet, which runs at 100 Mbps, and GigE, which operates at 1 Gbps. Most desktop PCs in large corporations run at 10/100 Mbps; the network senses the speed of the PC card and automatically adjusts to it, a capability known as autosensing.

Fast Ethernet

The Fast Ethernet (FE) standard was officially ratified in the summer of 1995. FE is ten times the speed of 10BaseT Ethernet. Fast Ethernet (also known as 100BaseT) uses the same CSMA/CD protocol and Category 5 cabling support as its predecessor, while offering new features, such as full-duplex operation and autonegotiation. FE calls for three types of transmissions over various physical media:

  • 100BaseTX The most common application; its cabling is similar to 10BaseT. It uses Category 5-rated twisted-pair copper cable to connect various data-networking elements, using an RJ-45 jack.

  • 100BaseFX Used predominantly to connect switches either between wiring closets or between buildings, using multimode fiber-optic cable.

  • 100BaseT4 Uses two additional pairs of wiring, which enables Fast Ethernet to operate over Category 3-rated or better cables.

GigE

The next evolutionary leap for Ethernet was driven by the Gigabit Ethernet Alliance, which was formed in 1996; the resulting standard was ratified in the summer of 1999. GigE specifies a physical layer that uses a mixture of established technologies from the original Ethernet specification and the ANSI X3T11 Fibre Channel (FC) specification:

  • 1000BaseX A standard based on the FC physical layer. It specifies the technology for connecting workstations, supercomputers, storage devices, and other devices with fiber-optic and copper shielded twisted-pair (STP) media based on the cable distance.

  • 1000BaseT A GigE standard for long-haul copper unshielded twisted-pair (UTP) media.

Because it is similar to 10-Mbps and 100-Mbps Ethernet, GigE offers an easy, incremental migration path for bandwidth requirements. IEEE 802.3 framing and CSMA/CD are common among all three standards.

The common framing and packet size (64- to 1518-byte packets) is key to the ubiquitous connectivity that 10-/100-/1000-Mbps Ethernet offers through LAN switches and routers in the WAN. Figure 3-24 shows the GigE frame format.

Figure 3-24. GigE Frame Format
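The 64- to 1518-byte bounds cited above fall directly out of the standard 802.3 frame field widths. A minimal sketch (the helper function and names are ours; the field sizes are the classic IEEE 802.3 values, excluding the preamble):

```python
# Classic IEEE 802.3 frame fields and their sizes in bytes (preamble excluded).
FIELDS = [
    ("destination MAC", 6),
    ("source MAC", 6),
    ("type/length", 2),
    ("payload (min/max)", (46, 1500)),
    ("FCS", 4),
]

MIN_FRAME = 6 + 6 + 2 + 46 + 4      # 64 bytes
MAX_FRAME = 6 + 6 + 2 + 1500 + 4    # 1518 bytes

def is_valid_frame_size(nbytes: int) -> bool:
    """True if a frame (excluding preamble) is within the 802.3 size limits."""
    return MIN_FRAME <= nbytes <= MAX_FRAME
```

Because this framing is identical across 10-, 100-, and 1000-Mbps Ethernet, a frame that passes this check at one speed is valid at all three.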


Ethernet Emerges

Why did Ethernet emerge as the victor over competitors such as Token Ring?

Since its infancy, Ethernet has thrived primarily because of its flexibility and ease of implementation. Saying "LAN" or "network card" is commonly understood to mean "Ethernet." The capability to use existing UTP telephone wire for 10-Mbps Ethernet paved the path into the home and small office for its long-term proliferation.

The CSMA/CD Media Access Control (MAC) protocol defines the rules and conventions for access in a shared network. The name itself describes how the traffic is controlled:

  1. Devices attached to the network check, or sense, the carrier (wire) before transmitting.

  2. If the medium is in use, the device waits before transmitting. ("Multiple access" refers to many devices sharing the same network medium.)

  3. If two devices transmit at the same time, a collision occurs. Each device detects the collision, waits for a random timer to "time out," and then retransmits.
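The procedure above can be sketched in Python. This is a hedged illustration, not a driver implementation: `medium_busy` and `detect_collision` stand in for physical-layer functions, and the backoff follows the spirit of truncated binary exponential backoff.

```python
import random

def csma_cd_send(medium_busy, detect_collision, max_attempts=16):
    """Sketch of the CSMA/CD access procedure.

    medium_busy()      -> True while another station is transmitting (carrier sense)
    detect_collision() -> True if a collision occurred during our transmission
    Returns the number of attempts used on success.
    """
    for attempt in range(max_attempts):
        while medium_busy():              # 1. sense the carrier; defer while in use
            pass
        if not detect_collision():        # 2. transmit and listen for a collision
            return attempt + 1
        # 3. collision: wait a random number of slot times, then retry
        slots = random.randrange(2 ** min(attempt + 1, 10))
        _wait = slots                     # placeholder for a real slot-time delay
    raise RuntimeError("excessive collisions; frame dropped")
```

A real MAC enforces the slot-time delay in hardware; the loop here only shows the decision logic.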

With switched Ethernet, each sender and receiver pair gets the full bandwidth. Ethernet signaling and cabling conventions specify the use of a transceiver to attach a device to the physical network medium. Transceivers in the network interface cards or internal circuitry perform many of the physical-layer functions, including carrier sensing and collision detection.

Let's take a look at the growth of Ethernet beyond the LAN and into the metropolitan-area network (MAN), which is enabled by MSPPs.

Ethernet over MSPP

Today's evolving networks are driven by the demand for a wide variety of high-bandwidth data services. Enterprises must scale up and centralize their information technology to stay competitive. Service providers must increase capacity and service offerings to meet customer requirements while maintaining their own profitability. Both enterprises and service providers need to lower capital and operating expenditures as they evolve their networks to a simplified architecture. Additionally, service providers must accelerate time to market for the delivery of value-added services, and enterprises must accelerate and simplify the process of adding new users. Increasingly, service providers and enterprises are looking to Ethernet as an option because of its bandwidth capabilities, perceived cost advantages, and ubiquity in the enterprise.

The vast fiber build-out over the last few years has spurred the emergence of next-generation services in the metropolitan market, including wavelength services and Ethernet services. As discussed, Ethernet providers can deploy a single interface type and then remotely change the end user's bandwidth profile without the complexity or cost associated with Asynchronous Transfer Mode (ATM), and at higher speeds than Frame Relay. ATM requires complex network protocols, including Private Network Node Interface (PNNI) to disseminate address information; LAN Emulation (LANE), which does not scale at all in the WAN; and RFC 1483 ATM bridged encapsulation, which works only for point-to-point circuits. Frame Relay, on the other hand, is simple to operate, but its maximum speed is about 50 Mbps. Ethernet scales from 1 Mbps to 10 Gbps in small increments.

Because of Ethernet's cost advantages, relative simplicity, and scalability, service providers have become very interested in offering it. Service providers use it for hand-off between their network and the enterprise customer, and for transporting those Ethernet frames through the service provider network.

Many, if not most, service providers today use a transport layer made up of SONET or SDH. Therefore, any discussion of Ethernet service offerings must include a means of using the installed infrastructure. An MSPP is a platform that can transport traditional TDM traffic, such as voice, and also provide the foundational infrastructure for data traffic, for which Ethernet is optimized. The ability to integrate these capabilities allows the service provider to deploy a cost-effective, flexible architecture that can support a variety of different services; hence, the emergence of Ethernet over MSPP.

Why Ethernet over MSPP?

Ethernet over MSPP solutions enable service providers and enterprises to take advantage of fiber-optic capabilities to provide much higher levels of service density. This, in turn, lowers the cost per bit delivered throughout the network. MSPP solutions deliver profitability for carriers and cost reduction for enterprises through the following:

  • Backward compatibility with legacy optical systems, supporting all restoration techniques, topologies, and transmission criteria used in legacy TDM and optical networks

  • Eliminated need for overlay networks while providing support at the network edge for all optical and data interfaces, thus maximizing the types of services offered at the network edge

  • Use of a single end-to-end provisioning and management system to reduce management overhead and expenditure

  • Rapid service deployment

A significant advantage of Ethernet over MSPP is that it eliminates the need for parallel and overlay networks. In the past, services such as DS1s and DS3s, Frame Relay, and ATM required multiple access network elements and, in many cases, separate networks. These services were overlaid onto TDM networks or were built as completely separate networks. Multiple overlay networks pose many challenges:

  • Separate fiber/copper physical layer

  • Separate element and network management

  • Separate provisioning schemes

  • Training for all of the above

  • An overlay workforce

All of these come at significant cost, so that even if a new service's network elements are less expensive than additional TDM network elements, the operational expenses far outweigh the capital saved by buying less expensive network elements.

Therefore, without an MSPP, a provider that wants to offer new Ethernet data services has to build an overlay network, as shown in Figure 3-25.

Figure 3-25. An Additional Network Built Out to Accommodate New Services, Sometimes Called an "Overbuild"


MSPP allows for one simple integrated network, as shown in Figure 3-26.

Figure 3-26. An Integrated Network Built Out over MSPP


Another important feature of Ethernet over MSPPs is that MSPPs support existing management systems. There are virtually as many management systems as there are carriers. These systems can include one or more of the following: network element vendor systems, internally developed systems, and third-party systems. The key is flexibility. The MSPP must support all the legacy and new management protocols, including Transaction Language 1 (TL-1), Simple Network Management Protocol (SNMP), and Common Object Request Broker Architecture (CORBA). SNMP and CORBA are present in many of today's non-MSPP network elements, but TL-1 is not. TL-1, which was developed for TDM networks, is the dominant legacy protocol and is a must-have for large service providers delivering a variety of services.

The final key advantage of Ethernet over MSPPs is that carriers can offer rapid service deployment in two ways. The first is the time it takes to get the Ethernet service network in place. Most service providers already have a physical presence near their customers. However, if their existing network elements are not MSPPs, they have to build the new overlay network before they can turn up service. This can take months. With an MSPP, adding a new service is as simple as adding a new card to the MSPP, so network deployment can go from months to virtually on-demand. Furthermore, because many MSPPs support DWDM, as the number of customers grows, the bandwidth back to the central office can be scaled gracefully by adding cards instead of pulling new fiber or adding an overlay DWDM system.

Metro Ethernet Services

As discussed in Chapter 1, "Market Drivers for Multiservice Provisioning Platforms," several major deployment models exist for Ethernet services:

  • Ethernet Private Line Service

  • Ethernet Wire Service

  • Ethernet Relay Service

  • Ethernet Multipoint Service

  • Ethernet Relay Multipoint Service

Here is a brief review of these Ethernet services.

Ethernet Private Line Service

Ethernet Private Line (EPL) Service, shown in Figure 3-27, is a dedicated, point-to-point, fixed-bandwidth, nonswitched link between two customer locations, with guaranteed bandwidth and payload transparency end to end. The EPL service is ideal for transparent LAN interconnection and data center integration, for which wire-speed performance and VLAN transparency are important. Although TDM and OC-N based facilities have been the traditional means of providing Private Line Service, the EPL service is Ethernet over SONET.

Figure 3-27. Ethernet Private Line Service Using MSPP over DWDM


Traditionally, Private Line Services (PLSs) have been used for TDM applications such as voice or data, and they do not require the service provider to offer any added value, such as Layer 3 (network) or Layer 2 addressing. An Ethernet PLS is a point-to-point Ethernet connection between two subscriber locations. It is symmetrical, providing the same bandwidth performance for sending or receiving. Ethernet PLS is equivalent to a Frame Relay permanent virtual circuit (PVC), but with a greater range of bandwidth, the capability to provision bandwidth in increments, and more service options. Additionally, it is less expensive and easier to manage than a Frame Relay PVC because the customer premises equipment (CPE) costs are lower for subscribers, and subscribers do not need to purchase and manage a Frame Relay switch or a WAN router with a Frame Relay interface.

Ethernet Wire Service

Like the EPL Service, the Ethernet Wire Service (EWS), depicted in Figure 3-28, is a point-to-point connection between a pair of sites, sometimes called an Ethernet virtual circuit (EVC). EWS differs from EPL in that it is typically provided over a shared, switched infrastructure within the service-provider network, which can be shared among multiple customers. The benefit of EWS to the customer is that it is typically offered with a wider choice of committed bandwidth levels, up to wire speed. To help ensure privacy, the service provider segregates each subscriber's traffic by applying VLAN tags on each EVC.

Figure 3-28. Ethernet Wire Service with Multiple VLANs over SONET


EWS is considered a port-based service. All customer packets are transmitted to the destination port transparently, and the customers' VLAN tags are preserved from the customer equipment through the service-provider network. This capability is called all-to-one bundling.

Figure 3-28 shows EWS over MSPP.

Ethernet Relay Service

Ethernet Relay Service (ERS), shown in Figure 3-29, enables multiple instances of service to be multiplexed onto a single customer User-Network Interface (UNI) so that the UNI can belong to multiple ERS instances. The resulting "multiplexed UNI" supports point-to-multipoint connections between two or more customer-specified sites, similar to Frame Relay service. ERS also provides Ethernet access to other Layer 2 services (Frame Relay and ATM) so that the service provider's customers can begin using Ethernet services without replacing their existing legacy systems.

Figure 3-29. Ethernet Relay Service, XC


ERS is ideal for interconnecting routers in an enterprise network, and for connecting to Internet service providers (ISPs) and other service providers for dedicated Internet access (DIA), virtual private network (VPN) services, and other value-added services. Service providers can multiplex connections from many end customers onto a single Ethernet port at the point of presence (POP), for efficiency and ease of management. The connection identifier in ERS is a VLAN tag. Each customer VLAN tag is mapped to a specific Ethernet virtual connection.
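The VLAN-tag-to-EVC mapping described above can be pictured as a simple lookup table on the multiplexed UNI. The tags and EVC names below are invented for illustration only:

```python
# Hypothetical ERS multiplexed UNI: each customer VLAN tag maps to one EVC.
# All tags and EVC names here are invented examples, not a vendor's schema.
ERS_UNI_MAP = {
    100: "evc-to-isp-dia",       # dedicated Internet access
    200: "evc-to-vpn-service",   # VPN service hand-off
    300: "evc-to-branch-site",   # intranet connection
}

def evc_for_frame(vlan_tag: int) -> str:
    """Return the EVC carrying a tagged frame that arrives on this UNI."""
    try:
        return ERS_UNI_MAP[vlan_tag]
    except KeyError:
        raise ValueError(f"VLAN {vlan_tag} has no EVC on this UNI") from None
```

This is exactly the sense in which the VLAN tag plays the role that a DLCI plays in Frame Relay: it is a connection identifier, not a broadcast domain.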

Ethernet Multipoint Service

A multipoint-to-multipoint version of EWS, Ethernet Multipoint Service (EMS), shown in Figure 3-30, shares the same technical access requirements and characteristics. The service-provider network acts as a virtual switch for the customer, providing the capability to connect multiple customer sites and allow for any-to-any communication. The enabling technology is virtual private LAN service (VPLS), implemented at the network-provider edge (N-PE).

Figure 3-30. Ethernet Multipoint Service with Multiple VLANs over SONET


Ethernet Relay Multipoint Service

The Ethernet Relay Multipoint Service (ERMS) is a hybrid of EMS and ERS. It offers the any-to-any connectivity characteristics of EMS, as well as the service multiplexing of ERS. This combination enables a single UNI to support a customer's intranet connection, and one or more additional EVCs for connection to outside networks, ISPs, or content providers.

Table 3-3 summarizes the characteristics of metro Ethernet access solutions.

Table 3-3. Summary of Metro Ethernet Access Services

Service   EVC Type   CPE      Characteristics
EPL       P-to-P     Router   VLAN transparency, bundling
EWS       P-to-P     Router   VLAN transparency, bundling, Layer 2 Tunneling Protocol
ERS       P-to-P     Router   Service multiplexing
EMS       MP-to-MP   Router   VLAN transparency, bundling, Layer 2 Tunneling Protocol
ERMS      MP-to-MP   Router   Service multiplexing, VLAN transparency, bundling, Layer 2 Tunneling Protocol


The aforementioned services describe the way in which a service provider markets its Ethernet service, or even how an enterprise might deploy its own private service, but they do not provide any specification for the underlying infrastructure. Even though it is not necessary that the Ethernet services be deployed over an MSPP architecture, Figure 3-27 through Figure 3-30 show these Ethernet services deployed over MSPPs that use either native SONET or DWDM for transport.

Two of the major Ethernet-over-SONET infrastructure architectures supported by Ethernet over MSPP are point to point, or SONET mapping, and resilient packet ring (RPR) (which is a type of multilayer switched Ethernet). Each configuration can be implemented in a BLSR, UPSR, or linear automatic protection switching (APS) network topology.

Point-to-Point Ethernet over MSPP

Point-to-point configurations over a BLSR or a linear APS are provided with full SONET switching protection. Point-to-point circuits do not need a spanning tree because the circuit has only two termination points. Therefore, the point-to-point configuration allows a simple circuit creation between two Ethernet termination points, making it a viable option for network operators looking to provide 10-/100-Mbps access drops for high-capacity customer LAN interconnects, Internet traffic, and cable modem traffic aggregation. This service is commonly referred to as EPL.

SONET Mapping

Mapping involves encapsulating the Ethernet data directly into the STS bandwidth of SONET and transporting the Ethernet within the SONET payload around the ring from one MSPP to another, where it is either dropped or continues to the next MSPP node. In this application, for example, a 10-Mbps Ethernet circuit could be mapped directly into an STS-1, a 100-Mbps circuit into an STS-3c, and a GigE circuit into 24 STSs.

STS bandwidth scaling also allows for rudimentary statistical multiplexing and bandwidth oversubscription. This involves mapping two or more Ethernet circuits into a given STS-Nc payload. For example, assume two customers: a school district and a local cable provider that delivers cable modem-based residential subscriber services. Both customers are provided a 100-Mbps interface, to a backbone switch and a cable modem terminating device, respectively. Because of time-of-day demand fluctuations, the two customers rarely use their full provisioned bandwidth at the same time. As such, the service provider might choose to place traffic from both customers onto a single STS-3c circuit across the SONET backbone. (Note that traffic is logically separated with IEEE 802.1Q tags applied at port ingress.)

Previously, each 100-Mbps customer circuit consumed a full OC-3c (155 Mbps) of bandwidth across the network. Through STS bandwidth scaling, however, one OC-3c pipe has been freed. This enhances service-provider profitability by allowing the provider to generate additional revenue, delivering additional data and TDM services with no additional capital expenditure.
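The mappings and the oversubscription example can be made concrete with approximate SONET payload rates. The figures below are assumed round numbers (an STS-1 carries roughly 49.5 Mbps of usable payload; exact payload capacities differ slightly):

```python
# Approximate usable payload of SONET containers, in Mbps (assumed figures).
STS_PAYLOAD_MBPS = {"STS-1": 49.5, "STS-3c": 149.8, "STS-24c": 1188.0}

# Direct mapping described in the text: Ethernet rate (Mbps) -> container.
DIRECT_MAP = {10: "STS-1", 100: "STS-3c", 1000: "STS-24c"}

def utilization(ethernet_mbps: int) -> float:
    """Fraction of the container's payload a fully loaded circuit would use."""
    container = DIRECT_MAP[ethernet_mbps]
    return ethernet_mbps / STS_PAYLOAD_MBPS[container]

# Oversubscription example from the text: two 100-Mbps customers share one
# STS-3c. If both burst at once, offered load exceeds the container:
shared_demand = 2 * 100                                     # Mbps offered
oversubscription = shared_demand / STS_PAYLOAD_MBPS["STS-3c"]
```

Direct mapping of a 10-Mbps circuit uses only about a fifth of an STS-1's payload, which is exactly the inefficiency statistical multiplexing recovers.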

Limitations of Point-to-Point Ethernet over SONET

Ring topology is a natural match for SONET-based TDM networks that constitute the bulk of existing metro-network infrastructure. However, there are well-known disadvantages to using SONET for transporting data traffic (or point-to-point SONET data solutions, such as Ethernet over SONET).

SONET was designed for point-to-point, circuit-switched applications (such as voice traffic), and most of its limitations stem from these origins. These are some of the disadvantages of using SONET rings for data transport:

  • Fixed circuits SONET provisions point-to-point circuits between ring nodes. Each circuit is allocated a fixed amount of bandwidth that is wasted when not used. In a SONET access network, each node on a four-node ring is allocated only one quarter of the ring's total bandwidth (say, OC-3 each on an OC-12 ring). That fixed allocation caps the maximum burst data-transfer rate between endpoints, a disadvantage for data traffic, which is inherently bursty.

  • Waste of bandwidth for meshing If the network design calls for a logical mesh, the network designer must divide the OC-12 of ring bandwidth into n(n - 1)/2 circuits, where n is the number of nodes provisioned. Provisioning the circuits necessary to create a logical mesh over a SONET ring not only is difficult, but it also results in extremely inefficient use of ring bandwidth. Because the amount of data traffic that stays within metro networks is increasing, a fully meshed network that is easy to deploy, maintain, and upgrade is becoming an important requirement.

  • Multicast traffic On a SONET ring, multicast traffic requires each source to allocate a separate circuit for each destination. A separate copy of the packet is sent to each destination. The result is multiple copies of multicast packets traveling around the ring, wasting bandwidth.

  • Wasted protection bandwidth Typically, 50 percent of ring bandwidth is reserved for protection. Although protection is obviously important, SONET does not achieve this goal in an efficient manner that gives the provider the choice of how much bandwidth to reserve for protection.
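The meshing penalty in the list above is easy to quantify. A sketch, assuming an OC-12 line rate of roughly 622 Mbps split evenly across the mesh circuits:

```python
OC12_MBPS = 622  # approximate OC-12 line rate (assumed round figure)

def mesh_circuits(n: int) -> int:
    """Point-to-point circuits needed for a full logical mesh of n nodes."""
    return n * (n - 1) // 2

def mbps_per_circuit(n: int) -> float:
    """Even split of the ring bandwidth across all mesh circuits."""
    return OC12_MBPS / mesh_circuits(n)
```

For example, eight nodes need 28 circuits, leaving only about 22 Mbps per circuit if the OC-12 is split evenly, far below even Fast Ethernet speed.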

Ethernet over a Ring?

Will Ethernet over a ring improve upon point-to-point Ethernet over SONET? Ethernet does make efficient use of available bandwidth for data traffic and offers a far simpler and inexpensive solution for data traffic. However, because Ethernet is optimized for point-to-point or meshed topologies, it does not make the most of the ring topology.

Unlike SONET, Ethernet does not take advantage of a ring topology to implement a fast protection mechanism. Ethernet generally relies on the Spanning Tree Protocol, which is notoriously slow, to eliminate all loops from a switched network. Even though the Spanning Tree Protocol can be used to achieve path redundancy, its comparatively slow recovery mechanism requires the failure condition to be propagated serially to each upstream node after a fiber cut. Link aggregation (IEEE 802.3ad) can provide a link-level resiliency solution, but it is comparatively slow (about 500 ms versus 50 ms) and is not appropriate for providing path-level protection.

Ethernet is also not good at creating an environment for equitable sharing of ring bandwidth. Ethernet switches can provide link-level fairness, but this does not necessarily or easily translate into overall fairness in bandwidth allocation. A simpler and more efficient method comes from taking advantage of the ring topology to create a universal equity plan for bandwidth allocation.

As we've discussed, neither SONET nor Ethernet is ideal for handling data traffic on a ring network. SONET does take advantage of the ring topology, but it does not handle data traffic efficiently and wastes ring bandwidth. Although Ethernet is a natural fit for data traffic, it is actually difficult to implement on a ring and does not make the most of the ring's capabilities.

One final note before we venture into our next topic of RPR: The Rapid Spanning Tree Protocol (RSTP, 802.1w) is another step in the evolution of Ethernet over SONET; it evolved from the Spanning Tree Protocol (the 802.1D standard) and provides faster spanning-tree convergence after a topology change. The STP terminology (and parameters) remain the same in RSTP. RSTP was used as a means of ring convergence before the development of RPR, which we discuss next.

Resilient Packet Ring

Resilient packet ring is an emerging network architecture designed to meet the requirements of a packet-based metropolitan-area network. Unlike incumbent architectures based on Ethernet switches or SONET add/drop muxes (ADMs), RPR approaches the metro bandwidth limitation problem differently. RPR provides more than just mere SONET mapping of Ethernet over a self-healing, "resilient" ring.

The problem of effectively managing a shared resource (the fiber ring, which must be shared across thousands of subscribers in a metro area) is most efficiently solved at the MAC layer of the protocol stack.

By creating a MAC protocol for ring networks, RPR attempts to find a fundamental solution to the metro bottleneck problem. Other solutions attempt to make incremental changes to existing products but do not address the fundamental problem and, hence, are inefficient. Neither SONET nor Ethernet switches address the need for a MAC layer designed for the MAN. SONET employs Layer 1 techniques (point-to-point connections) to manage capacity on a ring. Ethernet switches rely on Ethernet bridging or IP routing for bandwidth management. Consequently, the network is either underutilized, in the case of SONET, or nondeterministic, in the case of Ethernet switches.

Instead of being a total replacement of SONET and Ethernet, RPR is complementary to both. Both SONET and Ethernet are excellent Layer 1 technologies. Whereas SONET was designed as a Layer 1 technology, Ethernet has evolved into one. Through its various evolutions, Ethernet has transformed from the CSMA/CD shared-media network architecture to a full-duplex, point-to-point switched network architecture.

Most of the development in Ethernet has been focused on its physical layer, or Layer 1, increasing the speed at which it operates. The MAC layer has been largely unchanged. The portion of the MAC layer that continues to thrive is the MAC frame format. RPR is a MAC protocol and operates at Layer 2 of the OSI protocol stack. By design, RPR is Layer 1 agnostic, which means that RPR can run over either SONET or Ethernet. RPR enables carriers and enterprises to build more scalable and efficient metro networks using SONET or Ethernet as physical layers.

RPR Characteristics

RPR has several unique attributes that make it an ideal platform for delivery of data services in metro networks.

Resiliency

Ethernet traffic is sent in both directions on a dual counter-rotating ring to achieve maximum bandwidth utilization of the SONET/SDH ring. Ring failover is often described as "self-healing" or "automatic recovery"; SONET rings can recover in less than 50 ms.

Sharing Bandwidth Equitably

Ring networks also have an innate advantage for implementing algorithms to control bandwidth use. Ring bandwidth is a shared resource and is susceptible to being dominated by individual users or nodes. An algorithm that allocates bandwidth justly gives every customer on the ring an equitable amount of the ring bandwidth, ideally without the burden of numerous provisioned circuits. A ring-level fairness algorithm can, and should, allocate ring bandwidth as a single resource. Bandwidth policies that allow the maximum ring bandwidth to be used between any two nodes when there is no congestion can be implemented without the inflexibility of a fixed circuit-based system such as SONET, but with greater effectiveness than point-to-point Ethernet.

Easier Activation of Services

A common challenge experienced by data-service customers is the time it takes carriers to provision services. Installation, testing, and provisioning can take anywhere from 6 weeks to 6 months for DS1 and DS3 services; services at OC-N rates can take even more time.

A significant portion of this delay in service lead times can be attributed to the underlying SONET infrastructure and its circuit-based provisioning model. Traditionally, the creation of an end-to-end circuit took numerous steps, especially before MSPP.

Initially, the network technician identifies the circuit's physical endpoints to the operational support system. The technician must then configure each node within the ring for all the required circuits that will either drop at a node or continue around the ring. This provisioning operation can be time- and labor-intensive. MSPPs automate some of the circuit-provisioning steps, but the technician still needs to conduct traffic engineering manually to optimize bandwidth utilization on the ring. The technician must be aware of the network topology, the traffic distribution on the ring, and the available bandwidth on every span traversed by the circuit. Service provisioning on a network of Ethernet switches is improved because provisioning of circuits is not required through each node. However, circuit provisioning still occurs node by node. Additionally, if carriers want to deliver SLAs over the network, the network planner still needs to manually provision the network for the required traffic.

By comparison, an RPR system provides a very basic service model. In an RPR system, the ring functions as a shared medium. All the nodes on the ring share bandwidth on the packet ring. Each node has visibility into the capacity available on the ring. Therefore, circuit provisioning of a new service is much easier. There is no need for a node-by-node and link-by-link capacity planning, engineering, and provisioning exercise. The network operator simply identifies a traffic flow and specifies the QoS that each traffic type should get as it traverses the ring. Thus, there is no need for circuit provisioning because each node is aware of every other node on the ring, based on the MAC address.

Broadcast or Multicast Traffic Is Better Handled

RPRs are a natural fit for broadcast and multicast traffic. As already shown, for unicast traffic, or traffic from one entity to another, nodes on an RPR generally have the choice of stripping packets from the ring or forwarding them. However, for a multicast, the nodes can simply receive the packet and forward it, until the source node strips the packet. This means that multicasting or broadcasting a data packet requires that only one copy be sent around the ring, not n copies, where n is the number of nodes. This reduces the amount of bandwidth required by a factor of n.
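The factor-of-n saving can be made concrete with a small illustrative sketch (not from the source) comparing link-transmission counts on a unidirectional ring:

```python
def multicast_transmissions_rpr(num_nodes):
    """RPR multicast: one copy circles the ring and the source strips it,
    so the packet crosses each of the n links exactly once."""
    return num_nodes

def multicast_transmissions_unicast(num_nodes):
    """Naive alternative: a separate unicast copy to each of the other
    n-1 nodes; the copy to the k-th neighbor crosses k links."""
    return sum(k for k in range(1, num_nodes))

rpr_cost = multicast_transmissions_rpr(8)        # 8 link transmissions
naive_cost = multicast_transmissions_unicast(8)  # 1 + 2 + ... + 7 = 28
```

For an 8-node ring, the single circulating copy costs 8 link transmissions versus 28 for per-node unicast copies.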

Layer 1 Flexibility

The basic advantage of a packet ring is that each node can assume that a packet sent on the ring will eventually reach its destination node, regardless of which path around the ring it has taken. Because the nodes identify themselves with the ring, only three basic packet-handling actions are needed: insertion (adding a packet into the ring), forwarding (sending the packet onward), and stripping (taking the packet off the ring). This decreases the magnitude of processing required for individual nodes to communicate with each other, especially as compared with a meshed network, in which each node has to decide which exit port to use for each packet as a part of the forwarding process.
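The three actions can be sketched as follows; the class and field names are illustrative, not taken from any actual RPR implementation:

```python
class RprNode:
    """Minimal sketch of the three RPR packet-handling actions."""
    def __init__(self, mac):
        self.mac = mac

    def insert(self, dst_mac, payload):
        """Insertion: add a new packet onto the ring."""
        return {"src": self.mac, "dst": dst_mac, "data": payload}

    def handle(self, packet):
        """Decide whether to strip the packet or forward it onward."""
        if packet["dst"] == self.mac:
            return "strip"      # destination removes the packet from the ring
        if packet["src"] == self.mac:
            return "strip"      # source strips its own packet after a full loop
        return "forward"        # every other node simply passes it along

a, b, c = RprNode("A"), RprNode("B"), RprNode("C")
pkt = a.insert("B", "hello")
```

Note how no node ever computes an exit port: the only decision is strip versus forward, which is what keeps per-node processing light.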

RPR: A Multilayer Switched Ethernet Architecture over MSPP

The term multilayer switched Ethernet is used here because RPR goes beyond mere Layer 1 SONET payload mapping of Ethernet, a "mapper" approach, and uses Layer 2 and even Layer 3 features from the OSI reference model for data networking. This technology truly delivers on the promise of the MSPP and can be found on a single card; Cisco calls it the ML (Multilayer) card, part of the ONS 15454 MSPP. This card supports multiple levels of priority of customer traffic that can be managed using existing operations support systems (OSS).

This multilayer switched design offers service providers and enterprises alike several key features and benefits:

  • The multilayer switched design brings packet processing to the SONET platform. The benefit is that new services can be created around the notion of guaranteed and peak bandwidth, a feature that really enhances the service-provider business model.

  • The multilayer switched design offers the capability to create multipoint services. This means that the provider can deploy the equivalent of a private-line service and a Frame Relay service out of the same transmission network infrastructure, thereby realizing significant cost savings.

  • The multilayer switched design delivers carrier-class services. The key benefit is that the resiliency of the service is derived from the SONET/SDH 50-ms failover.

  • The multilayer switched design integrates into Transaction Language One (TL-1) and SNMP. The key benefit is that these services can be created to a large extent within the existing service provider provisioning systems. Therefore, there is minimal disruption to existing business processes. Through an element management system (EMS), the multilayer switched Ethernet cards extend the data service capabilities of this technology, enabling service providers to evolve the data services available over their optical transport networks.

    The Cisco Systems multilayer switched design consists of two cards: a 12-port 10/100BaseT module with faceplate-mounted RJ-45 connectors, and a 2-port GigE module with two receptacle slots for field-installable, industry-standard SFP optical modules.

    Additionally, each service interface supports bandwidth guarantees down to 1 Mbps, enabling service providers to aggregate traffic from multiple customers onto shared network bandwidth, while still offering TDM or optical services from the same platform.

Q-in-Q

The multilayer switched design supports Q-in-Q, a technique that expands the VLAN space by retagging the tagged packets entering the service provider infrastructure. When a service provider's ingress interface receives an Ethernet frame from the end user, a second-level 802.1Q tag is placed in that frame, immediately preceding the original end-user 802.1Q tag. The service provider's network then uses this second tag as the frame transits the metro network. The multilayer switched card interface at the egress removes the second tag and hands off the original frame to the end customer. This builds a Layer 2 VPN in which traffic from different business customers is segregated inside the service provider network, yet the service provider can deliver a service that is completely transparent to the Layer 2 VLAN configuration of each enterprise customer.
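The tag push at ingress and pop at egress can be sketched as follows. The representation is deliberately simplified to (TPID, VLAN ID) pairs, and the 0x88A8 outer TPID assumes the later IEEE 802.1ad convention (early Q-in-Q deployments often reused 0x8100):

```python
# Each tag is a (tpid, vlan_id) pair; a frame's tag stack is a list with
# the outermost tag first.
def push_provider_tag(tag_stack, sp_vlan, tpid=0x88A8):
    """Ingress edge: push the service-provider tag ahead of the customer tag."""
    return [(tpid, sp_vlan)] + tag_stack

def pop_provider_tag(tag_stack):
    """Egress edge: strip the outer tag, restoring the original frame."""
    return tag_stack[1:]

customer_frame = [(0x8100, 42)]                    # end-user 802.1Q tag, VLAN 42
in_provider = push_provider_tag(customer_frame, 3001)
handed_off = pop_provider_tag(in_provider)
```

Because the customer tag rides untouched beneath the provider tag, the enterprise's own VLAN numbering (VLAN 42 here) never collides with, and is never visible to, the provider's VLAN space.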

Although Q-in-Q provides a solid solution for smaller networks, its VLAN ID limitations and reliance on the IEEE 802.1d spanning-tree algorithm make it difficult to scale to meet the demands of larger networks. Therefore, other innovations, such as Ethernet over MPLS (EoMPLS), must be introduced. As the name implies, EoMPLS encapsulates the Ethernet frames into an MPLS label switch path, which allows a Multiprotocol Label Switching (MPLS) core to provide transport of native Ethernet frames.

Several other important concepts related to Ethernet over SONET must be mentioned.

Virtual Concatenation (VCAT)

As synchronous transport signals and virtual containers (STSs/VCs) are provisioned, gaps can form in the overall flows. This is similar to a fragmented disk on a personal computer. However, unlike computer memory managers, TDM blocks of contiguous payload cannot be cut into fragments to fit into the unused TDM flow. For example, a concatenated STS-12c flow cannot be chopped up and mapped to 12 STS-1 flows. VCAT solves this shortfall by providing the capability to transmit and receive several noncontiguous STSs/VCs (fragments) as a single flow. This grouping of STSs/VCs is called a VCAT group (VCG).

VCAT drastically increases the utilization for Ethernet over TDM infrastructures. This enables carriers to accommodate more customers per metro area than without VCAT.

Carriers looking to reduce capital expenditures, while meeting the demands of data traffic growth and new service offerings, need to extract maximum value from their existing networks. Emerging mapper or framer technologies, such as VCAT and link capacity-adjustment scheme (LCAS), enable carriers to upgrade their existing SONET networks with minimal investment. These emerging technologies will help increase the bottom line of carriers by enabling new services through more rapid provisioning, increased scalability, and much higher bandwidth utilization when transporting Ethernet over SONET and packet over SONET data.

VCAT significantly improves the efficiency of data transport, along with the scalability of legacy SONET networks, by grouping the synchronous payload envelopes (SPEs) of SONET frames in a nonconsecutive manner to create VCAT groups. Traditionally, payload capacity was available only in contiguous concatenated groups of specific sizes. SPEs that belong to a virtual concatenated group are called members of that group. This VCAT method allows finer granularity for the provisioning of bandwidth services and is an extension of an existing concatenation method, contiguous concatenation, in which groups are presented in a consecutive manner and with gross granularity.

Different granularities of virtual concatenated groups are required for different parts of the network, such as the core or edge. VCAT applies to low-order (VT-1.5) and high-order (STS-1) paths. Low-order virtual concatenated groups are suitable at the edge, and the high-order VCAT groups are suitable for the core of the MAN.

VCAT allows for the efficient transport of GigE. Traditionally, GigE is transported over SONET networks using the nearest contiguous concatenation group size available, an OC-48c (2.488 Gbps), wasting approximately 60 percent of the connection's bandwidth. Some proprietary methods exist for mapping Ethernet over SONET, but they, too, are inefficient. With VCAT, 21 STS-1s of an OC-48 can be assigned for transporting one GigE. The remaining 27 STS-1s are still free to be assigned either to another GigE or to any other data client signal, such as ESCON, FICON, or FC.
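The utilization figures above can be checked with simple arithmetic, using the standard SONET rates:

```python
STS1_MBPS = 51.84        # SONET STS-1 line rate
GIGE_MBPS = 1000.0

# Contiguous concatenation: GigE forced into the next-larger pipe, OC-48c
oc48c_mbps = 48 * STS1_MBPS                      # ~2488.32 Mbps
wasted_contiguous = 1 - GIGE_MBPS / oc48c_mbps   # ~0.60 (60 percent wasted)

# VCAT: an STS-1-21v group sized close to the client rate
vcat_mbps = 21 * STS1_MBPS                       # ~1088.64 Mbps
wasted_vcat = 1 - GIGE_MBPS / vcat_mbps          # ~0.08 (8 percent wasted)

remaining_sts1 = 48 - 21                         # 27 STS-1s left for other clients
```

Twenty-one STS-1s is the smallest multiple that clears 1 Gbps (20 × 51.84 = 1036.8 Mbps would also fit the nominal rate, but 21 members is the grouping cited in the text), and the 27 leftover STS-1s remain assignable to other services.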

VCAT improves bandwidth efficiency by more than 100 percent when transporting clients such as GigE using standard mapping, or by around 25 percent when compared to proprietary mapping mechanisms (for example, GigE over OC-24c). This suggests that carriers could significantly improve their existing networks' capacity by using VCAT. Furthermore, carriers gain scalability by increasing the use of the network in smaller incremental steps. In addition, the signals created by VCAT framers are still completely SONET, so a carrier needs to merely upgrade line cards at the access points of the network, not the elements in the core.

Whereas VCAT provides the capability to "right-size" SONET channels, LCAS increases the flexibility of VCAT by allowing dynamic reconfiguration of VCAT groups. Together the technologies allow for much more efficient use of existing infrastructure, giving service providers the capability to introduce new services with minimal investment.

LCAS

LCAS allows carriers to move away from the sluggish and inefficient provisioning process of traditional SONET networks and offers a means to incrementally enlarge or reduce the size of a SONET data circuit without impacting the transported data. LCAS uses a request/acknowledge mechanism that allows for the addition or deletion of STS-1s without affecting traffic. The LCAS protocol works unidirectionally, enabling carriers to provide asymmetric bandwidth. Thus, provisioning more bandwidth over a SONET link by using LCAS to add or remove members (STS-1s) of a VCAT group is simple and hitless, without even a 50-ms service interruption.
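A toy model of hitless member addition and removal follows; it is illustrative only, since real LCAS negotiates each change through control-packet handshakes between the endpoints:

```python
class VcatGroup:
    """Toy model of an LCAS-managed VCAT group: capacity changes member by
    member while traffic continues on the remaining members."""
    def __init__(self, members):
        self.members = list(members)          # active STS-1 members

    def capacity_mbps(self):
        return len(self.members) * 51.84      # STS-1 rate

    def add_member(self, sts):
        # LCAS: the new member signals an add request, the far end
        # acknowledges, and the member joins the group; existing members
        # keep carrying traffic throughout.
        self.members.append(sts)

    def remove_member(self, sts):
        # LCAS: the member is signaled idle before being withdrawn, so
        # the remaining members are unaffected.
        self.members.remove(sts)

group = VcatGroup([0, 1, 2])                  # STS-1-3v, ~155.5 Mbps
group.add_member(3)                           # grow to STS-1-4v
```

The point of the model is the granularity: bandwidth moves in 51.84-Mbps steps, not in the coarse jumps of contiguous concatenation.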

The LCAS protocol uses the H4 control packet, which consists of the H4 byte of a 16-frame multiframe. The H4 control packet contains information about each member's sequence (sequence indicator, SQ) and alignment (multiframe indicator, MFI) within a virtual concatenated group.

LCAS operates at the endpoints of the connection only, so it does not need to be implemented at the nodes where connections cross or in trunk line cards. This allows carriers to deploy LCAS in a simple manner, by installing new tributary cards. Likewise, they can scale LCAS implementations by adding more tributary cards without requiring hardware upgrades to add/drop multiplexers, for example, throughout the entire network.

One of the greatest benefits of LCAS for carriers is the capability to "reuse" bandwidth to generate more revenue and offer enhanced services that allow higher bandwidth transmission when needed. This, along with the potential extra revenue stream it enables, will be a key reason for carriers to implement next-generation SONET gear.

Generic Framing Procedure

Generic Framing Procedure (GFP) defines both a standard encapsulation of L2/L3 protocol data unit (PDU) client signals (GFP-F) and a mapping of block-coded client signals (GFP-T). In addition, it performs multiplexing of multiple client signals into a single payload, even when they are not the same protocol. This allows MSPP users to use their TDM paths as one large pipe, in which all the protocols can take advantage of unused bandwidth. In the past, each protocol had to ride over, and had burst rates limited to, a small portion of the overall line rate, not the total line rate. Furthermore, the overbooking of large pipes is not only possible, but also manageable because GFP enables you to set traffic priority and discard eligibility.

GFP comprises common functions and payload-specific functions. Common functions are those shared by all payloads; payload-specific functions differ depending on the payload type. These are the two payload modes:

  • Transparent mode: Uses block code-oriented adaptation to transport constant bit rate traffic and low-latency traffic

  • Frame mode: Transports PDU payloads, including Ethernet and PPP

GFP is a complete mapping protocol that can be used to map data packets as well as SAN block traffic. These are not just two sets of protocols; they are two different market segments. Deploying GFP will further a provider's capability to leverage the existing infrastructure.
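As a sketch of the framing idea, the GFP core header carries a 16-bit payload length indicator (PLI) protected by a CRC-16 core HEC (cHEC). The example below builds only that 4-byte core header; it omits the core-header scrambling and the payload headers that the full procedure adds:

```python
def crc16(data, poly=0x1021):
    """Bitwise CRC-16 (generator x^16 + x^12 + x^5 + 1), initial value 0."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF
    return crc

def gfp_core_header(payload):
    """Build PLI + cHEC for a GFP frame carrying `payload`."""
    pli = len(payload).to_bytes(2, "big")
    chec = crc16(pli).to_bytes(2, "big")
    return pli + chec

header = gfp_core_header(b"\x00" * 100)   # 4-byte core header, 100-byte payload
```

The cHEC lets a receiver delineate frames by hunting for 4-byte windows whose CRC checks out, which is why running the CRC over the complete header yields zero.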

QoS

A key feature of multilayer switched Ethernet is QoS. QoS is a means of prioritizing traffic based on its class, thereby allowing latency-sensitive data to take priority over non-latency-sensitive data (as in voice traffic over e-mail traffic), as shown in Figures 3-31 and 3-32.
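A minimal sketch of class-based strict-priority queuing, the simplest form of the prioritization described above (the class numbers and frame names are illustrative):

```python
import heapq

class StrictPriorityScheduler:
    """Strict-priority dequeue: a lower class number means higher priority,
    so latency-sensitive traffic (voice) always leaves before bulk traffic."""
    def __init__(self):
        self._heap = []
        self._seq = 0            # tie-breaker preserves FIFO order per class

    def enqueue(self, priority, frame):
        heapq.heappush(self._heap, (priority, self._seq, frame))
        self._seq += 1

    def dequeue(self):
        return heapq.heappop(self._heap)[2]

sched = StrictPriorityScheduler()
sched.enqueue(3, "email-1")      # low-priority class
sched.enqueue(0, "voice-1")      # high-priority class
sched.enqueue(3, "email-2")
```

Production schedulers typically add weighted sharing and policing on top of this so that low-priority classes cannot be starved, but the strict-priority core is what lets voice jump the e-mail queue.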

Figure 3-31. QoS Flow Process


Figure 3-32. QoS Process Showing an Ethernet Frame Flow Around a Resilient Packet Ring





Building Multiservice Transport Networks
ISBN: 1587052202
Year: 2004
Pages: 140
