Guaranteed Bandwidth


DiffServ-aware TE (DS-TE) was developed (see RFCs 3564 and 4124) to allow MPLS-TE to be traffic-class aware. DS-TE supports the concepts of preemption and explicit overbooking.

The terminology of RFC 4124 refers to Class-Types (CT). A CT is defined as the set of aggregated traffic flows belonging to one or more classes that are governed by the same bandwidth constraints (BC). Link bandwidth is allocated on a CT basis. BC is a generalized reference to the unit of bandwidth allocation, which can be percentage of link speed, absolute bit rate, latency requirements, percentage of free bandwidth, and so on. DS-TE implementations must be able to enforce the BCs for all the different CTs used on the network.

The relationship of different CTs to each other and to overall available bandwidth is defined in a bandwidth-constraint model. At least two models are currently defined; the normative RFCs, RFC 4125 and RFC 4127, provide formal definitions. We explain them using examples borrowed from those RFCs:

  • Maximum allocation model (MAM) Bandwidth is segmented into separate pools. Each CT is allocated its own pool of bandwidth. Figure B-7 shows the MAM model.

    Figure B-7. Maximum Allocation Model (from RFC 4125)

    For example, on a link of 100 units of bandwidth where 3 CTs are used, the network administrator might then configure BC0 = 20, BC1 = 50, BC2 = 30 such that

    - All LSPs supporting traffic trunks from CT2 use no more than 30 (for instance, voice <= 30).

    - All LSPs supporting traffic trunks from CT1 use no more than 50 (for instance, premium data <= 50).

    - All LSPs supporting traffic trunks from CT0 use no more than 20 (for instance, best effort <= 20).
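The MAM bookkeeping above can be sketched in a few lines of Python. This is a hypothetical admission-control check written for illustration (the class and method names are invented here, not any vendor's implementation): each CT's reservations are tracked against its own, independent bandwidth constraint.

```python
# Hypothetical sketch of MAM admission control (illustrative only).
# Each Class-Type (CT) draws bandwidth solely from its own constraint (BC).

class MamLink:
    def __init__(self, constraints):
        # constraints[ct] = BC for that CT, e.g. {0: 20, 1: 50, 2: 30}
        self.constraints = constraints
        self.reserved = {ct: 0 for ct in constraints}

    def admit(self, ct, bw):
        """Admit an LSP of class `ct` needing `bw` units if its own pool allows."""
        if self.reserved[ct] + bw <= self.constraints[ct]:
            self.reserved[ct] += bw
            return True
        return False

link = MamLink({0: 20, 1: 50, 2: 30})  # the 100-unit link from the example
print(link.admit(2, 25))  # voice: 25 <= BC2 = 30 -> True
print(link.admit(2, 10))  # would push CT2 to 35 > 30 -> False
print(link.admit(0, 10))  # best effort draws on its own pool -> True
```

Note how the second CT2 request is rejected even though the link still has unused capacity in the CT0 and CT1 pools; this is the class isolation (and the potential waste) discussed below.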

  • Russian dolls model (RDM) Bandwidth is allocated from ever-increasing pools, where each successive pool includes all the previous ones. Figure B-8 shows RDM graphically, which makes the nesting easier to grasp.

    Figure B-8. Russian Dolls Model (from RFC 4127)

    For illustration purposes, on a link of 100 units of bandwidth where 3 CTs are used, the network administrator might then configure BC0 = 100, BC1 = 80, BC2 = 60 such that

    - All LSPs supporting traffic trunks from CT2 use no more than 60 (for instance, voice <= 60).

    - All LSPs supporting traffic trunks from CT1 or CT2 use no more than 80 (for instance, voice + premium data <= 80).

    - All LSPs supporting traffic trunks from CT0 or CT1 or CT2 use no more than 100 (for instance, voice + premium data + best effort <= 100).
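The nested constraints above can also be sketched in Python (again a hypothetical illustration, not a vendor implementation): each BCc bounds the total reserved bandwidth of CT c and every higher CT, so the pools nest like Russian dolls.

```python
# Hypothetical sketch of RDM admission control (illustrative only).
# BCc bounds the combined reservations of CT c and all higher CTs,
# so the pools nest: BC2 <= BC1 <= BC0.

class RdmLink:
    def __init__(self, constraints):
        # constraints[c] = BCc, e.g. {0: 100, 1: 80, 2: 60}
        self.constraints = constraints
        self.reserved = {ct: 0 for ct in constraints}

    def admit(self, ct, bw):
        """Admit an LSP of class `ct` only if every nested constraint still holds."""
        for c, bc in self.constraints.items():
            # BCc applies to the sum over CTs >= c (the "inner dolls")
            total = sum(r for k, r in self.reserved.items() if k >= c)
            if ct >= c and total + bw > bc:
                return False
        self.reserved[ct] += bw
        return True

link = RdmLink({0: 100, 1: 80, 2: 60})  # the 100-unit link from the example
print(link.admit(2, 60))  # voice fills BC2 exactly -> True
print(link.admit(1, 20))  # voice + premium data = 80 <= BC1 -> True
print(link.admit(1, 10))  # would make CT1 + CT2 = 90 > BC1 -> False
print(link.admit(0, 20))  # grand total = 100 <= BC0 -> True
```

Unlike MAM, bandwidth left unused by a higher CT remains available to lower CTs, which is why RDM is less wasteful but needs preemption to keep the guarantees when higher-priority traffic arrives later.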

Each model has its own advantages. MAM, for example, provides class isolation: The bandwidth of any given class is independent of every other class, and no preemption is required to guarantee it. However, MAM can be wasteful, because bandwidth allocated to CT1 cannot be given to CT0 when there is no CT1 traffic. RDM, on the other hand, avoids this waste because CTs share bandwidth. However, RDM requires preemption for the bandwidth constraints to be guaranteed.

DS-TE is implemented with IGP extensions that advertise the per-CT bandwidth available on a link. The TE constrained-routing calculation is run on a per-CT basis, and RSVP extensions allow reservation requests to also be made per CT.

DS-TE, just like regular TE, is a control-plane reservation mechanism. You still need to use queuing and discard mechanisms to enforce the traffic classes on the data plane. To briefly review the implementation steps, consider a network requiring two classes, voice and data, where the voice class should be limited to 40 percent of the overall bandwidth but its QoS must be guaranteed. All other bandwidth is available for data.

  • In DS-TE terms, there are two CTs: voice and data. Each CT should use its own DSCP value (for example, voice can be mapped to Expedited Forwarding [EF] and data to the Assured Forwarding [AF] classes).

  • Use an appropriate queuing and discard scheme for each CT (for example, priority queuing for voice and class-based weighted fair queuing [CBWFQ] for data). Every interface on every hop used by the TE tunnels must also be configured with bandwidth pool values that divide the bandwidth in a 60/40 ratio.
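As a back-of-the-envelope illustration of that last step, the per-interface pool sizes for a 60/40 split follow directly from each link speed. The interface names and function below are purely illustrative; the actual pool-configuration syntax is platform specific.

```python
# Illustrative arithmetic for the 60/40 data/voice split (platform syntax varies).
# For each interface, the voice sub-pool gets 40% of the link and data the rest.

def pool_sizes(link_kbps, voice_share=0.40):
    """Return (voice_pool, data_pool) in kbps for one interface."""
    voice = int(link_kbps * voice_share)
    return voice, link_kbps - voice

# Hypothetical interfaces along the tunnel path
for name, speed in [("GigabitEthernet0/0", 1_000_000), ("Serial0/1", 1_544)]:
    voice, data = pool_sizes(speed)
    print(f"{name}: voice pool {voice} kbps, data pool {data} kbps")
```

Every hop must carve its pools this way; the control plane then admits per-CT reservations against these values, while the queuing configuration enforces the split on the data plane.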




Network Virtualization
ISBN: 1587052482
Year: 2006
Pages: 128
