Chapter 4, "A Virtualization Technologies Primer: Theory," broke down virtualization into link and device components. The preceding explanations discussed how link virtualization QoS can be provided, at least for networks using MPLS. However, there is some bad news at the device level: there is no concept of a virtual QoS mechanism on enterprise routers or switches. In our opinion, this work will have to happen; until then, the only option is to use per-VPN policies, hierarchical if needed, as discussed in the rest of this chapter.

One Policy per Group

A VN introduces an additional layer of hierarchy in the form of groups or segments, which gather together users or applications that have the same policy requirements. Designing QoS into a VN involves identifying whether each segment has a single QoS policy or multiple QoS policies. We look at both cases.

The network shown in Figure 10-1 uses Cisco QoS design recommendations and needs to introduce virtual segments for two user groups. The requirement is to maintain the same policy after the transition.

Figure 10-1. VN QoS: Nonhierarchical

Two major parts of the network must be discussed: the LAN, or campus, with the traditional three-tier design (access, distribution, and core layers), and the WAN connections to remote sites. The general rules in either case are as follows:
In the LAN, end hosts are deemed trusted or untrusted. In the first case, DSCP settings are maintained at the access layer. In the second, the switch sets QoS values (for example, EF for voice, AF31 for data, and so forth). As any good text on QoS will discuss in detail, you must consider many implementation dependencies, such as the number of egress queues and policer granularity. A low-cost Layer 2 switch will, at best, be able to set 802.1p values, so the distribution layer would need to copy these settings to the IP layer. Although it is recommended to police untrusted traffic as close to the network edge as possible, congestion is uncommon in a LAN environment because of the high interface speeds. If it arises, it is often cheaper to add more bandwidth than to fiddle with queues (you should still deploy queues to support VoIP).

The WAN is more complex because speed mismatches occur between connections, between sites, and with the LAN itself. Furthermore, if the enterprise is using a commercial IP VPN service, fewer classes will be available for intersite traffic, so traffic must be re-marked with different DSCP values. Also, WAN routers are typically confronted with congestion, so you need to use queuing and discard mechanisms and, possibly, shaping.

Note

It can be useful to understand the models used when traffic is carried by a service provider between sites. Two nonexclusive models are as follows:
We will consider two alternative designs: hop-to-hop (h2h) tunnels and MPLS VPN.
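To make the edge marking described earlier concrete (EF for voice, AF31 for data at the trust boundary), a classification and marking policy along these lines might be used. This is a minimal sketch in Cisco MQC syntax; the class names, the ACL name, and the use of NBAR for voice classification are illustrative assumptions, and the exact commands available vary by platform and software release.

```
! Hypothetical inbound marking policy for an untrusted access port.
class-map match-any VOICE
 match protocol rtp audio              ! assumes NBAR support; an ACL works too
class-map match-any TRANSACTIONAL-DATA
 match access-group name DATA-APPS     ! DATA-APPS is a hypothetical ACL
!
policy-map EDGE-MARKING
 class VOICE
  set dscp ef                          ! voice marked EF
 class TRANSACTIONAL-DATA
  set dscp af31                        ! data marked AF31
 class class-default
  set dscp default                     ! everything else is best effort
!
interface GigabitEthernet0/1
 service-policy input EDGE-MARKING
```

On a low-cost Layer 2 switch that can only set 802.1p, the equivalent marking would happen at Layer 2, with the distribution layer copying the CoS value into the DSCP field as discussed above.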
Note how we mapped traffic in a network segment to a DiffServ class and enforced QoS using standard mechanisms and design rules. All the link virtualization protocols already have the hooks needed to set DSCP (or 802.1p) values on incoming or outgoing traffic. This solves many, but not all, problems. A user in a particular segment may be infected with a virus and inadvertently send a flood of Internet Control Message Protocol (ICMP) messages to switches. The switch could conceivably become so busy processing ICMP packets that it could no longer guarantee the service level required by other VNs. This naive example (which ignores network admission control [NAC], control-plane policing [CoPP], and the scavenger class, all available to protect switches from this scenario) illustrates how traffic in one VN could adversely affect another. Logical routers (covered in Chapter 4) would not have this issue because dedicated hardware resources could be allocated to different VNs.

Why not use MPLS TE or DS-TE? As discussed previously, MPLS TE addresses link protection, link congestion, or load balancing. None of these help here. Although excess traffic on a segment may result in link congestion, TE is not the appropriate tool. DS-TE, on the other hand, allows class-based bandwidth allocation and could prevent the virus-related traffic in one VN from starving other VNs. An alternative solution is to use network security mechanisms and aggressive packet discard at the edge to detect and dispose of excess traffic on any VN.

Multiple Policies per Group: Hierarchical QoS

The simple case described in the previous section might not be enough for certain situations where, for example, separate VNs exist for relatively large groups, each of which wants to run VoIP but also wants to maintain traffic separation for all its applications. Figure 10-3 shows a sample network in which a single IT department creates VNs for two different engineering groups after a merger.
The IT department does not control access switch ports, which are shared between groups, albeit with traffic in different VLANs.

Figure 10-3. VN QoS: Hierarchical

In this scenario, policy guarantees must be provided to different application flows within aggregate flows of user traffic, which requires hierarchical QoS mechanisms. The term hierarchical refers to the use of nested policy mechanisms. Hierarchical QoS allows for a parent group policy and child policies that belong to that group. For example, a department gets x Mbps worth of bandwidth (the parent policy), but within that allocation there are different (child) classes of service for different applications. When confronted with hierarchical QoS requirements, the recommended approach is to enforce per-VN differentiation at the edges of the network. Despite the more-complex policies, the usual design rules still apply when using hierarchy (for example, the combined link bandwidth allocated to voice should still remain under 33 percent, and 25 percent of link bandwidth should remain unreserved).

Note

The 25 percent rule is a basic design recommendation. Only 75 percent of the total bandwidth of any link should be allocated to major applications, leaving the rest for control traffic, link-layer encapsulation overhead, and unpredictable bursts of traffic.

Returning to our example, regardless of which group it runs in, voice still needs to be marked as EF so that it is correctly queued in case of congestion in the core (here you have proof of the fundamentally democratic nature of networking: all voice is considered equal). As a consequence of this design approach, the network administrator is responsible for making sure that each VN, which may use different paths or devices, can support low-latency queuing (LLQ) and, on slow WAN links, packet-interleave techniques such as Link Fragmentation and Interleaving (LFI), Compressed Real-Time Transport Protocol (cRTP), and so on.
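The parent/child structure just described (a department-wide aggregate with per-application child classes, including LLQ for voice) might be sketched in Cisco MQC syntax as follows. The 50-Mbps parent rate, class names, and percentages are illustrative assumptions chosen to respect the 33 percent voice and 25 percent unreserved rules; syntax and supported actions vary by platform.

```
! Child policy: per-application classes within the department's allocation.
policy-map DEPT-CHILD
 class VOICE
  priority percent 30              ! strict-priority LLQ, under the 33% rule
 class TRANSACTIONAL-DATA
  bandwidth percent 25
 class class-default
  fair-queue                       ! remainder stays unreserved
!
! Parent policy: the department's aggregate (the "x Mbps").
policy-map DEPT-PARENT
 class class-default
  shape average 50000000           ! 50 Mbps, illustrative
  service-policy DEPT-CHILD        ! nest the child policy
!
interface GigabitEthernet0/2
 service-policy output DEPT-PARENT
```

The parent shaper creates artificial congestion at the aggregate rate, which is what allows the child classes, including the LLQ voice class, to engage even on a fast physical link.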
Our scenario requires that the bandwidth of one of the engineering departments be limited to 100 Mbps (obviously the acquired company). Security policy requires that no single user exceed 10 Mbps. This policy should be enforced at the edge of the network in Figure 10-3 using hierarchical policing. The distribution switches would police the aggregate traffic in VLAN 100 to 100 Mbps, with a child policy that further limits any traffic flow within VLAN 100 to less than 10 Mbps. The switches can still maintain a single set of queues for all traffic to guarantee priority for voice (policing is hierarchical, but queuing is not).

You might want to deploy hierarchical queuing on a tunnel interface. Remember that tunnel interfaces have no underlying hardware, so they never congest and, therefore, never queue traffic. If your network design needs a different behavior, you can use hierarchical policing and classes on both the tunnel and underlying physical interfaces. The reference section in Appendix C lists a document for further reading. As usual, different products have different capabilities in this regard. Figure 10-4 shows one example: the ME3750 (from the 3750 Metro Ethernet white paper at Cisco.com).

Figure 10-4. Hierarchical Queuing Scheme on the Catalyst ME3750

Hardware support for hierarchical queuing, typically found in higher-end equipment, is costly, so be sure that a genuine requirement exists before specifying it.

Note

We owe a debt of gratitude to K. P. Mishra of Cisco Systems for his expert explanations of hierarchical service policies.
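To close the chapter with a concrete example, the hierarchical policing described for the VLAN 100 scenario might be sketched as follows. This is only an illustration: the per-user class and ACL names are hypothetical, and matching individual users by ACL scales poorly, so on platforms that support it (for example, microflow policing on some Catalyst hardware) a per-flow policer would be the more practical way to enforce the 10-Mbps per-user limit.

```
! Hypothetical per-user class; one ACL per user does not scale well.
class-map match-any USER-A
 match access-group name ACL-USER-A    ! ACL-USER-A is illustrative
!
! Child policy: no single user exceeds 10 Mbps.
policy-map PER-USER-CHILD
 class USER-A
  police 10000000 conform-action transmit exceed-action drop
!
! Parent policy: the VLAN 100 aggregate is policed to 100 Mbps.
policy-map VLAN100-AGGREGATE
 class class-default
  police 100000000 conform-action transmit exceed-action drop
  service-policy PER-USER-CHILD        ! nested (hierarchical) policing
!
interface Vlan100
 service-policy input VLAN100-AGGREGATE
```

Note that only the policing is nested here; as stated above, the switch can keep a single set of queues for all traffic, so voice priority is preserved across VNs.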