Having looked at different options for authenticating clients, we now pull the various pieces together with a design example. The network shown in Figure 11-11 is migrating its core network to Layer 3 virtualization, using a combination of generic routing encapsulation (GRE) and Multiprotocol Label Switching (MPLS) tunnels. The distribution switches function as provider edges (PEs) and map incoming VLAN traffic to the appropriate VRFs. Security is centralized in the data center, where traffic sent to and from the Internet and internal servers is cleansed.

Figure 11-11. Virtualized Access Layer

Because of a spate of security failures, the decision has been made to secure network access for all users and to enforce complete separation between employees and contractors, who need access only to lab networks. Thus, there are two different groups allowed on the network:
Let's look at how to satisfy each of the design requirements in turn. The first decision is to use 802.1x for both network access security and dynamic group assignment. When available, Layer 2 authentication is easier to use than Layer 3: it is well instrumented, secure, and transparent to almost all other features needed in the wiring closet. Employee authorization is implemented with user profiles on a central RADIUS server. The right choice of authentication method is beyond the scope of our discussion here, but assume the use of EAP-TTLS. Every client, therefore, must have an 802.1x supplicant that supports this method. Once a client is authenticated, we have a choice of how to configure VLANs. The first option is to assign VLANs to access ports dynamically from RADIUS. The access switch ports are configured as shown previously, with the aaa authorization network default group radius command enabling dynamic VLAN allocation. The trunks between access and distribution carry dot1q traffic, and all employee VLANs are mapped to a VRF on the distribution layer using separate routed logical interfaces for each physical trunk connection, so that all traffic between closets is routed, not switched. A consequence of using AAA to authorize user VLAN access is that it becomes cumbersome to allocate different VLAN names for every wiring closet, because that involves using a different profile for any given user based on where the user is sitting. It is possible to do this, because the Access-Request carries the IP address of the switch, and RADIUS servers allow scripts to be triggered to complete or change profiles dynamically. There is another way, however, because all VLANs are terminated at the PE on a Layer 3 interface and all inter-wiring-closet traffic is routed (so the IP addressing plan has to support this; more on that later).
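As a sketch, the 802.1x and dynamic-VLAN configuration on the access switch might look like the following (the RADIUS server address, key, and interface numbers are illustrative placeholders, and exact syntax varies by IOS release):

```
! Global AAA and 802.1x settings
aaa new-model
aaa authentication dot1x default group radius
aaa authorization network default group radius
radius-server host 10.200.1.10 key example-key
dot1x system-auth-control
!
! Access port: the VLAN is assigned by the RADIUS server at authentication time
interface FastEthernet0/1
 switchport mode access
 dot1x port-control auto
```

For the dynamic VLAN assignment to work, the user profile on the RADIUS server returns the Tunnel-Type (VLAN), Tunnel-Medium-Type (802), and Tunnel-Private-Group-ID attributes defined for this purpose in RFC 3580.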
As long as VLAN identifiers have local significance on the PE (in other words, the same VLAN ID can be used on two different interfaces), we can use the same VLAN name/ID for all employees, and the wiring closets remain separate broadcast domains in accordance with best-practice LAN design. Local VLAN significance is not universally available, however. Therefore, an alternative design is to use static configuration on the access switches so that each wiring closet uses a different VLAN ID (which is probably how they are set up already, so this approach eases migration). The PE terminates all employee VLANs into the same VRF. Remember that we can override the per-port configuration with 802.1x for dynamic, guest, and auth-fail VLANs, so we preserve the ability to have dynamic VLAN allocation should the need arise. Employee DHCP requests are forwarded by the relay-agent function on the distribution-layer switches to a DHCP server in the data center. The relay agent sets DHCP option 82 to communicate the physical port information to the DHCP server. The server allocates addresses from a different IP subnet for each wiring closet by using the relay-agent information to select the right scope. In this way, we can guarantee Layer 3 routing between users connected to different wiring closets, no matter how the VLANs are named. Employee peer-to-peer traffic is switched as early as possible: either at Layer 2 in the wiring closet, for hosts on the same physical switch, or at Layer 3 in the distribution. The VRF route-target and routing configuration on the distribution switches forces all Internet- or server-bound packets through the data center's security center. See Chapter 6, "Infrastructure Segmentation Architectures: Practice," for information about how to configure the PE to do this. Figure 11-12 shows the inter-wiring-closet traffic patterns for the Employee group. Figure 11-12.
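A minimal sketch of the relay-agent configuration on the distribution-layer PE might look like this (the VRF name, VLAN, and addresses are illustrative):

```
! Insert option 82 (relay-agent information) into relayed DHCP requests
ip dhcp relay information option
!
ip vrf EMPLOYEE
 rd 100:10
!
! Routed interface terminating an employee VLAN into the VRF
interface Vlan10
 ip vrf forwarding EMPLOYEE
 ip address 10.10.1.1 255.255.255.0
 ip helper-address 10.200.1.20          ! data center DHCP server
```

The DHCP server then selects the per-closet scope based on the option 82 information carried in the relayed request.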
Permitted Traffic Flows for the Employee Group

The second design requirement, guest access, is implemented through a combination of two methods. First, ports in public-access areas are locked down using port security to restrict access to the guest VLAN, no matter who connects to them. Second, on all other ports, a guest user group is configured using the per-interface command dot1x guest-vlan vlan-id. The guest VLAN is again terminated on routed interfaces into a guest VRF at the distribution-layer PE. Unlike the Employee group, the guest VLAN is a pure hub-and-spoke topology: all traffic must go to a data center PE. For this reason, a point-to-point tunnel, such as GRE, is a logical choice of transport protocol. See Chapter 4, "A Virtualization Technologies Primer: Theory," for more information about how to use route targets to do this in MPLS. The data center PE forces all traffic through a virtual firewall context (also discussed in Chapter 4) and then on to the Internet. The firewall rules prevent all traffic from being routed back into the corporate network, including to the VPN aggregator. This satisfies the requirement of preventing employees from sidestepping 802.1x by building a VPN connection over the guest network and connecting back to the secure enterprise network (and, yes, people do this). Figure 11-13 shows the allowed traffic flow for the Guest group. Obviously, configuration is required on the firewall, the data center PE, and the distribution-layer PEs to do this correctly.

Figure 11-13. Permitted Layer 3 Traffic Flows for the Guest Group

Guest VLAN members need a DHCP infrastructure. Because the core network is virtualized, there is no requirement to use different IP addresses from the employee network.

Note: In general, each virtual network needs its own policy infrastructure servers (DHCP, AAA, and so on), as this example shows. In other words, the network services must also be virtualized. No surprise there.
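The two guest-port treatments described above might be configured as follows (VLAN 30 and the interface numbers are illustrative):

```
! Public-access area: statically locked to the guest VLAN
interface FastEthernet0/10
 switchport mode access
 switchport access vlan 30
 switchport port-security
!
! All other ports: fall back to the guest VLAN if no supplicant answers
interface FastEthernet0/11
 switchport mode access
 dot1x port-control auto
 dot1x guest-vlan 30
```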
There are, as usual, several options to implement this. You can deploy VRF-aware services that maintain address pools and profiles for separate VRFs and support overlapping IP addresses. For example, Cisco has an elaborate per-VRF AAA feature set, originally developed for service providers, that does just this for RADIUS. The other option is to deploy virtualized servers, with dedicated DHCP and RADIUS instances for each virtual network. Chapter 8 of this book reviewed how to connect virtual servers to VLANs and VRFs. As discussed earlier in the chapter, the access switch moves a port into the guest VLAN if it does not receive EAPOL-Start or EAP replies after a configurable interval. Because clients could time out their DHCP requests before the switch moves their port to the guest VLAN, they might need to renew their IP address request manually. Expect support calls from users whose host software self-allocates an address from the 169.254.0.0/16 subnet when it does not hear back from a server. Recall from the requirements that employees who fail to authenticate must also have Internet access. We have the choice between setting up a separate auth-fail VLAN, which is a valid approach, or extending the Guest group semantics to include the tried-but-failed users. Cisco IOS allows the second option, so that is what we deploy in this design, using the dot1x guest-vlan supplicant global command.
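A sketch of both pieces, assuming the per-VRF AAA feature set is available on the platform (the group name, server address, and key are illustrative, and syntax varies by release):

```
! Extend the guest VLAN to clients that attempt 802.1x but fail
dot1x guest-vlan supplicant
!
! VRF-aware RADIUS server group for the guest virtual network
aaa group server radius GUEST-RADIUS
 server-private 10.30.1.10 key example-key
 ip vrf forwarding GUEST
```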
The access layer of a virtualized network is atypical in that we have not introduced a new forwarding paradigm. As discussed at the start of the chapter, the Layer 2 access layer is already virtualized. The design examples concentrated on using 802.1x to map policy and group definitions to VLANs, which are mapped to VRF structures at the distribution. Note how, because there is nothing "new" in the forwarding path, there is no need to use VRF-aware features. For the sake of completeness, the following is a quick review of the other major access layer features required for our design:
Layer 3 Access

We have spent little time discussing Layer 3 wiring closet deployments, focusing instead on the more traditional Layer 2 approach. This is deliberate: we currently recommend staying with a Layer 2 access layer when deploying virtual networks. To understand why, consider the following reasons:
For all these reasons, Cisco recommends using a point-to-point tunnel between access and distribution when Layer 3 is required or already deployed in the wiring closet. To reiterate, we do not suggest using MPLS in the wiring closet. Figure 11-14 shows the topology. Each group on the access switch is terminated into a VRF, which has a dedicated GRE tunnel to a corresponding VRF on the distribution switch, which (then, and only then) label-switches packets to other PEs.

Figure 11-14. Layer 3 Access

In Chapter 5, "Infrastructure Segmentation Architectures: Theory," we discussed hop-to-hop architectures. The Layer 3 access network in Figure 11-14 uses a Layer 3 hop-to-hop architecture (instead of the more traditional Layer 2 version, VLANs) to transport packets to the RFC 2547 PE in the distribution layer. It is not mandatory to terminate all the tunnels at the distribution. You can route GRE across the core to, say, a remediation server in the data center for network access control (NAC). Many such combinations are possible.
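On the access switch, each group's VRF and its dedicated GRE tunnel toward the distribution PE might be sketched as follows (the names and addresses are illustrative; the tunnel source and destination endpoints live in the global routing table):

```
ip vrf GUEST
 rd 100:30
!
! Per-group GRE tunnel; traffic inside the tunnel belongs to the VRF
interface Tunnel30
 ip vrf forwarding GUEST
 ip address 192.168.30.1 255.255.255.252
 tunnel source Loopback0
 tunnel destination 10.0.0.2            ! distribution PE loopback
```

A matching tunnel interface in the corresponding VRF on the distribution switch completes the point-to-point link.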