Virtualizing the Access Layer


Having looked at different options for authenticating clients, we now attempt to pull together the various pieces with a design example.

The network shown in Figure 11-11 is migrating its core network to use Layer 3 virtualization with a combination of generic routing encapsulation (GRE) and Multiprotocol Label Switching (MPLS) tunnels. The distribution switches function as provider edges (PEs) and map incoming VLAN traffic to appropriate VRFs. Security is centralized in the data center, where traffic sent to and from the Internet and internal servers is cleansed.

Figure 11-11. Virtualized Access Layer


Because of a spate of security failures, the decision has been made to secure network access for all users and to enforce complete separation between employees and contractors, who need access only to lab networks.

Thus, there are two different groups allowed on the network:

  • Authenticated employees: Employees who are allowed unrestricted access to the network. In case of authentication failure, employees should be allowed Internet access to update their host software and attempt to connect again.

  • Non-employees (guests): Guests who are expected in shared spaces such as lobby areas, conference rooms, and cafeterias. The network design must also allow for guests who connect to ports in employee areas. Guests are allowed Internet access, but their traffic must go through the cleansing device in the data center shown in Figure 11-11. Employees are not allowed to access corporate resources by running a VPN connection from a guest port back into the corporate network (the data center houses a VPN aggregator, which is not shown, to allow remote-employee connection).

Let's look at how to satisfy each of the design requirements in turn. The first decision is to use 802.1x for both network access security and dynamic group assignment. When available, Layer 2 authentication is easier to use than Layer 3. Layer 2 is well instrumented, secure, and transparent to almost all other features needed in the wiring closet.

The employee authorization is implemented with user profiles in a central RADIUS server. The right choice of authentication method is beyond the scope of our discussion here, but assume the use of EAP-TTLS. Every client, therefore, must have an 802.1x supplicant that supports this method. Once a client authenticates, we have a choice of how to configure VLANs.

The first option is to bind access ports to VLANs using RADIUS allocation. The access switch ports are configured as shown previously with the aaa authorization network default group radius command for dynamic VLAN allocation. The trunks between access and distribution carry 802.1Q-tagged traffic, and all employee VLANs are mapped to a VRF on the distribution layer using separate, routed logical interfaces for each physical trunk connection so that all traffic between closets is routed, not switched.
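As a sketch, the access switch side of this option might look as follows. The RADIUS server address, key, and interface name are placeholders, and exact 802.1x command syntax varies across IOS releases:

```
aaa new-model
aaa authentication dot1x default group radius
aaa authorization network default group radius
radius-server host 10.200.1.10 key S3cr3t
dot1x system-auth-control
!
interface FastEthernet0/1
 switchport mode access
 dot1x port-control auto
```

For the dynamic VLAN assignment itself, the user's RADIUS profile carries the standard IETF tunnel attributes: Tunnel-Type (VLAN), Tunnel-Medium-Type (802), and Tunnel-Private-Group-ID set to the VLAN name or ID.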

A consequence of using AAA to authorize user VLAN access is that it becomes cumbersome to allocate different VLAN names for every wiring closet because that involves using a different profile for any given user, based on where the user is sitting. It is possible to do this because the Access-Request comes with the IP address of the switch, and RADIUS servers allow scripts to be triggered to complete or change profiles dynamically.

There is another way. All VLANs are terminated at the PE on a Layer 3 interface, and all inter-wiring-closet traffic is routed (so the IP addressing plan has to support this; more on that later). As long as VLAN identifiers have local significance on the PE (in other words, the same VLAN ID can be used on two different interfaces), we can use the same VLAN name/ID for all employees, while the wiring closets remain separate broadcast domains in accordance with best-practice LAN design.

Local VLAN significance is not universally available. Therefore, an alternative design is to use static configuration on the access switches so that each wiring closet uses a different VLAN ID (which is probably how they are set up already, so this approach eases migration). The PE terminates all employee VLANs into the same VRF. Remember that we can override the per-port configuration with 802.1x for dynamic, guest, and auth-fail VLANs, so we preserve the ability to have dynamic VLAN allocation should the need arise.

Employee DHCP requests are forwarded using the switch relay-agent function in the distribution layer to a DHCP server in the data center. The relay agent sets DHCP option 82 to communicate the physical port information to the DHCP server. The server allocates addresses from different IP subnets to each wiring closet by using the relay-agent information to select the right scope. In this way, we can guarantee Layer 3 routing between users connected to different wiring closets, no matter how the VLANs are named.
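A minimal sketch of the relay configuration on a distribution switch follows; the addresses and the VRF name are hypothetical:

```
ip dhcp relay information option
!
interface Vlan10
 description Employee VLAN, closet A
 ip vrf forwarding EMPLOYEE
 ip address 10.10.1.1 255.255.255.0
 ! DHCP server in the data center
 ip helper-address 10.200.1.5
```

The global ip dhcp relay information option command enables option 82 insertion, and the server uses the relay-agent information to select the per-closet scope.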

Employee peer-to-peer traffic is switched as early as possible, either at Layer 2 in the wiring closet for hosts on the same physical switch or Layer 3 in the distribution. The VRF route-target and routing configuration on the distribution switches forces all Internet- or server-bound packets through the data center's security center. See Chapter 6, "Infrastructure Segmentation Architectures: Practice," for information about how to configure the PE to do this. Figure 11-12 shows the inter wiring-closet traffic patterns for the Employee group.
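The route-target arrangement on a distribution PE can be sketched as follows. The route-target values are invented; the assumption is that the data center PE exports a default route (and server routes) with 65000:100:

```
ip vrf EMPLOYEE
 rd 65000:10
 route-target export 65000:10
 route-target import 65000:10
 route-target import 65000:100
```

Importing 65000:10 gives any-to-any employee reachability between wiring closets, while everything else follows the default route learned from the data center, which forces Internet- and server-bound traffic through the cleansing device.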

Figure 11-12. Permitted Traffic Flows for the Employee Group


The second design requirement, guest access, is implemented through a combination of two methods. First, ports in public-access areas are locked down using port security to restrict access to the guest VLAN no matter who connects to them.

Second, on all other ports, a guest user group is configured using the per-interface command dot1x guest-vlan vlan-id. The guest VLAN is again terminated on routed interfaces into a guest VRF at the distribution layer PE. Unlike the Employee group, the guest VLAN is a pure hub-and-spoke topology. All traffic must go to a data center PE. For this reason, a point-to-point tunnel, such as GRE, is a logical choice of transport protocol. See Chapter 4, "A Virtualization Technologies Primer: Theory," for more information about how to use route targets to do this in MPLS.
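The two methods combined on an access switch might look like this sketch; the VLAN ID and interface names are placeholders:

```
! Public-area port: statically locked to the guest VLAN
interface FastEthernet0/10
 switchport mode access
 switchport access vlan 900
 switchport port-security
 switchport port-security maximum 1
!
! Employee-area port: fall back to the guest VLAN
! if no supplicant answers
interface FastEthernet0/1
 switchport mode access
 dot1x port-control auto
 dot1x guest-vlan 900
```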

The data center PE forces all traffic through a virtual firewall context (also discussed in Chapter 4) and then on to the Internet. The firewall rules prevent all traffic from being routed back into the corporate network, including to the VPN aggregator. This satisfies the requirement of preventing employees from side-stepping 802.1x by building a VPN connection over the guest network and connecting back to the secure enterprise network (and, yes, people do this).

Figure 11-13 shows the allowed traffic flow for the Guest group. Obviously, there is configuration required on the firewall and data center PE, and on the distribution layer PEs, to do this correctly.

Figure 11-13. Permitted Layer 3 Traffic Flows for the Guest Group


Guest VLAN members need a DHCP infrastructure. Because the core network is virtualized, there is no requirement to use different IP addresses from the employee network.

Note

In general, each virtual network needs its own policy infrastructure servers (DHCP, AAA, and so on), as this example shows. In other words, the network services must also be virtualized. No surprise there. There are, as usual, several options to implement this. You can deploy VRF-aware services that maintain address pools and profiles for separate VRFs and support overlapping IP addresses. For example, Cisco has an elaborate per-VRF AAA feature set, originally developed for service providers, that does just this for RADIUS. The other option is to deploy virtualized servers with dedicated DHCP and RADIUS instances for each virtual network. Chapter 8 of this book reviewed how to connect virtual servers to VLANs and VRFs.


As discussed earlier in the chapter, the access switch moves a port into the guest VLAN if it does not receive EAPOL-Start or EAP replies after a configurable interval. Because clients could time out their DHCP requests before the switch moves their port to the guest VLAN, they might need to manually renew their request for an IP address. Expect support calls from users whose host software self-allocates the 169.254.0.0/16 subnet when they do not hear back from a server.

Recall from the requirements that employees who fail to authenticate must also have Internet access. We have the choice between setting up a separate auth-fail VLAN, which is a valid approach, or extending the Guest group semantics to include the tried-but-failed users. Cisco IOS allows the second option, so that is what we deploy in this design, using the dot1x guest-vlan supplicant global command.

Dynamic Groups

We have concentrated on static group definitions: employees or guests. In other words, the port-to-group binding can be dynamic, but group membership is static. Network reachability policy is similarly straightforward: authenticated users get everything; others get the Internet. Other deployment scenarios require more granular policy.

For example, on-site contractors in a restricted environment might be allowed to communicate with only a subset of the employee community. To implement this, we would like to set up a Virtual Network for this category of users. There are a couple of design options:

  • Layer 2: Dynamically bind users in this group to a contractor VLAN. The distribution layer maps the VLAN to an MPLS VPN to allow routed communication between peers, but not with any other Virtual Network, creating a closed user group. The obvious disadvantage is that employees find themselves quarantined from the rest of the network. To solve this issue, create alternate 802.1x user profiles. For example, user/password allows full access to the network. A second profile, called user.restricted/password, has a Tunnel-Private-Group-ID attribute that forces the port into the contractor VLAN. The employees just need to log in with the right username for this to work.

  • Layer 3: Create a separate VLAN and subnet for the contractors. On the PE, create an extranet VRF using MPLS route-targets that allows communication between the contractor subnet and one, or several, of the employee subnets. This approach makes certain assumptions about subnet allocation that will not apply to all cases. However, it is a good solution to allow both employees and contractors access to shared servers. For example, if the policy is that employees are allowed everywhere, but contractors have access only to their local LAN and a central server (say to upload reports), bind the contractor VLAN dynamically using RADIUS and map the VLAN to a VRF at the distribution as before. Then either download a per-user access list that restricts where contractors can go, or use route-targets on the PE to leak the route to the authorized server address and nowhere else. Employee configuration is unchanged.
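The route-target leaking in the second option can be sketched as follows; all values (route targets, server address, map name) are invented for illustration. The data center PE tags only the report server's route with an extra route target, which the contractor VRF imports:

```
ip vrf CONTRACTOR
 rd 65000:20
 route-target both 65000:20
 route-target import 65000:30
!
! On the data center PE: export only the
! report server's route with 65000:30
ip vrf EMPLOYEE
 rd 65000:10
 route-target both 65000:10
 export map LEAK-SERVER
!
access-list 10 permit 10.200.5.20
route-map LEAK-SERVER permit 10
 match ip address 10
 set extcommunity rt 65000:30 additive
```

Contractors thus reach their local LAN plus the single leaked server route, and nothing else, with no per-user access lists to maintain.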


The access layer of a virtualized network is atypical in that we have not introduced a new forwarding paradigm. As discussed at the start of the chapter, the Layer 2 access layer is already virtualized. The design examples concentrated on using 802.1x to map policy and group definitions to VLANs, which are mapped to VRF structures at the distribution. Note how, because there is nothing "new" in the forwarding path, there is no need to use VRF-aware features.

For the sake of completeness, the following is a quick review of the other major access layer features required for our design:

  • Security: As previously discussed, dynamic ARP inspection, DHCP snooping, IP source guard, and port security should all be deployed in this scenario. Just because users are authenticated does not mean that they will behave themselves. RADIUS accounting is a valuable source of post-mortem data because it links IP address, user port, and traffic statistics. You can combine this with NetFlow data from the core network to analyze who is sending what to whom and when.

  • QoS: As discussed in Chapter 10, QoS internals are not virtualized. Use separate traffic classes for voice, regardless of which group the traffic belongs to. You can choose to provide lower QoS guarantees to the Guest group, in which case those packets should be classified accordingly on the access ports. Remember, you can download per-user configuration in an 802.1x RADIUS profile. Finally, you can also use the scavenger class to aggressively discard excess-burst traffic on any port. Chapter 10 has the details.

  • Other access layer features: These include link aggregation, IGMP snooping, inline power, and so on.

Layer 3 Access

We have spent little time discussing Layer 3 wiring closet deployments as we have focused on the more traditional Layer 2 approach. This is because we currently recommend staying with a Layer 2 access layer when deploying virtual networks. To understand why, consider the following reasons:

  • Capital cost: To support virtualization, the access switches must have VRFs, which represents an incremental hardware cost beyond what is required for a fully functional Layer 2 solution. Some access switches already support VRFs.

  • Feature support: All the features deployed at the access layer must become VRF aware, especially authorization. Whether 802.1x allows dynamic VRF binding is implementation dependent, but if it does not, it becomes nontrivial to authenticate and authorize. Because all traffic on a port is already bound to a VRF, the 802.1x server must be reachable within that VRF. Distribution switches already have the required per-VRF features (and do not require 802.1x integration) and, though individually more expensive, there are fewer of them. Of course, Cisco and others do offer solutions with sophisticated access layer hardware that have the required per-VRF feature set for routing, security, QoS, and multicast.

  • Operational cost: Turning on IP in the wiring closet multiplies the number of routers in the network by a significant factor. Routing deployment will be more complex as a result, with more opportunity for error. However, removing spanning tree is not without benefit, and routed access limits broadcast domains to single ports, which mitigates some categories of spoofing attack.

  • MPLS VPN: In the case of a network using MPLS VPN, the logical architecture places the PE function in the wiring closet. This is not a good idea. RFC 2547 requires a Border Gateway Protocol (BGP) implementation beyond the scope of what you would expect to find on a wiring closet switch. Furthermore, each PE needs to be configured, so increasing their number creates more work and more opportunities for error (without the corresponding gain of removing spanning tree; routing took care of that already). Finally, PEs maintain a full mesh of LSP tunnels. If you increase the number of PEs, you must deploy BGP route reflectors to scale.

  • VRF instantiation: Just as Layer 2 access has dynamic VLAN configuration, so would a Layer 3 solution have dynamic VRF binding. In this case, the user's RADIUS profile points to a VRF name rather than a VLAN ID. When successfully authorized, the user's port would be placed into the VRF. Sounds simple. However, the VRF must already be defined on the access switch with corresponding route distinguisher, route target, and BGP configuration (see Chapter 4 for more details). Also, to allow true user mobility, all possible VRFs must be preconfigured on every switch because any user might connect to any port. If the number of network-wide VRFs is small, this is not a major issue. Otherwise, it consumes switch resources (memory) for VRFs that might never have any attached interfaces. This is a well-understood problem for dialup and DSL access. The solution is to terminate subscriber sessions on a Layer 2 device and tunnel aggregate traffic to an external PE. In LAN terms, this means that Layer 2 access scales better.

For all these reasons, Cisco recommends using a point-to-point tunnel between access and distribution when Layer 3 is required or already deployed in the wiring closet. To reiterate, we do not suggest using MPLS in the wiring closet. Figure 11-14 shows the topology. Each group on the access switch is terminated into a VRF, which has a dedicated GRE tunnel to a corresponding VRF on the distribution switch, which, then and only then, label-switches packets to other PEs.
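The access end of such a tunnel can be sketched as follows; addresses and names are placeholders, and the distribution end mirrors this configuration, mapping the tunnel into the corresponding VRF:

```
ip vrf EMPLOYEE
 rd 65000:10
!
interface Tunnel10
 ip vrf forwarding EMPLOYEE
 ip address 172.16.10.1 255.255.255.252
 tunnel source Loopback0
 ! Distribution switch loopback, reachable
 ! in the global routing table
 tunnel destination 10.0.0.2
```

The tunnel source and destination resolve in the global routing table, so only the group's payload traffic rides inside the GRE tunnel, and the access switch needs no BGP or MPLS configuration.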

Figure 11-14. Layer 3 Access


In Chapter 5, "Infrastructure Segmentation Architectures: Theory," we discussed hop-to-hop architectures. The Layer 3 access network in Figure 11-14 uses a Layer 3 hop-to-hop architecture (instead of the more traditional Layer 2 version, VLANs) to transport packets to the RFC 2547 PE in the distribution layer. It is not mandatory to terminate all the tunnels at the distribution. You can route GRE across the core to, say, a remediation server in the data center for network access control (NAC). Many such combinations are possible.




Network Virtualization
ISBN: 1587052482
Year: 2006