IPsec Access


Many enterprise managers are willing to trust a provider's private VPN to transport their data securely, but they do not trust access networks that are open to any residential subscriber. A common solution is to protect remote-access users with IPsec encryption over untrusted access networks. The enterprise manager must then decide whether the enterprise or the provider network will manage the IPsec infrastructure.

With the release of the Cisco remote access-to-MPLS network solution, providers can offer a managed service whereby the provider manages all the IPsec infrastructure and delivers remote-access users to the corporation's VPN VRF.

The advantage here is that the access medium does not matter. Typically, any access medium can be used as long as the user has connectivity to the Internet (or some other means of reaching the provider's network-based IPsec termination devices) and the provider-supplied VPN can be reached.

The alternative, of course, is to have a separate Internet connection to the enterprise and to manage the IPsec infrastructure from within the enterprise.

IPsec tunnels need not be terminated only into Layer 3 (L3) VPN VRFs for VPN remote access; there is no reason why they cannot also be terminated into Layer 2 (L2) VPN instances.

Figure 9-6 shows the basic architectural components of this solution.

Figure 9-6. IPsec-to-MPLS Access Architecture


The goal is to bring the IPsec tunnels connecting remote users (and possibly remote sites, too) into the VPN. The primary security services provided by IPsec are authentication, data integrity, and data confidentiality. Typically, either encryption or authentication is applied. The most common deployments use encryption, which requires hardware acceleration within the IPsec terminating device to avoid performance issues.

An IPsec tunnel may directly encapsulate data traffic that is being sent between the two IPsec nodes, or it may encrypt all data flowing through other kinds of tunnels, such as L2TP, generic routing encapsulation (GRE), and IP-in-IP.

IPsec operates in two modes: transport and tunnel.

Transport mode protects the payload of an IP datagram. It is implemented by the insertion of a security protocol header (authentication header [AH] and/or Encapsulating Security Payload [ESP]) between the original IP datagram's header and its payload. Then, the appropriate cryptographic functions are performed. The "next protocol" in the IP header will be either AH or ESP, as appropriate. Transport mode can be set only for packets sourced by and destined for the IPsec endpoints (for example, an L2TP or GRE tunnel), meaning that the original IP header is preserved.

Tunnel mode fully encapsulates the original datagram with a new IP header, along with the appropriate AH and/or ESP headers. The original datagram is fully protected, including the original source and destination IP addresses.
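In Cisco IOS, the choice between the two modes is made in the transform set. The following minimal sketch shows both forms; the transform-set names and algorithms are illustrative, and tunnel mode is the default if no mode command is entered.

crypto ipsec transform-set XPORT esp-3des esp-sha-hmac
 mode transport
!
crypto ipsec transform-set TUNN esp-3des esp-sha-hmac
 mode tunnel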

Note

An IPsec tunnel, even one set up for tunnel mode, is not the same thing as a Cisco IOS tunnel interface. In Cisco IOS, IPsec is treated as a sequential function in the context of the physical and/or tunnel interface, conceptually similar to access control list (ACL) processing. A crypto map specifies the desired handling of packets, that is, whether IPsec should protect them. Each packet that is sent out an interface is checked against the crypto map. The crypto map determines whether the packet is to be protected, how to protect it, and the peer crypto endpoint.


Because IPsec packets are targeted directly at the IPsec endpoint, any packet targeted at the router whose next protocol field specifies AH or ESP is handed to IPsec. The decapsulated packet is then handled as a regular IP packet.

During initial Internet Security Association and Key Management Protocol (ISAKMP) key exchange, extended authentication (XAUTH) extensions to ISAKMP may be used to allow the use of a RADIUS or other AAA server for authenticating the remote system.

ISAKMP may also use the mode configuration (MODECFG) extensions to set the IP address of the "inside" IP header when creating a tunnel mode IPsec tunnel. This address is (currently) taken from a single pool of addresses assigned to the router. This allows the client to be known to the internal network by a local address, instead of by a dynamically assigned address from a local ISP. When ISAKMP assigns an inside address to the client, it also installs a host route into the IP routing table so that the router can correctly route traffic to the interface supporting the IPsec endpoint.
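On a Cisco IOS termination router, such a pool might be defined and referenced as in the following sketch; the pool name and address range are illustrative assumptions, not taken from the text.

ip local pool remote-pool 10.10.1.1 10.10.1.254
crypto isakmp client configuration address-pool local remote-pool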

Of course, should IPsec be deployed for a remote site to bring that site into the enterprise VRF, routing updates are expected to be exchanged between the device initiating the IPsec tunnel (the enterprise edge) and the terminating device (the provider edge device). This lets routing updates from the customer edge (CE) be tagged with the enterprise route distinguisher, imported into the provider's multiprotocol BGP (MP-BGP), and delivered to other remote sites.

IPsec does not support multicast or broadcast traffic. Therefore, transporting routing protocol updates from the enterprise to the provider edge requires a GRE overlay on the enterprise edge box. Conceivably, L2TP tunneling could be used, but this is not common. As soon as routing protocol updates are encapsulated within GRE, IPsec treats them like any other unicast traffic and encrypts them across the untrusted network. How the GRE tunnels are created then depends on whether the enterprise edge box is managed by the provider or the enterprise. Clearly, if the enterprise's goal is to outsource the management of the IPsec infrastructure and WAN connectivity, using a provider-managed enterprise edge box achieves that goal. It also means the enterprise must rely on services such as Cisco IOS upgrades for the edge device on the customer network being carried out on the provider's schedule. An upgrade schedule might need to be part of the service agreement if complete outsourcing is desired.

The challenge with this deployment scenario arises when remote sites can reach the corporate network only through the Internet and the enterprise wants to supply the CPE. This requires an IPsec and GRE configuration on the CPE that is compatible with the provider's configuration, which would, of course, be a challenge to troubleshoot from both the enterprise and provider perspectives. If the enterprise wants to manage its own GRE and IPsec implementation at the remote CPE, see the "GRE + IPsec" subsection of the "References" section for configuration examples for a deployment scenario that includes Network Address Translation (NAT). A simpler description is given in the next section to illustrate the use of GRE on the CPE not only for transport of nonunicast packets, but also for resiliency.

GRE + IPsec on the CPE

IPsec is often configured in tunnel mode. However, doing so does not give you the benefits of other tunnel technologies, such as GRE and L2TP. Currently, no IPsec tunnel interfaces exist. The tunnel configuration of IPsec really just refers to the encapsulation. IPsec tunnels carry only unicast packets; they have no end-to-end interface management protocol. The approach here is to use IPsec in transport mode on top of a robust tunnel technology, such as GRE.

For VPN resilience, the remote site should be configured with two GRE tunnels: one to the primary headend VPN router and the other to the backup VPN router, as shown in Figure 9-7.

Figure 9-7. VPN Resilience over Two Tunnels


Both GRE tunnels are secured with IPsec. Each one has its own Internet Key Exchange (IKE) security association (SA) and a pair of IPsec SAs. Because GRE can carry multicast and broadcast traffic, it is possible and very desirable to configure a routing protocol for these virtual links. As soon as a routing protocol is configured, the failover mechanism comes automatically. The hello/keepalive packets sent by the routing protocol over the GRE tunnels provide a mechanism to detect loss of connectivity. In other words, if the primary GRE tunnel is lost, the remote site detects this event through the loss of the routing protocol hello packets.

It is conceded that relying on routing protocol hello packets to detect loss of connectivity can be slow, depending on the value of the timers used for hellos, but it is certainly a step up from having no detection. In the not-too-distant future, mechanisms such as Bidirectional Forwarding Detection (BFD) (http://www.ietf.org/internet-drafts/draft-katz-ward-bfd-02.txt) will enable faster detection of failed tunnels without the need to rely on routing protocol hello mechanisms.

As soon as virtual-link loss is detected, the routing protocol chooses the next-best route. Thus, Enhanced Interior Gateway Routing Protocol (EIGRP) chooses the feasible successor. In this case, the backup GRE tunnel is chosen. Because the backup GRE tunnel is already up and secured, the failover time is determined by the hello packet mechanism and the convergence time of the routing protocol.
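If faster failover is needed before such mechanisms are available, the routing protocol hello and hold timers can be lowered on the two tunnel interfaces. The following sketch assumes EIGRP AS 100 and uses illustrative values rather than recommended ones.

interface Tunnel0
 ip hello-interval eigrp 100 2
 ip hold-time eigrp 100 6
!
interface Tunnel1
 ip hello-interval eigrp 100 2
 ip hold-time eigrp 100 6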

Designing for GRE Resiliency

As just discussed, using GRE tunnels with IPsec in transport mode can provide a robust resiliency mechanism for hub-and-spoke connections. Network designers should be aware of the main issues to consider when using this feature:

  • Cisco IOS configuration manageability for the headend router

  • Overhead on the network and the router processor

  • Scalability

In the case of the enterprise's using the provider network to terminate IPsec sessions into the enterprise VRF, the headend router is managed and owned by the provider, and potentially the CPE devices, too. In the case where the enterprise chooses to have a second connection for remote-access off-net users, the headend router is owned and managed by the enterprise.

In the resiliency design discussed, each GRE tunnel is a virtual interface in the Cisco IOS implementation and provides many of the features associated with physical interfaces. This is good, except when the headend router connects to thousands of remote sites.

Each headend router must have at least two GRE tunnels (one primary and one backup) for each remote site that terminates there. This could mean hundreds or even thousands of interfaces to configure (five lines in the configuration file for each interface) and manage on each router. There are no specific management tools for this type of configuration except for the Resource Manager Essentials (RME) product that is part of CiscoWorks. This management burden is a concern, and you should think it through before implementation.

The benefits of implementing GRE tunnels for routing protocol update support and resiliency do come with some cost. GRE adds 24 bytes of overhead to each packet. However, when this is added to the overhead of IPsec transport mode and compared to the overhead imposed by IPsec in tunnel mode, the net network penalty of the GRE mechanism is only 4 bytes per packet. In addition to network overhead, GRE adds load to the processor. Performance testing on a 7200 router with an NPE-G1 has shown that 1000 GRE tunnels add 10 to 16 percent CPU overhead compared to running the same traffic over a serial interface. This overhead is associated with the extra network-level encapsulation and the extra routing decisions.

However, you can expect a GRE tunnel implementation to provide better scalability and performance than IPsec tunnels alone. This is because Cisco IOS routers are optimized for making routing decisions as opposed to making decisions about which IPsec SAs need to be used at the one physical interface.

Configuring GRE Resiliency

The configuration for router A that is shown in Figure 9-7 is listed in Example 9-2. The meaning of each configuration command can be found on Cisco.com using the search facility.

Example 9-2. Remote Office Router Configuration

crypto isakmp policy 10
 authentication pre-share
!
crypto isakmp key cisco123 address 172.18.45.1
crypto isakmp key cisco123 address 172.18.45.2
!
crypto IPSec transform-set one esp-des esp-md5-hmac
 mode transport
!
crypto map gre 10 IPSec-isakmp
 set peer 172.18.45.1
 set transform-set one
 match address gre1
crypto map gre 20 IPSec-isakmp
 set peer 172.18.45.2
 set transform-set one
 match address gre2
!
interface Tunnel0
 ip address 10.4.1.1 255.255.255.0
 tunnel source 172.18.31.1
 tunnel destination 172.18.45.1
 crypto map gre
!
interface Tunnel1
 ip address 10.4.2.1 255.255.255.0
 tunnel source 172.18.31.1
 tunnel destination 172.18.45.2
 crypto map gre
!
interface Ethernet0
 ip address 10.2.1.1 255.255.255.0
!
interface Serial0
 ip address 172.18.31.1 255.255.255.0
 crypto map gre
!
interface Serial1
 no ip address
 shutdown
!
ip classless
ip route 172.18.0.0 255.255.0.0 Serial0
!
router eigrp 100
 network 10.0.0.0
!
ip access-list extended gre1
 permit gre host 172.18.31.1 host 172.18.45.1
!
ip access-list extended gre2
 permit gre host 172.18.31.1 host 172.18.45.2

CE-to-CE IPsec

Service providers have deployed alternative approaches to MPLS VPNs for supporting L3 VPNs. Some providers offer secure any-to-any communication by installing an IPsec-enabled CPE router and configuring a full mesh of IPsec tunnels between all the VPN's sites. From the enterprise perspective, this provides a service similar to MPLS VPNs, in that the enterprise edge router sees a single provider router as its IP next-hop destination for all other sites. In addition, the connectivity appears to the enterprise edge as any-to-any communication.

The downside is provisioning a full mesh at the CPE. Its maintenance can result in large configuration files, and additional CPU is needed to support IPsec operations. However, all these things are the service provider's concern. They aren't management issues for the enterprise. To help scale these implementations, it has become common practice to use IPsec purely as a tunneling technology and not to encrypt data.

IPsec encryption becomes a concern for the enterprise when it must, for regulatory or other security reasons, encrypt its own data before handing it to a third party. In fact, it can be argued that if an enterprise cares enough about security to want IPsec encryption, it should care enough to manage its own IPsec infrastructure, because handing that off to a third party does not secure the data from that third party.

Looking at the case where the enterprise decides it needs to encrypt data before handing it off from its own infrastructure, the first concern is how to set up all the SAs: whether a full-mesh, partial-mesh, or hub-and-spoke topology is required. If the enterprise decides that it wants the benefits of Layer 3 IP VPNs previously described but also requires IPsec encryption, an issue needs to be addressed. One of the primary benefits of L3 VPNs is that adding a site to the VPN is a local matter between the enterprise edge and the provider edge connecting to that enterprise router. The mechanics of the L3 VPN then propagate that information to where it needs to go, and any-to-any communication is provided without any configuration overhead on the part of the enterprise. If an overlay of IPsec tunnels must be created on top of this for encryption, the enterprise is right back to deciding on a site-to-site topology and maintaining a full mesh of tunnels itself, which negates the benefits of the L3 VPN.

For these situations, a solution is required that supports dynamic creation of IPsec tunnels on demand and minimizes the amount of configuration needed to create IPsec associations on the CE device. In addition, making the IPsec relationships on-demand and as-needed eases the scaling issues related to the number of SAs to be created.

Cisco provides a solution that meets these goals: Dynamic Multipoint VPN (DMVPN). This solution can be used to support a dynamic IPsec overlay for an enterprise that requires IPsec encryption on top of a provider-provisioned L3 VPN. It can also provide simple any-to-any communication for the off-net sites of an intranet. Further details appear in the next section.

DMVPN Overview

DMVPN started as a way to help scale the deployment of very large IPsec networks. The primary topology for IPsec deployments was hub and spoke. This had some advantages in that the only sites that needed a provisioning action when a new site was added were the new spoke and the hub site itself. However, this could result in unwieldy hub configurations and extra work to make dynamic routing protocols behave as desired. Also, any spoke-to-spoke traffic would likely follow a suboptimal route via the hub. Full mesh was sometimes considered because it provides optimal routing. However, it adds more restrictions to deployment. All nodes require a provisioning action to add a single node, unwieldy configurations are required on all nodes, and the many logical interfaces stress Interior Gateway Protocol (IGP) operation on spoke routers and the available memory/CPU, limiting the size of the VPN with small routers at the spokes.

DMVPN was created to resolve most of these issues. It had the following goals:

  • Reduce the preconfiguration in a spoke that is necessary to bring it into service to knowledge of a single hub site.

  • Support any-to-any IPsec relationships on an as-needed basis.

  • Automatically add new sites to the dynamic full mesh without any manual configuration of existing sites.

  • Support routers with small amounts of memory and CPU at the spoke, allowing them to participate in very large VPNs.

Achieving these goals requires autodiscovery, mapping peers to each other, and dynamic authentication. For mapping peers, you must consider mapping an IP infrastructure address to a VPN layer address. The VPN layer addresses can be thought of as the tunnel endpoints that exist in the CE routers at each end of an IPsec tunnel. The IP infrastructure addresses are the addresses in the IP network connecting the two CE router endpoints.

There are two ways of providing the required mapping in a dynamic fashion. The first is Tunnel Endpoint Discovery (TED). TED uses a discovery probe, sent from the initiator, to determine which IPsec peer is responsible for a specific host or subnet. After the address of that peer is learned, the initiator proceeds with IKE main mode in the normal way. For TED to function, each LAN must have Internet-routable IP addresses, but this is often not the case. If a private IP address space is used, such as the 10.0.0.0 network, the TED probes cannot locate the other VPN peer across the Internet.

The other option is to use the Next Hop Resolution Protocol (NHRP), which is used together with multipoint GRE (mGRE) to encapsulate routing protocol traffic for transmission over the IPsec tunnel. mGRE is discussed in more detail in the next section.

Before we look at how these technologies work together to enable DMVPN functionality, it is useful to have an overall view of what happens within the DMVPN. Each DMVPN site is preconfigured with the address of a hub site. This is the only VPN-specific address or SA configuration in the CE router before it is provisioned. With this configuration, a permanent IPsec tunnel is created to the hub, which acts as a routing neighbor and next-hop server (NHS). Each spoke CE router registers itself with the hub, and all control data (routing protocol exchanges) flows through encrypted tunnels to the hub. However, via the operation of NHRP, spoke CE routers can query the hub site for the addresses of other spoke routers belonging to the same VPN and dynamically build IPsec tunnels directly from spoke to spoke. Basically, the spoke asks the hub router for the Internet-routable IP address it must build a tunnel to in order to reach a specific inside address. As soon as this dynamic spoke-to-spoke tunnel is available, spoke-to-spoke traffic has no impact on the hub site. The spoke-to-spoke tunnel is created via the mGRE tunnel interface (one per CE). Detailed and complete device configurations can be obtained from the link listed in the "References" section for DMVPN and therefore are not replicated here. However, the following sections briefly explain the configuration elements for completeness.

mGRE for Tunneling

The only difference between multipoint GRE and regular GRE is that the destination end of the tunnel does not check the source address. This makes mGRE operate as a multipoint-to-point technology, meaning that any source can send to a specific destination. This functionality is useful in two contexts. First, if DMVPN is being used just to simplify the configuration and deployment of a hub-and-spoke topology, or indeed for a migration phase from hub and spoke to full DMVPN, the hub site has an mGRE tunnel, whereas the spokes have point-to-point GRE. For dynamic spoke-to-spoke communication, the spokes also require their tunnel interfaces to be mGRE.

To modify a tunnel interface at the hub to use mGRE, the commands shown in Example 9-3 are entered under the tunnel interface.

Example 9-3. Configuration for mGRE

interface Tunnel0
 tunnel mode gre multipoint
 tunnel key 100000

The tunnel key needs to be the same on all the spokes that want to communicate with this hub.

NHRP for Address Resolution

NHRP, the Next Hop Resolution Protocol for nonbroadcast multiaccess (NBMA) networks, is specified in RFC 2332 (http://www.ietf.org/rfc/rfc2332.txt?number=2332). It is designed to resolve IP-to-NBMA address mappings for routers directly connected to an NBMA network. In this case, the mesh of tunnels of an IPsec VPN exhibits behavior similar to an NBMA network, such as Frame Relay. The result of NHRP operation here is that a shortcut route is identified that lets a spoke contact another spoke directly, without needing to communicate with the hub in the data path first. Note, however, that data packets do flow via the hub site until the spoke-to-spoke connection becomes available.

To make NHRP operational, the hub site is configured as the NHS, and the spoke sites are configured as next-hop clients.

On the hub sites, the configuration looks like the configuration shown in Example 9-4 under the tunnel interface.

Example 9-4. NHRP Hub Configuration

ip nhrp authentication test
ip nhrp map multicast dynamic
ip nhrp network-id 100000
ip nhrp holdtime 360

The ip nhrp authentication test command, along with the ip nhrp network-id command and the matching tunnel key entry, are used to map the tunnel packets and the NHRP packets to the correct multipoint GRE tunnel interface. The ip nhrp map multicast dynamic command lets NHRP automatically add spoke routers to multicast NHRP mappings when spoke routers initiate the mGRE + IPsec tunnel to register unicast NHRP mappings. This allows dynamic routing protocols to work over the mGRE + IPsec tunnels between the hub and spokes. Without this command, the hub router would need to have a separate configuration line for a multicast mapping to each spoke.

For the spokes, the NHRP configuration is basically the same, with the addition of the two configuration lines shown in Example 9-5.

Example 9-5. NHRP Spoke Configuration

ip nhrp map 10.0.0.1 172.17.0.1
ip nhrp nhs 10.0.0.1
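Putting the mGRE and NHRP pieces together, a complete spoke tunnel interface might look like the following sketch. The 10.0.0.x tunnel addressing and the hub NBMA address 172.17.0.1 follow Example 9-5; the tunnel source interface and the static multicast mapping to the hub (commonly added so that routing protocol multicasts reach the hub) are illustrative assumptions rather than part of the examples above.

interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 ip nhrp authentication test
 ip nhrp map 10.0.0.1 172.17.0.1
 ip nhrp map multicast 172.17.0.1
 ip nhrp nhs 10.0.0.1
 ip nhrp network-id 100000
 ip nhrp holdtime 360
 tunnel source Ethernet0
 tunnel mode gre multipoint
 tunnel key 100000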

Routing Protocol Concerns

The primary issue here is that the hub is still the central point of collection and distribution for control data, such as routing protocol updates. It is important to configure the hub to not identify itself as the IP next hop for the routes it receives from spokes and sends out to other spokes. This is achieved for EIGRP with the configuration command no ip next-hop-self eigrp 1 that is inserted in the tunnel interface's configuration.

This has the effect of the hub readvertising routes to the spokes with the original IP next hop from the update it received from the originating spoke.

In addition, the no ip split-horizon eigrp 1 command must be entered to allow an EIGRP hub to send received updates out the same tunnel interface and update spokes with each other's routes.
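Taken together, the two EIGRP adjustments sit under the hub's mGRE tunnel interface. A minimal sketch, assuming EIGRP AS 1 as in the commands just shown:

interface Tunnel0
 no ip next-hop-self eigrp 1
 no ip split-horizon eigrp 1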

For Routing Information Protocol (RIP), you must turn off split horizon on the mGRE tunnel interface on the hub. Otherwise, no routes learned via that tunnel interface will be advertised back out. The no ip split-horizon command under the mGRE tunnel interface section achieves this behavior for RIP. No other changes are necessary. RIP automatically uses the original IP next hop on routes that it advertises back out the same interface where it learned these routes.

Open Shortest Path First (OSPF) has no split-horizon rules because it is a link-state protocol. However, adjustments are necessary in terms of network type and support for multicast or unicast hello transmission.

Given that the mGRE tunnel interface being configured is a multipoint interface, the normal OSPF network type is multipoint. However, this causes OSPF to add host routes to the routing table on the spoke routers. These host routes cause packets destined for networks behind other spoke routers to be forwarded via the hub, rather than forwarded directly to the other spoke. The resolution is to configure the ip ospf network broadcast command under the tunnel interface.

You also need to make sure that the hub router will be the designated router (DR) for the IPsec + mGRE network. You do this by setting the OSPF priority to greater than 1 on the hub (because 1 is the default priority). However, a better solution is often to reduce all the spokes to priority 0, because this prevents the spokes from ever becoming a designated router.

Hub: ip ospf priority 2

Spoke: ip ospf priority 0
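Combined with the network type adjustment, the OSPF settings under the mGRE tunnel interfaces might look like the following sketch; the tunnel numbering is illustrative.

Hub:
interface Tunnel0
 ip ospf network broadcast
 ip ospf priority 2

Spoke:
interface Tunnel0
 ip ospf network broadcast
 ip ospf priority 0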

Given the two-level hierarchy of VPN address and infrastructure address, resolved together in this case by the NHRP, the routing tables for both the hub and spokes are worth examining.

In the routing table for a DMVPN system, the infrastructure addresses (those that identify the physical interfaces on the public infrastructure) become known via the physical interface that connects the spoke to the public infrastructure. Other addresses are learned via tunnel interfaces. All destinations appear reachable by the single multipoint GRE tunnel interface. The multipoint GRE tunnel interface does not actually go anywhere by itself. It can be thought of as a gateway to access the destinations that IPsec protects. Each spoke creates SAs as needed with the hub site for routing information exchange, and other spoke sites for passing traffic on an as-needed basis.

IPsec Profiles for Data Protection

A full explanation of IPsec operation is not given here; references to publicly available documentation on Cisco.com appear at the end of this chapter, and the book IPSec VPN Design by Vijay Bollapragada, Mohamed Khalid, and Scott Wainner (ISBN 1587051117, published by Cisco Press) is an excellent reference for this protocol. Having said that, a brief description follows of the configuration relevant to IPsec when using the DMVPN solution.

The DMVPN solution directly addresses the following issues with respect to the method of implementing static IPsec for traditional CPE-to-CPE VPN applications:

  • To define what traffic is interesting to IPsec, and therefore what IPsec encrypts for transport over a VPN connection, ACLs are implemented. So, when a new network is added to a site located behind an IPsec VPN CPE router, the ACL configuration must change on both the hub and the spoke routers to encrypt traffic from that new network or subnetwork. If the CPE is a managed service, the enterprise needs to coordinate with the provider to make this happen, a time-consuming and operationally expensive exercise.

  • With large hub-and-spoke networks, the configuration file on the hub can become unwieldy, leading to long boot times and complex troubleshooting procedures. Typically, 200 to 300 sites create a hub configuration of several thousand lines. This is not only troublesome to navigate, but it consumes significant memory resources.

  • Should spoke-to-spoke communication be required, such as in voice over IP (VoIP) applications, either traffic is routed inefficiently via the hub site or the configuration file on each spoke must grow so that it can communicate with all other spokes. This drives up the size of the CPE router and hence the cost of the overall solution.

Let's take a look at what is configured in the more traditional hub-and-spoke or mesh IPsec VPNs and illustrate how this is dramatically simplified with DMVPN.

Looking at a standard IPsec VPN configured on CPE devices, the elements shown in Example 9-6 are configured on both the hub and spoke routers. The only addition on the hub router is that it has a crypto map entry for each of its established peers.

Example 9-6. IPsec VPN Configuration

crypto isakmp policy 10
 authentication pre-share
crypto isakmp key cx3H456 address 0.0.0.0
!
crypto IPSec transform-set trans1 esp-md5-hmac
 mode transport
!
crypto map map1 local-address Ethernet0
crypto map map1 10 IPSec-isakmp
 set peer 180.20.1.1
 set transform-set trans1
 match address 101
!
interface Ethernet0
 ip address 172.17.0.1 255.255.255.0
 crypto map map1
!
access-list 101 permit gre host 172.17.0.1 host 180.20.1.1

The crypto isakmp policy 10 command creates an IKE policy with sequence number 10. When a router peers with another IKE device, it negotiates an IKE transform set. During this negotiation, the policies are checked in order of sequence number until a match with the peer is found. In this case, the 0.0.0.0 in the key statement indicates that the same key is used with multiple destinations. The authentication pre-share command dictates that the policy use preshared keys for authentication. This is a less scalable solution than using certificates, but it is simpler in terms of single-device configuration and is used here for that simplicity.

The crypto isakmp key command identifies the preshared key value for the peer in question. The crypto IPSec transform-set command defines an acceptable combination of security protocols and encryptions to use. The crypto map entries identify the peer, the transform set to use with that peer, and the ACL that defines which traffic is interesting and needs to be encrypted. This map is then applied to an interface with another crypto map entry.

The ACL defined only needs to match the GRE tunnel IP packets. No matter how the networks change at either end, the GRE IP tunnel packets will not change, so this ACL need not change.

Note

When using Cisco IOS Software versions earlier than 12.2(13)T, you must apply the crypto map map1 configuration command to both the GRE tunnel interfaces and the physical interface. With Cisco IOS version 12.2(13)T and later, you apply the crypto map map1 configuration command to only the physical interface (in this case, Ethernet0).


With DMVPN implemented, the configuration for both hub and spoke sites is reduced to the global crypto entries plus a single tunnel specification containing all the NHRP and tunnel definition elements.
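Although complete configurations are left to the DMVPN reference, the shape of this simplification is worth sketching. Instead of per-peer crypto map entries, a crypto IPsec profile is defined once and attached to the mGRE tunnel with the tunnel protection command; the profile and transform-set names here are illustrative.

crypto ipsec transform-set trans2 esp-3des esp-sha-hmac
 mode transport
!
crypto ipsec profile dmvpnprof
 set transform-set trans2
!
interface Tunnel0
 tunnel protection ipsec profile dmvpnprof

Because the profile is not tied to a peer address or a crypto ACL, adding networks or spokes does not require changes to the existing crypto configuration.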

Summary of DMVPN Operation

The following is an overall description of what happens in the DMVPN setup, which is illustrated in Figure 9-8.

Figure 9-8. Dynamic Multipoint VPN Architecture


In this case, a PC (192.168.1.25) at site A wants to connect to the web server at 192.168.2.37 that is behind the spoke router at site B. The router at site A consults its routing table and determines that the 192.168.2.0 network (obtained by whatever routing protocol is in effect within the corporation) can be reached via an IP next hop of 10.0.0.2 via the tunnel0 interface. This is the mGRE tunnel interface that gives access to VPN destinations. In this case, you are configuring dynamic spoke-to-spoke communication. The router at site A then looks at the NHRP mapping table and finds that an entry for 10.0.0.2 does not exist. The next step therefore is to request a mapping from the NHS. The NHS resolves 10.0.0.2 to a public address of 158.200.2.181 and sends that information to the router at site A. On receipt, this information is stored in the NHRP table in site A's router. This event triggers an IPsec tunnel to be created from site A's public address to 158.200.2.181. Now that a tunnel has been built, traffic can pass from site A to site B through the newly created IPsec tunnel.

However, what about return traffic from site B to site A? When the web server wants to send traffic back to the PC, the same steps are necessary to form a mapping from site B to site A, with the slight modification that when site B has the NHRP mapping for site A, the response can be sent directly to site A because a tunnel already exists. After a programmable time, the NHRP entries age out, causing the IPsec tunnel to be torn down.
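The state built by this exchange can be inspected with standard Cisco IOS show commands. For example, show ip nhrp displays the NHRP cache, including dynamically resolved entries such as the 158.200.2.181 mapping in this example, and show crypto isakmp sa and show crypto ipsec sa display the IKE and IPsec SAs created for the spoke-to-spoke tunnel.

show ip nhrp
show crypto isakmp sa
show crypto ipsec sa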

DMVPN improves the scalability of IPsec deployments. However, not all issues are resolved from the point of view of a provider that offers this as a managed service. For the managed service offering, a Cisco solution called Dynamic Group VPNs offloads much of the IPsec work from the enterprise network to the provider network in a more scalable fashion.

The Impact of Transporting Multiservice Traffic over IPsec

One issue not to overlook when planning the use of IPsec or GRE tunnel implementations to support remote-access connectivity is the overhead that these technologies place on router systems. Overhead is defined in the following terms:

  • Processing overhead for encapsulation, encryption, and fragmentation

  • Bandwidth consumed by header layering

Considering first the encapsulation processing overhead, the effect is somewhat dependent on router capabilities. For IPsec encryption, hardware acceleration modules generally are available to mitigate the negative effects on CPU utilization of encrypting payload traffic. The effects of GRE encapsulation and fragmentation, however, are not so easily overcome. Whether GRE is process-switched or switched via Cisco Express Forwarding (CEF) depends on the platform and release implemented. Clearly, process switching can add overhead and delay to router processing, depending on router load.

Another performance effect to consider is fragmentation of oversized frames, primarily due to the addition of IPsec and GRE headers to a maximally sized Ethernet frame. When a packet is nearly the size of the maximum transmission unit (MTU) of the outbound link of the router performing encryption, encapsulating it with IPsec headers causes it to exceed the MTU of the outbound interface. This causes packet fragmentation after encryption, which means the decrypting router at the other end of the IPsec tunnel has to reassemble the fragments in the process path. Prefragmentation for IPsec VPNs increases the decrypting router's performance by allowing it to operate in the high-performance CEF path instead of the process path.

Prefragmentation for IPsec VPNs lets the encrypting router calculate what the IPsec encapsulated frame size will be from information obtained from the transform set (such as ESP header or not, AH header or not). If the result of this calculation finds that the packet will exceed the MTU of the output interface, the packet is fragmented before encryption. The advantage is that this avoids process-level reassembly before decryption, which helps improve decryption performance and overall IPsec traffic throughput.

There are some restrictions on the use of this feature. They are specified in detail in the "Prefragmentation" section at the end of this chapter. This feature is on by default.
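On platforms and releases that support it, the behavior is controlled with the crypto ipsec fragmentation command; a minimal sketch of enabling it globally follows.

crypto ipsec fragmentation before-encryption

The after-encryption keyword reverts to fragmenting after encryption where that behavior is preferred.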

The effects of header layering have an impact on the amount of usable service that such an encrypted link can support. This effect is most pronounced with a G.729 codec, which produces a 20-byte voice payload every 20 milliseconds. Figure 9-9 shows how the layering of headers increases the number of bytes that need to be transported from the original 20 bytes of voice all the way up to 140 bytes for each voice packet when all the headers are layered on.

Figure 9-9. Effects of Encapsulating G.729 Voice Packets into IPsec Encrypted Tunnels


This has the following effects when you're considering bandwidth provisioning for service:

  • The size of the packet for calculating the serialization delay component of the overall delay budget for a voice service

  • The actual WAN bandwidth consumed for a given profile of LAN traffic

  • The number of voice calls that can be supported on a given link

The increased latency imposed by encryption is not a concern if hardware modules are used for encryption. However, the interaction of IPsec with quality of service (QoS) mechanisms is worthy of further discussion.

There is a belief that as soon as a packet has been encrypted with IPsec, the original QoS marking contained in the Differentiated Services Code Point (DSCP) bits is lost, because the original IP header is encrypted in tunnel mode. However, this is untrue. IPsec's standard operation is to copy the value of the DSCP bits from the original IP header to the new IPsec tunnel header so that externally viewable QoS markings are preserved. Also, concerns have been raised that IPsec orders packets in sequence, whereas QoS mechanisms by their nature reorder packets depending on priority. However, this is not a concern, because QoS mechanisms do not reorder packets within any given flow. Although it is true that a voice packet may be put at the head of a queue of data packets by a priority queue implementation, for example, the sequence of IPsec from one host to another within a single flow is unaffected by this operation. A deeper examination of this topic is available at http://www.cisco.com/application/pdf/en/us/guest/netsol/ns241/c649/ccmigration_09186a00801ea79c.pdf.

Returning to the three concerns of delay budget, bandwidth consumption, and number of calls supported, the following is known.

The access links from the CE to the PE tend to have the greatest impact on delay budget consumption when designing packet networks to support voice. Because the CE-to-PE links tend to be lower-bandwidth, serialization delay is the most significant factor in packet delay (assuming that a priority queue is implemented and that the voice packet always rises to the top of the outbound queue). Serialization clearly depends on the clock speed of the access line and the size of the packet. Clearly, when calculating serialization delay for voice packets, the full size of the encrypted packet (in this case, 140 bytes, as shown in Figure 9-9) must be used rather than 20 bytes or some other figure.
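As a rough illustration, serialization delay is simply the packet size in bits divided by the line rate. For the 140-byte encrypted voice packet, the arithmetic works out as follows; the link speeds are assumed here only to match the examples later in this section.

140 bytes x 8 = 1120 bits; at 512 kbps, 1120 / 512,000 is approximately 2.2 ms per hop
at T1 speed (1544 kbps), 1120 / 1,544,000 is approximately 0.7 ms per hop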

Regarding bandwidth consumption, different bandwidths are consumed for each call, depending on the codec used. Some of the most popular voice codecs are described in the following list, which compares the bandwidth consumed per voice call over a traditional WAN (Frame Relay) with that consumed over IPsec and over IPsec + GRE encapsulation:

  • G.729a at 33 packets per second: Over Frame Relay, one call consumes 21 kbps. With IPsec, 33 kbps is consumed. Adding GRE takes this codec to 39 kbps.

  • G.729a at 50 packets per second: Over Frame Relay, one call consumes 28 kbps. With IPsec, 47 kbps is consumed. Adding GRE takes this codec to 56 kbps.

  • G.711 at 33 packets per second: Over Frame Relay, one call consumes 77 kbps. With IPsec, 80 kbps is consumed. Adding GRE takes this codec to 86 kbps.

  • G.711 at 50 packets per second: Over Frame Relay, one call consumes 84 kbps. With IPsec, 104 kbps is consumed. Adding GRE takes this codec to 114 kbps.

The last question remaining is how many calls can be supported on a given link size. This problem has many parts. First, how much of the access link bandwidth can be dedicated to the priority queue? A general guideline that has been used in many different network deployments is 33 percent. This figure is not absolute; it is just a safe figure for a number of different scenarios. The concern with making the priority queue bandwidth too high is that placement in the priority queue then means less and less in terms of guaranteed latency: the deeper the priority queue can get, the greater the potential latency a priority-queued packet can experience. So for simplicity, taking the 33 percent recommendation as a rule of thumb, it is possible to make some assumptions about call usage and derive the bandwidth required per site depending on the number of users. Limiting the priority queue to 33 percent also ensures a more predictable response from the data applications using the link.

First, consider a site with 10 users. If you assume a 3:1 ratio for simultaneous calls, there are not likely to be more than three active voice calls at any time. Using a G.729 codec at 50 pps gives you 3 x 56 kbps = 168 kbps for voice. If that represents 33 percent of the link bandwidth, the link needs to be 512 kbps, with the remaining bandwidth used for data applications.

A similar analysis can be performed for a site with, say, 35 employees. In this case, for a call ratio of 4:1, you get nine simultaneous calls. With G.729, the calls consume 504 kbps, which is approximately 33 percent of a T1 link.

Split Tunneling in IPsec

Split tunneling refers to a feature within the VPN client residing on the hosts of remote-access users. It can also be found on hardware VPN clients that are used to encrypt all site-to-site traffic in DMVPN setups. Split tunneling directs traffic destined for corporate hosts over the secured IPsec tunnels while sending traffic for the public Internet directly to the Internet. This presupposes that the remote user or remote site is using the Internet in the first place to gain connectivity to the corporate network. Clearly, the advantage is that traffic destined for the public Internet does not have to go through the corporate Internet connection and consume bandwidth on that shared resource.
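When the IPsec headend is a Cisco IOS router acting as an Easy VPN server, the split-tunnel policy is typically expressed as an ACL referenced in the client configuration group and pushed to the client, so that only the permitted corporate destinations are tunneled. The group name, key, and address range below are illustrative assumptions, not taken from the case study.

access-list 150 permit ip 10.0.0.0 0.255.255.255 any
!
crypto isakmp client configuration group remote-users
 key s3cr3t
 acl 150

Destinations not matched by the ACL are sent by the client directly to the Internet.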

The disadvantage of split tunneling is a potential opening of security holes to the end hosts. The concern is that a host can be compromised by an attacker on the Internet, and when that host then gains access to the corporate network, the attacker can also do so.

A corporation's decision of whether to use split tunneling can go either way, depending on the security procedures that can be implemented within that corporation. It's inadvisable to merely implement split tunneling without considering how its potential security issues should be addressed. A reference for configuring split tunneling is given at the end of this chapter. Also, the "Case Study Selections" section identifies the security procedures that our Acme enterprise implemented to allow the use of split tunneling in some cases.



