Redirecting Application Requests


Cisco content edge delivery provides redirection capabilities, enabling transparent delivery of cached content to your clients. Transparent redirection is useful in caching environments because routers or content switches can direct requests for content to a cache, which can then field the request on behalf of your clients without their knowledge. You can configure transparent redirection using either of the following methods:

  • Web Cache Communication Protocol

  • Content Switch redirection

Introducing Web Cache Communication Protocol

Web Cache Communication Protocol (WCCP) is a Cisco-proprietary protocol that optimizes content delivery between clients and CEs by transparently redirecting client requests to appropriate CEs. WCCP does not inspect the URL or HTTP request to classify traffic for redirection; otherwise, WCCP would require delayed binding to complete the TCP connection on behalf of the server so that the client could send the HTTP request to the WCCP router. Instead, the WCCP router inspects packets on incoming or outgoing interfaces and matches them against service groups that you configure for WCCP inspection. WCCP defines service groups by port number; WCCP looks no further into the packet than the TCP/UDP header.

To configure WCCP on your router, first you must specify the version that you wish to enable, using the command

 ip wccp version {1 | 2}


WCCP version 2 includes some enhanced features over version 1, including support for multiple redundant routers, improved security, Layer 2 redirection, and redirection of applications other than HTTP over TCP port 80.

To configure a service group on your router, you can use the command

 ip wccp {web-cache | service-number | outbound-acl-check} [group-address multicast-address] [redirect-list access-list] [group-list access-list] [password password [0 | 7]]


With WCCP version 2, you can configure either well-known or user-defined groups, by specifying a service number within the ip wccp command. You can use well-known service groups for inspecting protocols whose ports are predefined (for example, in an RFC) and are therefore known by both the CE and WCCP router without manually configuring the desired ports. For example, as the name indicates, you can use the well-known service group called web-cache (service 0) for caching HTTP port 80 objects. In contrast, you must configure the ports of the user-defined service groups (services 90 through 97) on the CE; the CEs advertise the ports you configure to the router using WCCP signaling. Table 13-1 lists the available WCCP service groups.

Table 13-1. Available WCCP Service Groups

Service Group Name | Service Group Number | Description | Default Ports
Web-cache | 0 | HTTP protocol | 80
DNS | 53 | DNS protocol | 53
FTP-Native | 60 | FTP | 21
HTTPS-Native | 70 | HTTPS protocol | 443
RTSP | 80 | Real Time Streaming Protocol (RTSP) | 554
MMST | 81 | Microsoft's MMS protocol over TCP | 1755
MMSU | 82 | Microsoft's MMS protocol over User Datagram Protocol (UDP) | 1755
WMT-RTSPU | 83 | Microsoft's implementation of RTSP over UDP | 5005
CIFS-cache | 89 | Supports caching Common Internet File System (CIFS) traffic | 139 and 445
User-Defined | 90-97 | A custom protocol that you want to redirect | Up to eight ports per user-defined service
Custom-Web-Cache | 98 | Nonstandard ports for redirecting HTTP traffic (without using up one of your user-defined service groups) | Up to eight ports for the custom web cache service
Reverse-Proxy | 99 | The well-known reverse proxy protocol | 80


Note

The difference between the reverse-proxy service and the web-cache service is that the web-cache service hashes the destination IP address to determine which hash bucket to use when distributing client requests across multiple CEs, whereas the reverse-proxy service hashes the source IP address to select an available hash bucket. You will learn about hash buckets later in this chapter.


Other parameters that you can specify in the ip wccp command are

  • For outbound WCCP, the outbound-acl-check keyword enables the router to process any outbound ACLs before the WCCP outbound configuration classifies the packets.

  • The group-address keyword enables you to configure a multicast address that the WCCPv2 cluster routers and CEs communicate with.

  • The redirect-list keyword enables you to configure an ACL to control the traffic that the router will redirect within the service group that you configure.

  • The group-list keyword enables you to configure an ACL to restrict the CEs that the router can learn about.

  • The password keyword enables you to configure a Message Digest 5 (MD5) password that the WCCP routers use to authenticate messages received from the service group.
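For example, assuming a hypothetical access list 120 that matches your cacheable client traffic and a made-up password, you might combine several of these parameters when enabling the web-cache service:

 ip wccp web-cache redirect-list 120 password mywccppass

This single command enables the service group, limits redirection to traffic permitted by access list 120, and requires MD5 authentication of service group messages.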

To enable WCCP on your CE, you can specify the WCCP version with the wccp version command:

 wccp version 2 


Then you must specify the WCCP routers that the CE will communicate with, by using the command

 wccp router-list list-num IP-Address1 .. IP-AddressN 


Then, you associate the service group that you configured on your router with the command

 wccp {web-cache | service-number} router-list-num list-num [l2-redirect] [mask-assign]


If you configured a user-defined service group on your router, you must associate the ports on your CE that you want the WCCP router to redirect to the CE, with the command

 wccp port-list list-num port-num 


For example, to enable the standard web-cache service group on your CE, you can use the commands

 wccp version 2
 wccp router-list 1 10.1.20.2
 wccp web-cache router-list-num 1


Redirecting Traffic at Layer 2 and Layer 3 with Web Cache Control Protocol

Once you enable WCCP on your router with the ip wccp command, you can enable Layer 2 or Layer 3 traffic redirection on the router's interfaces. With Layer 2 redirection, the WCCP router rewrites the destination MAC address in the Ethernet frame to redirect traffic transparently to a device (that is, the CE) other than the one specified by the destination IP address in the IP packet header. Not all router platforms support Layer 2 redirection; the Catalyst 6500 Multilayer Switching Feature Card (MSFC)/Policy Feature Card (PFC) contains hardware acceleration for WCCP Layer 2 redirection. This feature is negotiated between directly connected Cisco Content Engines and the MSFC/PFC. The Ethernet frame is forwarded to the transparent cache, which in turn processes the request. To configure Layer 2 redirection, configure your CEs with the l2-redirect keyword in the wccp service-number router-list-num command that you learned previously; no configuration is required on the MSFC/PFC.

WCCPv2 uses Generic Routing Encapsulation (GRE) tunneling for Layer 3 redirection, which leaves the original IP packet intact. The WCCP router encapsulates the packet with an additional GRE header, with the WCCP router as the source IP address and the CE device as the destination IP address. WCCPv2 uses GRE so that the WCCP router can communicate directly with the CE over IP while retaining the client's original packets. This is beneficial when the CE is any number of Layer 3 hops away from the WCCP router.

Note

Layer 2 redirection provides a more efficient redirection mechanism by avoiding additional packet encapsulation and processing at Layer 3. However, your CEs must be visible at Layer 2 from the MSFC.


With Layer 3 redirection, the client requests content from an origin server by first sending a SYN segment to the origin server; the packet contains the client's source IP address and the origin server's destination IP address. In the case of redirecting web traffic, the WCCP router inspects the TCP header within the IP packet for port 80 in the destination port field and matches the web-cache service group. The WCCP router then chooses an available CE for the request and encapsulates the IP packet in a GRE packet, with its interface as the source address and the selected CE's IP address as the destination. When the CE receives the packet, it decapsulates the GRE header and responds directly to the client with a TCP SYN/ACK segment, spoofing the source address of the origin server. This way, the client is unaware that the TCP response was sent by the CE. The client then sends the TCP ACK segment to complete the TCP handshake, followed by the application layer content request, to the origin server; the WCCP router forwards both to the CE over the GRE tunnel. The CE then determines whether it holds the content in its cache and proceeds with the transaction.

Input Redirection Vs. Output Redirection

You can enable your WCCP router to redirect packets as they either arrive or leave a router interface, using the interface configuration command:

 ip wccp {web-cache | service-number} redirect {in | out}


You can configure either input or output redirection with the following commands:

  • Input Redirection With input redirection, the router matches incoming traffic against a WCCP service group that you configure and redirects the traffic immediately to the CE. The WCCP router does not perform a routing table lookup before matching traffic, even when using Layer 3 redirection. You must configure input redirection if you decide to use Layer 2 redirection.

    For example, to configure input redirection for the web-cache service on an interface, you can use the command

     ip wccp web-cache redirect in 

    Note

    WCCP classifies only the first packet of a flow and redirects the packet to the CE. All subsequent traffic of the flow is Cisco Express Forwarding (CEF)-switched to the CE.


  • Output Redirection With output redirection, the router routes traffic from its incoming interfaces to the outgoing interface that you configure for outgoing redirection, and then matches the traffic against its configured WCCP service groups. For example, to configure output redirection for the web-cache service on an interface, you can use the interface configuration command:

 ip wccp web-cache redirect out 


Output redirection is less efficient than input redirection because

- The router performs a CEF trie lookup to determine the next-hop address based on the destination IP address of the incoming traffic before WCCP classifies the packets. Input redirection with Layer 3 redirection avoids these additional CEF lookups, which can be beneficial in high-traffic environments.

- WCCP inspects traffic from all incoming interfaces that are routed to the outgoing interface. WCCP uses the fastest switching path that you configure (for example, CEF switching) for processing traffic on outgoing interfaces. However, WCCP still imposes unnecessary overhead on the switching path when applying policies to all your outgoing traffic, especially traffic from incoming interfaces that will never carry WCCP traffic. Fortunately, you can exclude an incoming interface's traffic from WCCP classification on outgoing interfaces by configuring ip wccp redirect exclude in on the incoming interface.

WCCP Load Distribution Using Hash Buckets

To scale your caching environment, you can install multiple CEs and distribute requests across them using WCCP. The WCCP router distributes the incoming requests using hash buckets or address mask assignments.

Recall the IP address hash load-balancing method that you learned about previously in Chapter 10, "Exploring Server Load Balancing." To select an available real server, the content switch computes a numeric hash value based on the source or destination IP address in the IP header of the request. The content switch then divides the hash value by the number of real servers N, and the remainder selects the real server to forward the request to. This mechanism works fine for SLB, where content is replicated across your real servers. However, you want to avoid replicating content across available CEs. For example, with simple destination IP address load balancing, when a cache fails, the hash values are divided by N-1. The new remainder often selects a cache different from the one that received previous requests for the same content, causing a sudden redistribution of content across all of the CEs. The redistribution of content across CEs results in major bursts of cache misses by your clients.
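To see why straight mod-N hashing is so disruptive, the following Python sketch (with made-up destination addresses and an arbitrary pool size of four CEs) counts how many destinations map to a different cache when one CE fails:

```python
# Illustrative only: shows why straight "hash mod N" redistribution is
# disruptive. The destination IP range and pool size are invented.
import hashlib

def pick_ce(dest_ip: str, num_ces: int) -> int:
    """Straight hashing: hash the destination IP, then take mod N."""
    digest = hashlib.md5(dest_ip.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_ces

ips = [f"10.1.{i}.{j}" for i in range(4) for j in range(1, 51)]  # 200 addresses

before = {ip: pick_ce(ip, 4) for ip in ips}  # four healthy CEs
after = {ip: pick_ce(ip, 3) for ip in ips}   # one CE fails: mod 3

moved = sum(1 for ip in ips if before[ip] != after[ip])
print(f"{moved} of {len(ips)} destinations now map to a different CE")
# Roughly three-quarters of destinations move, so most cached objects
# suddenly become cache misses.
```

Because only about one destination in four keeps its old cache after the divisor changes, nearly the whole content pool must redistribute, which is exactly the problem hash buckets solve.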

The solution to this problem is a series of data structures called hash buckets; you learned previously how CEF switching uses hash buckets in Chapter 3, "Introducing Switching, Routing, and Address Translation." When the first packet of a flow arrives at the WCCP router, the router computes a hash over various fields in the IP and TCP headers, producing a value within a predetermined range that is normally much larger than a reasonable pool of CEs. For illustration purposes, assume that the hash function produces a 3-bit value, giving a maximum of eight hash buckets. In actuality, WCCP uses 256 hash buckets; recall from Chapter 3 that the CEF load-sharing table uses 16 hash buckets (with a 4-bit hash function).

WCCP intelligently assigns the available CEs to the hash buckets based on CE load and availability; CEs with a lower load are assigned more buckets than CEs with higher loads. For example, say WCCP assigns the hash buckets according to Figure 13-1, and the hash function produces a value of 5 for the originating TCP SYN segment of a flow. Because bucket number 5 is assigned to CE 2, the router sends this and all subsequent packets of the flow to CE 2. This way, only CE 2 maintains a copy of the requested file, thus avoiding file duplication across the pool of caches.
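As a rough sketch of this idea (with invented CE names and weights, and Python's built-in hash standing in for WCCP's real hash function), load-weighted bucket assignment and lookup could look like this:

```python
# A simplified sketch of WCCP hash-bucket assignment. Real WCCP uses 256
# buckets and its own hash; the CE names and weights here are invented.
BUCKETS = 256

def build_bucket_table(weights: dict[str, int]) -> list[str]:
    """Assign buckets to CEs in proportion to weight (lower load = more buckets)."""
    total = sum(weights.values())
    table, assigned = [], 0
    ces = list(weights)
    for i, ce in enumerate(ces):
        # Last CE absorbs any rounding remainder so all 256 buckets are owned.
        share = BUCKETS * weights[ce] // total if i < len(ces) - 1 else BUCKETS - assigned
        table.extend([ce] * share)
        assigned += share
    return table

def pick_ce(table: list[str], src_ip: str, dst_ip: str) -> str:
    """Hash header fields into one of the buckets to select a CE."""
    bucket = hash((src_ip, dst_ip)) % BUCKETS
    return table[bucket]

table = build_bucket_table({"ce1": 2, "ce2": 1, "ce3": 1})  # ce1 lightly loaded
print(table.count("ce1"), table.count("ce2"), table.count("ce3"))  # 128 64 64
```

Within a run, the same flow always hashes to the same bucket, so all packets of a flow reach the same CE, which is what keeps each object cached in exactly one place.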

Figure 13-1. Using WCCP Hash Buckets


Note

The designated CE sends WCCP policy information for the selected service group to the WCCP router(s) on behalf of the cache cluster (WCCP elects the CE with the lowest IP address as the designated CE). The policy information includes the load-balancing method (hash buckets or mask assignment) and the mask/hash table. The designated CE distributes the buckets among the available CEs, based on the load of the CEs, before advertising the hash or mask bucket table to the WCCP router(s).


Figure 13-1 illustrates how WCCP uses hash buckets for CE assignments.

Recovering from a CE Failure

When a CE fails, WCCP reassigns the failed CE's buckets to the other available CEs; only the files from the failed CE are redistributed among the remaining CEs, as Figure 13-2 illustrates. All other CEs remain assigned to their respective hash buckets, thus avoiding the system-wide file redistribution that occurs when a CE fails under straight hashing. When a CE fails with hash buckets in use, only requests for the content held by the failed CE experience cache misses.
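The failure behavior can be sketched as follows; the CE names and the round-robin reassignment policy are invented for illustration (real WCCP redistributes buckets based on reported CE load):

```python
# Sketch of bucket reassignment after a CE failure: only the failed CE's
# buckets change owners, so only its content sees cache misses.
from itertools import cycle

def reassign(table: list[str], failed: str) -> list[str]:
    """Give the failed CE's buckets to the survivors, round-robin."""
    survivors = cycle(sorted(set(table) - {failed}))
    return [ce if ce != failed else next(survivors) for ce in table]

old = ["ce1"] * 128 + ["ce2"] * 64 + ["ce3"] * 64
new = reassign(old, "ce2")

changed = sum(1 for a, b in zip(old, new) if a != b)
print(changed)  # 64: only the failed CE's buckets moved
```

Contrast this with the mod-N example earlier in the chapter, where most bucket-to-cache mappings changed after a single failure.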

Figure 13-2. Recovering from a CE Failure Using Hash Buckets


Adding a New CE

Unlike with CE failures, when you add a new cache to the pool, the cached objects in the entire pool require redistribution among the available caches. The hash bucket assignments need to be adjusted to fit the new CE in the available buckets. For example, Figure 13-3 adds a fourth CE to the cluster. If WCCP assigns buckets 6 and 7 to the new CE, the other buckets need to be reorganized to ensure that the load is distributed across the CEs. Fortunately, in this example, bucket 5 is still assigned to CE 3, so existing files for the illustrated flow are not affected.

Figure 13-3. Inserting a New Cache


To help reduce the likelihood of a flood of cache misses, the new CE attempts to satisfy incoming requests by querying the other CEs in the cluster for the content before sending the request to the origin server to retrieve the original content. This technique is called cache healing. Cache healing enables a CE that receives a request for an object that was previously served by another CE, referred to as a "near miss," to query the other CEs for the file. You can enable healing mode on your CE by using the following command:

 http cluster {heal-port number | http-port number | max-delay seconds | misses number} 


By using the http cluster command, you can specify the port on which the CEs listen for content requests from other CEs (the default is 14333). You can also specify the actual port for the request by using the http-port keyword (the default is port 80). The max-delay keyword specifies the number of seconds that the healing client (that is, the newly added CE) waits before it sends the request to a healing server (the default is zero seconds). You can configure the total number of misses before the CE disables healing mode by using the misses keyword.

To ensure that existing flows to working CEs do not disconnect when adding a new cache to the cluster, you should enable WCCPv2 flow protection by using the following command on your CEs:

 wccp flow-redirect enable 


To ensure that a newly added CE is not overwhelmed with connections when it is first added to the cluster, you can enable WCCPv2 slow start with the command

 wccp slow-start enable 


Slow start ensures that the CE is assigned buckets in smaller increments during the CE's bootup process, enabling the CE to boot completely before it is assigned a full load.

WCCP Hot Spot Handling

When WCCP hashes requests for a frequently requested file or small group of files to the same bucket, the CE assigned to the bucket can become overloaded. Because WCCP version 1 assigns a single CE per bucket, it sends all requests for those files to the same CE. This situation is called a hot spot, and it can cause undesirable effects in a pool of CEs, such as CE overloading. WCCP version 2 automatically detects hot spots in the hash buckets and assigns more than one CE to the affected bucket, thus distributing the load across the available CEs.

WCCP CE Load Shedding

You may find that CEs sometimes become overloaded within a CE cluster, even with the WCCP hash bucket hot spot handling feature. If so, you can configure the CE overload bypass feature. With the bypass feature, when a CE becomes overloaded, it instructs the router to bypass the bucket that the router used to send the current request; the router then redirects the request to another CE. If the CE's load does not decrease when it receives the next request from the WCCP router (on a different bucket), the CE instructs the router to bypass that bucket as well, and so on, until the CE's load decreases. To configure overload bypass, use the command

 bypass load {enable | in-interval seconds | out-interval seconds | time-interval minutes}


The command bypass load enable enables the bypass feature. The command bypass load in-interval specifies the interval at which bypassed buckets come back online (the default is 60 seconds) after the CE's load starts to decrease. The command bypass load out-interval specifies the time that the CE waits before bypassing another bucket after the previous bypass (the default is 4 seconds). Finally, the command bypass load time-interval specifies the time that the CE waits before re-enabling a bypassed bucket.

WCCP Load Distribution Using Mask Assignment

Address mask assignment load distribution is similar to hash assignment, except that, instead of hashing the IP addresses or ports, WCCP applies a mask to the packet header fields to determine a value to use as an index into a "mask" table. WCCP applies the masks you specify by performing a bitwise AND of the mask with the source IP address, destination IP address, source TCP/UDP port, and destination TCP/UDP port fields. The result of the mask is an index into a table of 128 entries. The entries contain pointers to the available caches. You can configure the masks yourself on your caches, using a total of seven bits across the four fields mentioned previously. Table 13-2 gives the default values for the four available masks; the CE applies the value given in the Mask column to the field specified in the Field column.
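The following Python sketch illustrates the mechanics: AND the field with the mask, then gather the masked bits into a compact table index. The bit-gathering step is an assumption about how the index is formed; the mask value is the default destination IP mask from Table 13-2:

```python
# Sketch of WCCP mask assignment: AND the header field with the mask, then
# compact the surviving bits into a small index. How the real implementation
# packs the bits is an assumption here.
def mask_index(value: int, mask: int) -> int:
    """Gather the bits of `value` selected by `mask` into a compact index."""
    index, out_bit = 0, 0
    for bit in range(32):
        if mask >> bit & 1:
            index |= (value >> bit & 1) << out_bit
            out_bit += 1
    return index

DST_MASK = 0x1741  # default destination IP mask: six bits set => 64 entries

def ip_to_int(ip: str) -> int:
    a, b, c, d = (int(x) for x in ip.split("."))
    return a << 24 | b << 16 | c << 8 | d

idx = mask_index(ip_to_int("192.168.10.1"), DST_MASK)
print(idx)  # 9 -- one of the 64 possible table entries
```

Every destination address that shares the same six masked bits lands on the same entry, so the mask controls how addresses spread across the caches.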

Table 13-2. Default WCCP Version 2 Masks

Field | Mask (in Hex) | Mask (in Binary)
Source IP address | 0000 | 0000000000000000
Destination IP address | 1741 | 0001011101000001
Source port | 0000 | 0000000000000000
Destination port | 0000 | 0000000000000000


To enable mask assignment, you must specify the mask-assign keyword in the wccp service-number router-list-num command that you learned about previously. Notice that the default masks in Table 13-2 use only six of the possible seven bits (all on the destination IP address field), resulting in only 64 possible entries. To manually adjust the mask values on your CE, you can use the command

 wccp service-number mask {[dst-ip-mask hex_num] [dst-port-mask port_hex_num] [src-ip-mask hex_num] [src-port-mask port_hex_num]} 


Table 13-3 gives alternate masks across three of the four fields, using all seven available bits in total, thus giving 128 possible values.

Table 13-3. Sample WCCP Version 2 Masks

Field | Mask (in Hex) | Mask (in Binary)
Source IP address | 2480 | 0010010010000000
Destination IP address | 0208 | 0000001000001000
Source port | 0003 | 0000000000000011
Destination port | 0000 | 0000000000000000


To configure the values in Table 13-3 for the web-cache service group, you can use the command

 wccp web-cache mask src-ip-mask 0x2480 dst-ip-mask 0x0208 src-port-mask 0x0003 


For illustration purposes, Figure 13-4 masks the destination IP address using 0x0460 (0b0000010001100000), which has three bits set and therefore yields eight possible mask table entries. Figure 13-4 shows the resulting mask assignment for a sample request to the destination IP address 192.168.10.1: the masked bits collapse to the index 0b101. Using 0b101 (that is, 5 in decimal) as an index into the table results in selecting CE 4.

Figure 13-4. Sample Mask Bucket Assignment


You can apply the same rules to mask buckets that you learned previously for hash buckets; that is, how buckets deal with failed caches, bucket overload, and the addition of a new cache. Refer to the sections "Recovering from a CE Failure," "Adding a New CE," "WCCP Hot Spot Handling," and "WCCP CE Load Shedding" for more information.

Note

Like Layer 2 redirection, the mask assignment method requires special hardware and is currently supported on the Catalyst 6500 MSFC2/PFC2 and above.


Layer 4-7 Content Switch Redirection

Content switches can provide robust redirection capabilities using Layer 4-7 redirection. Because content switches perform delayed binding, they can inspect the HTTP headers to obtain load-balancing criteria. This enables you to use the following features:

  • You can use HTTP header load balancing to match cache virtual servers and manually configure certain types of files for redirection.

  • The content switch can hash URLs to ensure that requests for the same file are forwarded to the same caches.

  • Intelligent in-band health probes and real-time load calculations are available for you to configure.

  • Content switches do not redirect content that is marked as non-cacheable ("no-cache") in the HTTP "Cache-Control:" header. The content switch instead forwards the request directly to the origin server.

With content switch redirection, you create virtual servers for traffic flowing in the direction of the cacheable requests. You can then create policies within the virtual servers to match the types of traffic you want cached; you should use Extension Qualifier Lists (EQL) to specify the file types to be cached. All other types of flows in the same direction traverse the content switch per normal routing policies.

As with WCCP, content switches can perform redirection that is transparent to your clients. That is, clients send requests directly to the origin server, but when the content switch matches a request to a virtual server for a CE farm, the content switch can forward the request to the selected cache, using either NAT- or dispatch-mode forwarding. The cache processes the request and initiates a connection to the origin server on a cache-miss using its IP address as the source of the request, or by spoofing the client's IP address.

Note

Later you will learn how to configure content switch redirection in the "Request Redirection Topologies" section of this chapter.


Content Switch Load Distribution

You can use one of the following CSS-balancing methods to distribute requests across your pool of CEs:

  • Source IP address hashing Sticks a client to the same CE for every request from the client. Source IP address hashing may cause duplication of content across your pool of CEs because, if two different sources request the same content, the CSS directs them to different CEs based on their source IP address. Use the content rule command balance srcip to configure source IP address hashing.

  • Destination IP address hashing Directs all client requests for the same destination IP address to the same CE. Destination IP address hashing is useful in forward caching environments and in some reverse caching environments in which a large number of servers are being cached. This method ensures that the CSS does not replicate content across caches. Use the content rule command balance destip to configure destination IP address hashing.

  • Domain or URL balancing With these methods of balancing, the CSS divides the 26 letters of the alphabet evenly between the caches. With domain balancing, the CSS uses the first four letters after the first dot (".") of the domain in the "Host:" header of the GET request to determine which CE to select. To configure domain balancing, use the balance domain content rule command.

    With URL balancing, the CSS takes the first four letters in the URL after the string that you configure in your content rule (for example, if you configure "/support/", the CSS uses the first four letters immediately after the last "/"). To configure URL balancing, use the balance url content rule command. Both of these methods may cause duplication of content across your pool of caches.

  • Domain or URL Hashing With these methods of balancing, the CSS uses an exclusive-OR (XOR) hash function to reduce the domain name or URL into an index to identify an available cache. To configure URL hashing, use the balance urlhash content rule command. To configure domain hashing, use the balance domainhash content rule command.
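The CSS's actual domain- and URL-hash implementation is internal to the product; the following Python sketch (with invented cache names) merely illustrates the general idea of XOR-folding a name down to an index into a pool of caches:

```python
# Illustrative XOR-fold hash for domain/URL balancing. This is NOT the
# CSS's real algorithm, just a demonstration of XOR reduction.
def xor_hash(text: str, pool_size: int) -> int:
    """XOR all bytes of the name together, then map onto the cache pool."""
    acc = 0
    for byte in text.encode():
        acc ^= byte
    return acc % pool_size

caches = ["ce1", "ce2", "ce3"]
for host in ("www.cisco.com", "www.example.com"):
    print(host, "->", caches[xor_hash(host, len(caches))])
```

Because the hash depends only on the name, every client requesting the same domain or URL reaches the same cache, which avoids content duplication across the pool.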

Recall from Chapter 10 that the CSS distributes all GET requests within an HTTP-persistent TCP connection to the same server (unless the CSS matches the request to a different virtual server that does not contain the originally selected real server). To ensure that the CSS distributes the requests evenly across the caches, you should disable persistence (using the no persistence content rule command) and enable request rebalancing (using the persistent reset remap global command).

Note

You can also use the traditional load balancing methods that you learned about in Chapter 10, such as least connections, weighted round robin, and response times, for CE load distribution.


Adding and Removing CEs When Using CSS Redirection

A drawback to CSS redirection is that, when you add a new CE manually or when the CSS adds a recovered CE to the pool, the CSS must redistribute content across the pool of CEs. To circumvent content redistribution when a failed cache recovers and comes back online, you can configure your CSS to direct all subsequent requests destined to the failed CE to the origin server instead. Once the CE comes back online, the CSS will resume sending requests to the CE. To configure the CSS to bypass a failed CE, you can use the failover bypass content rule command.

With this command, if a client establishes a new TCP connection to the CSS and sends a request for a file residing on the failed CE, the CSS redirects the client to the origin server instead. When the CE recovers, the CSS directs new clients to the recovered CE. However, in the case of existing clients that are using HTTP-persistent TCP connections, if the CE later recovers and the client sends a subsequent request to the virtual server containing the recovered CE, the CSS bypasses the entire virtual server. This behavior is called bypass persistence. To ensure that persistent connections use virtual servers containing the recovered cache, you can disable bypass persistence with the bypass persistence disable command. If you disable bypass persistence, you must direct the CSS to either redirect or remap the persistent connection using the persistent reset global command. These commands ensure that the CSS resumes sending requests within an HTTP-persistent TCP connection to virtual servers containing the recovered CE, including the recovered CE itself.

As an alternative to configuring failover bypass, you can configure your CSS to distribute subsequent requests that were originally destined for the failed CE to the next configured CE in your content rule, using the failover next command. You can also direct the CSS to distribute requests for files residing on the failed CE across the remaining CEs, using the failover linear content rule command. If you do not configure a failover mechanism with the failover command, the content switch redistributes all content across your pool of CEs; in that case, you should consider configuring CE healing on your CEs to avoid content redistribution.

Request Redirection Topologies

In this chapter, you learn how to configure your Cisco CEs to add value to networked applications. To do so, you must first learn about the three common caching topologies: proxy, forward, and reverse caching.

Proxy Caching

You can configure proxy caching by placing a CE in close proximity to your clients and explicitly configuring your clients' web browsers or media players to send content requests directly to the CE for all external access. Under normal circumstances, the client would formulate a content request with the IP packet's destination IP address set to the IP address of the origin server. With a proxy cache, however, the browser or media player sends the IP packet with the proxy cache's IP address as the destination IP address. The client establishes a TCP connection directly to the proxy cache and sends the application request over that connection.

When the proxy cache receives the request, it processes the request and responds directly to the client. If the cache has a copy of the requested object (known as a cache-hit), the cache responds directly to the client with the requested object. Otherwise (a cache-miss), the proxy cache generates an identical request, but with its own IP address as the source of the packet, and sends the request to the origin server. A proxy cache requires your browser or media player to include the requested host domain name either in the URI field or in an HTTP "Host:" header of the client's HTTP GET request. Using DNS, the proxy cache resolves the domain name for use as the destination IP address of the packets sent to the origin server.
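The difference between a direct request and a proxied request can be sketched as follows; example.com is a placeholder origin server:

```python
# Sketch of how a browser's request changes when a proxy cache is configured:
# the request line carries the absolute URI, and the Host header names the
# origin server so the proxy can resolve it with DNS.
def origin_request(host: str, path: str) -> bytes:
    """Request as sent directly to the origin server."""
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()

def proxy_request(host: str, path: str) -> bytes:
    """Request as sent to an explicitly configured proxy cache."""
    return f"GET http://{host}{path} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode()

print(proxy_request("example.com", "/index.html").decode().splitlines()[0])
# GET http://example.com/index.html HTTP/1.1
```

Either form gives the proxy the origin server's domain name, which is all it needs to resolve the destination address on a cache-miss.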

When the proxy receives the origin server's response, it caches a copy of the object before sending it to the client, unless the "Cache-Control:" header prevents the object from being cached. As you learned in Chapter 8, "Exploring the Application Layer," the values "no-store" and "no-cache" in the "Cache-Control:" header prevent the cache from storing the object. Whether the request is a cache-hit, cache-miss, or near-miss, from the browser or media player's perspective the object appears to come directly from the proxy cache, which is where the client sent its request in the first place. From the end user's perspective, the transaction appears to occur directly with the origin server.

Because the client's browser or media player addresses its requests with the proxy's IP address, you do not need to configure network redirection on your routers or content switches; the intermediary routers simply route the packets directly to the proxy cache per normal routing policies. However, to create a redundant array of proxy caches, you require a content switch. You can create a virtual server containing the VIP to which you point clients' browsers and media players; the content switch then load balances the proxy cache requests across the various caches. Figure 13-5 illustrates forward proxy caching.

Figure 13-5. Forward Proxy Caching


To configure your CSS to load balance client requests among multiple proxy caches, you can use the configuration in Example 13-1. You cannot use WCCP for proxy cache load balancing.

Example 13-1. CSS Forward Proxy Load Balancing

 service proxy-1
   ip address 10.1.10.10
   type proxy-cache
   active

 service proxy-2
   ip address 10.1.10.11
   type proxy-cache
   active

 eql static-files
   extension gif
   extension jpg
   extension jpeg
   extension asf
   extension rm
   extension qt
   extension mp4
   extension html
   extension htm

 owner cisco
   content proxy-vip
     vip address 10.1.10.100
     url "/*" eql static-files
     protocol tcp
     port 80
     balance domainhash
     failover bypass
     add service proxy-1
     add service proxy-2

To enable forward proxy caching, you need to assign your proxy caches as type proxy-cache. This command tells the CSS to destination-NAT client requests to the proxy cache's IP addresses. It also prevents the CSS from matching requests originated from the proxy caches against configured virtual servers, to avoid loops between the CSS and proxy caches.

Note

The EQL in Example 13-1 is not exhaustive. Make sure that you understand what file types you should redirect to your proxy cache before enabling redirection on your CSS.


Because you must explicitly configure each client's browser with the proxy IP address, you can use proxy caching only in environments, such as an enterprise, in which you have administrative access to the clients that are requesting the content. As an alternative to manual proxy configuration, you can use dynamic proxy auto-configuration (PAC) to automate the proxy setting changes on your clients' browsers or media players. With dynamic PAC, the user's browser is configured with the URL of a ".pac" file that contains the proxy information. To change all your users' proxy settings, you need only change this single file, not every browser or media player in your organization. You will learn more about how to configure PAC files in Chapter 14.

Note

The Cisco CE caches FTP, HTTP, HTTPS, MMS, and RTSP in proxy mode by default; you do not need to configure any special settings to cache these protocols on their standard ports.


Transparent Caching

As with proxy caches, you place transparent CEs in close proximity to the client. However, as the name indicates, transparent caching is transparent to the client program because you do not need to configure client browsers or media players with the proxy IP address. When a client sends a request, the destination of the packet remains the origin server, but the transparent cache is programmed to accept the request nonetheless. If the object is available in the transparent CE's cache file system (CFS), the cache spoofs the origin server's IP address when sending the response back to the client. If the object is not available in the cache, the transparent cache opens a separate TCP connection to the origin server, with its own IP address as the source, and sends the request unmodified. Figure 13-6 illustrates forward transparent caching.

Figure 13-6. Forward Transparent Caching


Note

Transparent caching is also known as forward caching and forward transparent caching.


When you configure forward transparent caching using the web-cache service, the WCCP router hashes the destination IP address to select a hash bucket, which determines the CE to which the request is forwarded. Because the destination IP address space of the Internet is much larger than the source IP address space used within your organization, hashing on the destination IP address lets WCCP distribute content more evenly across the available CEs.
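The bucket-selection idea can be sketched as follows. This is an illustrative hash, not WCCP's actual algorithm; the fold-and-modulo function and the CE names are assumptions, but the structure (hash the destination address into one of 256 buckets, each assigned to a CE) mirrors the mechanism described above.

```python
# Illustrative sketch of destination-IP hash-bucket selection: the router
# maps each destination address to one of 256 buckets, and each bucket is
# assigned to a CE. (Not the real WCCP hash.)
import ipaddress

NUM_BUCKETS = 256

def bucket_for(dest_ip):
    """Fold the 32-bit destination address into a bucket index 0..255."""
    addr = int(ipaddress.IPv4Address(dest_ip))
    return (addr ^ (addr >> 8) ^ (addr >> 16) ^ (addr >> 24)) % NUM_BUCKETS

def assign_buckets(ces):
    """Divide the 256 buckets roughly evenly among the available CEs."""
    return {b: ces[b % len(ces)] for b in range(NUM_BUCKETS)}

buckets = assign_buckets(["ce-1", "ce-2"])
chosen_ce = buckets[bucket_for("198.51.100.7")]
```

Because requests for the same origin server always hash to the same bucket, each CE accumulates a distinct slice of the destination address space, which is what produces the even content distribution.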

To configure your CSS to redirect requests to a farm of transparent caches, you can use the configuration you learned previously in Example 13-1, with the exception of assigning the caches' services as type transparent-cache, which indicates to the CSS that it should not destination-NAT the client requests to the cache's IP address. This command also prevents the CSS from matching requests originated by the caches against configured virtual servers, to avoid loops between the CSS and the caches.

Reverse Transparent Caching

With reverse caching, you locate your CE(s) in close proximity to your origin servers, as opposed to proxy and transparent caching, in which you locate the CE in close proximity to your clients. Figure 13-7 illustrates reverse transparent caching.

Figure 13-7. Reverse Transparent Caching


When retrieving content from an origin server, you can configure the CE to use as the source either its own IP address or the IP address of the requesting client; Figure 13-7 illustrates how the CE spoofs the client's IP address to source the connection. You should configure spoofing on your caches with reverse caching so that your origin server can use the client's source IP address for auditing and logging purposes. To configure the CE to spoof the client's IP address, use the command wccp spoof-client-ip enable on your CE. If you use this command on the CE, you must also configure the WCCP router to redirect the return packets from the origin server to the CE, in addition to redirecting packets from the client to the CE, as Figure 13-7 and Example 13-2 illustrate.

Example 13-2. WCCP Reverse Proxy Load Balancing with Client IP Spoofing

 ! WCCP on the Router
 ip wccp 99
 ip wccp 95
 interface fastethernet 0/0
   ip address 192.168.10.2 255.255.255.0
   ip wccp 99 redirect in
 interface fastethernet 0/1
   ip address 10.1.20.1 255.255.255.0
   ip wccp 95 redirect in
 interface fastethernet 0/2
   ip address 10.1.10.1 255.255.255.0
   ip wccp redirect exclude in

 ! WCCP on the CE
 wccp version 2
 wccp router-list 1 10.1.10.1
 wccp port-list 1 80
 wccp service-number 95 router-list-num 1 port-list-num 1 application cache hash-destination-ip match-source-port
 wccp reverse-proxy router-list-num 1
 wccp spoof-client-ip enable

As you learned previously, by default the WCCP router inspects the destination port of packets for the service group port (for example, port 80 for HTTP traffic). However, with IP spoofing, you must tell the WCCP router to match the origin server's return packets on the source port, not the destination port. For example, when the CE receives the client's HTTP request, it establishes a TCP connection to the origin server using the client's IP address by sending a TCP SYN to the server. The server receives the TCP SYN and responds with a TCP SYN-ACK packet to the client, with port 80 as the source port. When the WCCP router receives the server's TCP SYN-ACK, it must inspect the source port to check for the port that you configured in service group 95 (that is, port 80 in this example). When the router receives return packets with source port 80, it redirects them to the CE. To instruct the WCCP router to inspect the source port instead of the destination port, use the match-source-port option, as Example 13-2 illustrates.

Once the router matches traffic against the service group, it performs a hash to determine the hash bucket for the request. Because the reverse-proxy service hashes incoming requests on the source IP address (that is, the client's IP address), you must configure the WCCP router to hash the server's responses on the destination IP address. You should configure this hash argument swap to make sure that the WCCP router sends the server's response back to the same CE as the client's original request. To do this, you must create another service group on the CE (that is, service group 95) and assign the destination address as the hash argument with the hash-destination-ip keyword.
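The hash-argument swap described above works because both directions end up hashing the same value: the client's IP address is the packet's source on the forward path and its destination on the return path. A minimal sketch, with an assumed stand-in hash rather than WCCP's real one:

```python
# Sketch of why the hash-argument swap sends both directions of a flow to
# the same CE: the forward path hashes the source IP (the client) and the
# return path hashes the destination IP, which is again the client.
import ipaddress

def ip_hash(ip):
    # Illustrative stand-in for WCCP's bucket hash.
    return int(ipaddress.IPv4Address(ip)) % 256

def bucket_forward(src_ip, dst_ip):
    return ip_hash(src_ip)   # reverse-proxy service: hash on source IP

def bucket_return(src_ip, dst_ip):
    return ip_hash(dst_ip)   # service 95 with hash-destination-ip

client, server = "192.168.10.50", "10.1.20.10"
# Client -> server packet vs. server -> client packet select the same bucket.
assert bucket_forward(client, server) == bucket_return(server, client)
```

If the return path hashed on the source IP instead (the server's address), the server's responses could land on a different CE than the one holding the client's connection state, which is exactly the failure the swap prevents.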

Additionally, to avoid the spoofed packets originating from the CE from being classified and redirected by the WCCP router, you must configure the router interface with ip wccp redirect exclude in.

To use a CSS instead of WCCP for reverse transparent caching, you can use the configuration in Example 13-3.

Example 13-3. CSS Reverse Proxy Load Balancing Without Client IP Spoofing

 service proxy-1
   ip address 10.1.10.10
   type transparent-cache
   no cache-bypass
   active

 service proxy-2
   ip address 10.1.10.11
   type transparent-cache
   no cache-bypass
   active

 service web01
   ip address 10.1.20.10
   active

 service web02
   ip address 10.1.20.11
   active

 eql static-files
   extension gif
   extension jpg
   extension jpeg
   extension asf
   extension rm
   extension qt
   extension mp4
   extension html
   extension htm

 owner cisco
   content cache-miss-vip
     vip address 10.1.10.101
     protocol tcp
     port 80
     add service web01
     add service web02

   content transparent-vip
     vip address 10.1.10.100
     url "/*" eql static-files
     protocol tcp
     port 80
     balance srcip
     failover bypass
     add service proxy-1
     add service proxy-2

   content web-vip
     vip address 10.1.10.100
     protocol tcp
     port 80
     add service web01
     add service web02

To use your CSS for redirection, you need to include the command http l4-switch enable on your CE. Additionally, in the event of a cache-miss, you must use the http proxy outgoing host command to direct all cache-miss traffic to a new VIP on the CSS. For example, the following command directs the CE to forward all requests for files that it cannot serve from its cache to the IP address 10.1.10.101:

 http proxy outgoing host 10.1.10.101 


Example 13-3 shows a new virtual server called "cache-miss-vip" that contains this VIP. This new virtual server in turn directs all cache misses to your web server farm. Under normal circumstances, the CSS will not match any requests originating from CEs against its configured virtual servers. However, in a reverse proxy configuration, you must configure the command no cache-bypass on your services so that the CSS allows CEs to originate cache-miss requests to the CSS.

For all client requests that do not match the EQL in virtual server "transparent-vip," the CSS matches the virtual server called "web-vip." This virtual server simply forwards the client's requests to your server farm for processing.

Ensuring Content Freshness

Recall from Chapter 8 that HTTP uses the conditional header "If-Modified-Since:" (IMS) to determine whether the content residing on the origin server has changed since the content was originally obtained. On a cache-miss, the CE requests the object from the origin server. The response from the origin server includes a timestamp indicating when the origin server issued the fresh piece of content, which the CE stores along with the object in its cache file system. On subsequent hits to the object, the CE populates the "If-Modified-Since:" header with this timestamp. If the content has been modified on the origin server after the content was loaded on the CE, the origin server sends an "HTTP 200 OK" response to the CE that includes the latest copy of the requested object. Upon receiving this response, the CE stores a new timestamp for the cached object and forwards the object to the client. If, on the other hand, the origin server determines that the requested object has not been modified, it responds with "HTTP 304 Not Modified" to the CE. To ensure that the "If-Modified-Since:" header is accurate, when possible, make sure that your CE and origin server clocks are synchronized.
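The IMS exchange above can be sketched as follows. This is a simplified model, not the CE's actual code; the timestamps are plain integers and the origin server is reduced to a single comparison, but the 200-versus-304 logic matches the exchange described in the text.

```python
# Minimal sketch of If-Modified-Since revalidation between a CE and an
# origin server, using integer timestamps for clarity.

def origin_server(last_modified, ims_timestamp):
    """Return 304 when the object is unchanged, else 200 with a fresh copy."""
    if ims_timestamp is not None and last_modified <= ims_timestamp:
        return 304, None
    return 200, "<fresh object>"

def revalidate(cache_entry):
    """CE side: send a conditional GET using the stored timestamp."""
    status, body = origin_server(cache_entry["origin_mtime"],
                                 cache_entry["stored_at"])
    if status == 200:                 # object changed: refresh the cached copy
        cache_entry["body"] = body
        cache_entry["stored_at"] = cache_entry["origin_mtime"]
    return status                     # 304 means the cached copy is still fresh
```

The sketch also shows why clock skew matters: if the CE's stored timestamp runs ahead of the origin server's modification time, the comparison can return 304 for content that has actually changed.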

To enable HTTP request revalidation, use the command

 http reval-each-request {all | none | text} 


Preloading Content

CEs populate themselves on demand by storing copies of content inline with clients' requests. For noncached objects, the client must wait for the content to arrive from the origin server; the CE serves all subsequent requests for the object from its cache. To avoid having your clients wait for downloads from external HTTP or FTP sites or Windows Media Technology (WMT) media servers, you can preload your CE with content. Note that you cannot preload RealMedia files on your CE.

To preload content to your CE, you can create a preload URL list containing the URLs that the CE will traverse. Upon traversal, the CE caches the files for subsequent client requests. A sample URL list is given in Example 13-4.

Example 13-4. Sample Preload URL List

 http://www.cisco.com 3
 http://www.cnn.com 2
 mms://10.1.10.2 1
 ftp://ftp.cisco.com 4

Each entry includes a depth value, indicating how many levels of URLs the CE recursively visits within the main URL. The default depth is 3. You can store this URL list on an HTTP, FTP, or HTTPS server, or on the CE's local disk (called "local1"), and configure the CE to retrieve the list using the command

 pre-load url-list-file path 

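The URL-list format shown in Example 13-4 is simple enough to sketch as a parser: one URL per line, optionally followed by a recursion depth. This is an illustration of the format, assuming the depth defaults to 3 when omitted as stated above; it is not CE code.

```python
# Sketch of parsing a preload URL list: "URL [depth]" per line, with an
# assumed default depth of 3 when no depth is given.

DEFAULT_DEPTH = 3

def parse_url_list(text):
    entries = []
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue                      # skip blank lines
        url = parts[0]
        depth = int(parts[1]) if len(parts) > 1 else DEFAULT_DEPTH
        entries.append((url, depth))
    return entries
```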

To enable preloading on your CE, you can use the command

 preload enable 


You can also limit the number of concurrent connections that the CE will issue when traversing the URLs in the list, by using the command

 pre-load concurrent-requests num-requests 


To mark the Type of Service (ToS) or Differentiated Services Code Point (DSCP) values for preloaded packets that the CE serves, use the command

 pre-load dscp [set-tos | set-dscp] value 


Transparently Delivering Authenticated Content

Internet applications often require authentication of users before supplying private content to the requesting client. To ensure that authentication between origin servers and clients takes place, the CE performs end-to-end authentication that is transparent to the client and origin server. Once the user authenticates with the external site, the CE can optionally cache the object and the user's credentials for the object, but only if the object is authenticated using HTTP Basic authentication. In contrast, the CE does not cache Kerberos, Windows NT LAN Manager (NTLM), or Digest-protected objects, because these methods protect the user's credentials with a one-time nonce value and therefore cannot be verified more than once; the CE simply forwards the authentication information and objects transparently between the client and origin server. Figure 13-8 illustrates end-to-end authentication.

Figure 13-8. End-to-End HTTP Authentication


Consider the example that is shown in Figure 13-8 of a client requesting authenticated content from an origin server:

  1. The client sends an HTTP request to an origin server.

  2. The CE intercepts the request and determines that the requested object is not available in its cache file system. The CE then creates another connection to the origin server over which it sends the client's GET request.

  3. The origin server challenges the CE for Basic authentication.

  4. The CE then challenges the client for Basic authentication.

  5. The web browser or media player prompts the client for credentials in a pop-up window and sends them to the CE. The CE stores the credentials for future requests.

    Note

    If your clients were located at a branch office and the origin server was located at your central office, the origin server might challenge the user for NTLM or Kerberos authentication. In this case, the browser would automatically provide the user's Windows login credentials to the CE.


  6. The CE responds to the origin server with the user's credentials.

  7. The origin server responds to the CE with the requested object along with a timestamp for the object.

  8. The CE stores the requested object, along with the user's credentials, in its cache file system. The CE then responds to the client with the requested object.

On future requests for the Basic authenticated object, the CE sends a conditional IMS request to the origin server containing the timestamp and the user's locally cached credentials, without challenging the client again. If neither the object nor credentials have changed, the origin server sends the message "HTTP 304 Not Modified" to the CE. If different users request the object, the process described previously is repeated. For NTLM- or Kerberos-authenticated objects, only the object is cached, and the user is rechallenged for credentials before the cached object is provided to the requesting client.
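The reason Basic authentication permits credential caching, while nonce-based schemes do not, can be sketched briefly. The "Authorization:" header for Basic is a static base64 string, so the CE can store it with the object and recognize a repeat request from the same user by simple comparison. The helper names and the dictionary cache below are assumptions for illustration, not CE internals.

```python
# Sketch of Basic-auth credential caching: the same credentials always
# yield the same Authorization header, so the CE can compare headers to
# recognize a repeat request. Nonce-based schemes (NTLM, Kerberos, Digest)
# produce a different value each time, so no such comparison is possible.
import base64

def basic_header(user, password):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def serve(cache, url, auth_header):
    entry = cache.get(url)
    if entry is not None and entry["auth"] == auth_header:
        return "cache-hit"        # known user and object: revalidate with IMS
    return "forward-to-origin"    # new user or object: origin must authenticate
```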



Content Networking Fundamentals
ISBN: 1587052407