Content Caches and Content Engines


A content cache transparently caches, or stores, content that is frequently accessed so that it can be retrieved from the cache rather than from a distant server. A content engine can extend this caching functionality by interacting with a content distribution and management device, and optionally content routers, to store selected content and retrieve it on request, as part of a CDN.

Note

The type of software running on Cisco content engines can determine the features supported by the device. For example, the Cisco 7320 content engine is available with a choice of cache software (providing only transparent caching), CDN software, or Application and Content Networking System (ACNS) software.[1] The ACNS software combines the caching and CDN functionality.

Content engine functionality is also available on modules that fit into the Cisco modular routers.

Some content-engine hardware that runs ACNS software can be configured with a choice of personalities: as a content engine, a content router, or a Content Distribution Manager.[2] In fact, Cisco stand-alone Content Distribution Managers have been phased out in favor of the ACNS-enabled content engine.

(Note that a device can have only one personality at a time; it cannot perform multiple functions simultaneously.)


Caching is best suited for data that does not change often, such as static application data and web objects, rather than entire web pages, which might include frequently changing objects.

Key Point

When not used with a content router, a content engine can be deployed in a network in three ways: transparent caching, nontransparent caching (also called proxy caching), and reverse proxy caching.


Transparent caching, nontransparent caching, and reverse proxy caching are described in the following sections. The use of a content engine with a content router is described in the "Content Routing" section, later in this chapter.

Content engines can also be configured to preload specific content from an origin web server that stores the primary content, and to periodically verify that the content is still current, or update any content that has changed. This is described in the "Content Distribution and Management" section, later in this chapter.

Transparent Caching

Key Point

A network that uses transparent caching includes a content engine and a Web Cache Communication Protocol (WCCP) enabled router. WCCP is part of the Cisco Internetwork Operating System (IOS) router software (available in some IOS feature sets) and is the communication mechanism between the router and the stand-alone content engine.


Transparent caching is illustrated in Figure 8-1.

Figure 8-1. With Transparent Caching, a WCCP-Enabled Router Passes Users' Requests to the Content Engine


Note

The WCCP-enabled router in this scenario is not a content router; it is simply an IOS router with WCCP functionality. Refer to the Feature Navigator tool at http://www.cisco.com/go/fn to determine the feature set required to support WCCP for various IOS platforms.
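The WCCP redirection described in this note can be enabled with a router configuration along the following lines. This is an illustrative sketch only; the interface name is hypothetical, and the exact commands available vary by IOS version and feature set.

```
! Enable the standard WCCP web-cache service on the router.
ip wccp web-cache
!
! Redirect incoming HTTP requests on the user-facing interface
! (FastEthernet0/0 is a hypothetical interface name).
interface FastEthernet0/0
 ip wccp web-cache redirect in
```

Requests arriving on the configured interface that match the web-cache service are redirected to the content engine; all other traffic is routed normally.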


Transparent caching operates as follows:

Step 1.

In Figure 8-1, the user at workstation A requests a web page that resides on the web server. This request is received first by the WCCP-enabled router.

Step 2.

The router analyzes the request and, if it meets configured criteria, forwards it to the content engine. For example, the router can be configured to send specific Transmission Control Protocol (TCP) port requests to the content engine, while not redirecting other requests.

Step 3.

If the content engine does not have the requested page, it sends the request to the server.

Step 4.

The server responds to the content engine with the requested data.

Step 5.

The content engine forwards the web page to the user and then caches it for future use.

At Step 3, if the content engine did have the requested web page cached, it would send the page to the user. After the content engine has the content, any subsequent requests for the same web page are satisfied by the content engine, and the web server itself is not involved.
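The decision the content engine makes in these steps amounts to a simple cache lookup. The following is an illustrative sketch in Python, not Cisco software; handle_request and fetch_from_origin are made-up names standing in for the content engine's internal behavior.

```python
cache = {}  # URL -> content previously retrieved from the origin server

def fetch_from_origin(url):
    # Stand-in for the request the content engine sends to the web
    # server when it has no cached copy (Steps 3 and 4).
    return f"<page for {url}>"

def handle_request(url):
    # Cache hit: return the page without involving the web server.
    if url in cache:
        return cache[url]
    # Cache miss: retrieve the page from the origin server, cache it
    # for future requests (Step 5), and return it to the user.
    content = fetch_from_origin(url)
    cache[url] = content
    return content
```

The first request for a URL populates the cache; every subsequent request for that URL is answered locally.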

Transparent caching can also be deployed using a Layer 4 content switch instead of a WCCP-enabled router. In this case, the switch transparently intercepts and redirects content requests to the content engine.

The benefits of transparent caching include a faster response time for user requests and reduced bandwidth requirements and usage. User workstations are not aware of the caching and therefore do not have to be configured with information about the content engine. Content engines in transparent mode are typically positioned on the user side of an Internet or WAN connection.

Nontransparent Caching

Key Point

Nontransparent caching, as its name implies, is visible to end users. As such, workstations must be configured to know the address of the content engine; the content engine acts as a proxy.


Note

A proxy acts on behalf of something else (for example, a proxy vote is one that you give to someone else so that she can vote on your behalf). In networking, a proxy server (also sometimes referred to as simply a proxy) is a server that accepts clients' requests on behalf of other servers. If the proxy has the desired content, it sends it to the client; otherwise, the proxy forwards the request to the appropriate server. Thus, a proxy server acts as both a client (to the servers to which it connects) and a server (to the client that is requesting the content).


Nontransparent caching is illustrated in Figure 8-2.

Figure 8-2. With Nontransparent Caching, the Content Engine Acts as a Proxy Server


This scenario operates as follows:

Step 1.

In Figure 8-2, the browser on workstation A is configured with the content engine as its proxy. The user at this workstation requests a web page that resides on the web server. This request is therefore sent to the content engine.

Step 2.

Assuming that the content engine has been configured to handle the protocol and port number in the received request, the content engine checks to see whether it has the requested page. If the content engine does not have the requested page, it sends the request to the server.

Step 3.

The server responds to the content engine with the requested data.

Step 4.

The content engine forwards the web page to the user and then caches it for future use.

At Step 2, if the content engine had the requested web page cached, it would send the page directly to the user. Similar to transparent caching, after the content engine has the content, any subsequent requests for the same web page are satisfied by the content engine; the web server is not involved.
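Because the caching is nontransparent, each client must be told where the content engine is. The sketch below shows one way an application could be pointed at it, using Python's standard library; the address 10.1.1.5:8080 is a hypothetical content-engine address.

```python
import urllib.request

# Hypothetical content engine acting as a nontransparent (proxy) cache.
proxy = urllib.request.ProxyHandler({"http": "http://10.1.1.5:8080"})

# An opener built with this handler sends every HTTP request to the
# content engine first, which answers from its cache or forwards the
# request to the origin server.
opener = urllib.request.build_opener(proxy)
```

A web browser achieves the same effect through its proxy settings, which is the configuration Step 1 refers to.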

Nontransparent caching shares the benefits of faster response time and reduced bandwidth usage with transparent caching. An additional benefit is that it does not require WCCP-enabled routers; a drawback, however, is that workstation browsers must be configured with the address of the content engine.

Content engines in nontransparent mode are also typically positioned on the user side of an Internet or WAN connection.

Reverse Proxy Caching

Reverse proxy caches are positioned on the server side of an Internet or WAN connection to help alleviate the load on the server, as illustrated in Figure 8-3.

Figure 8-3. Reverse Proxy Caches Help Alleviate Server Load


Key Point

Reverse proxy mode is different from the previous two modes discussed because its goal is not to reduce bandwidth requirements but rather to reduce load on the server.


The steps involved in reverse proxy caching are as follows:

Step 1.

In Figure 8-3, the user at workstation A requests a web page that resides on the web server. This request is received by the WCCP-enabled router on the server side of the Internet.

Step 2.

The router analyzes the request and, if it meets configured criteria, forwards it to the content engine. For example, the router can be configured to send specific TCP port requests to the content engine while not redirecting other requests.

Step 3.

If the content engine does not have the requested page, it sends the request to the server.

Step 4.

The server responds to the content engine with the requested data.

Step 5.

The content engine forwards the web page to the user and then caches it for future use.

At Step 3, if the content engine had the requested web page cached, it would send the page to the user. After the content engine has the content, any subsequent requests for the same web page are satisfied by the content engine, and the web server itself is not involved, thus reducing the load on the server.
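The behavior in these steps can be sketched as a small caching front end for a single origin server. This is an illustrative Python sketch, not how a content engine is implemented; ORIGIN is a hypothetical origin-server address, and cache freshness checks and error handling are omitted.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

ORIGIN = "http://192.0.2.10"  # hypothetical origin web server
cache = {}                    # path -> cached response body

class ReverseProxyCache(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path not in cache:
            # Cache miss: forward the request to the origin server
            # (Steps 3 and 4), then keep a copy (Step 5).
            with urlopen(ORIGIN + self.path) as response:
                cache[self.path] = response.read()
        # Cache hit (or newly cached copy): the origin is not involved.
        body = cache[self.path]
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# To run: HTTPServer(("", 8080), ReverseProxyCache).serve_forever()
```

Each page is fetched from the origin server at most once; repeat requests are served from the cache, which is what reduces the load on the server.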

Key Point

A variety of content caches can be deployed throughout a network, in any combination of these three modes.

Clusters of caches can also be deployed to provide redundancy and increased caching capacity.





Campus Network Design Fundamentals
ISBN: 1587052229
Year: 2005
Pages: 156