Not only is the content more readily available because several servers host it, but with an integrated CDN in place, a content provider can publish content from origin servers to the network edge. This frees resources on the origin servers and makes the content easier, faster, and more efficient to locate and transfer to the user.
In this section, we talk about CDNs, how they work, and how they can be configured to help your organization. In addition, we take a close look at Cisco's CDN solution, which involves several pieces of hardware and some specialized management software.
A CDN is an overlay network of content, distributed geographically to enable rapid, reliable retrieval from any end-user location. To expedite content retrieval and transmission, CDNs use technologies like caching to push content close to the network edge. Load balancing on a global scale ensures that users are transparently routed to the "best" content source. "Best" is determined by a number of factors, including a user's location and available network resources. Stored content is kept current and protected against unauthorized modification.
When an end user makes a request, content routers determine the best site, and content switches find the optimal delivery node within that site. Intelligent network services allow for built-in security, Quality of Service (QoS), and virtual private networks (VPNs).
The CDN market is growing. Fueling the growth is demand for Internet services, such as Web and application hosting, e-commerce, streaming media, and multimedia applications. While user demand for such services mounts, the challenge for service providers comes in scaling their already congested networks to tap into these higher-margin opportunities.
A CDN allows Web content to be cached, or stored, at various locations on the Internet. When a user requests content, a CDN routes the request to a cache that is suitable for that client. Specifically, it's looking for one that is online, nearby, and inexpensive to communicate with.
Which organizations will get the best results from a CDN? As with any technology and its usability, this is a loaded question. As we've seen time and time again, the only thing that limits how a technology is used is the imagination. However, those who would benefit most from a CDN are those who have an abundance of high-demand files or rich multimedia that would cause a strain on a single network.
However, there is a great deal of merit in using CDN principles to ensure reliability in case of a catastrophe. For instance, if you are storing content in caches in three different states, a natural disaster in your home state won't mean doom for your internetwork. Rather, users will still be able to access your information from one of the other two caches.
Using a CDN doesn't mean that all your data will be spread across the Internet. You can control where your content is located on the CDN and who will have access to it. Specified content is assigned to particular caches, and only those caches are authorized to store that material. By controlling where content is cached, you increase the likelihood that requested content will be present in the cache. This is because there is enough room in the cache to store all the authorized content. Furthermore, controlling content yields better performance results, because you can ensure that a particular cache is handling only the load associated with the content it is authorized to store.
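The kind of content-to-cache authorization described above can be sketched in a few lines of Python; the content paths and cache names here are invented for illustration:

```python
# Hypothetical sketch: a request is routed only to caches that are both
# online and authorized to store the requested content.
AUTHORIZED_CACHES = {
    "/video/launch.mpg": {"cache-east", "cache-west"},
    "/docs/catalog.pdf": {"cache-east"},
}

def eligible_caches(content_path, online_caches):
    """Return only the online caches allowed to serve this content."""
    allowed = AUTHORIZED_CACHES.get(content_path, set())
    return allowed & online_caches

# A request for the video may go only to an authorized cache that is up:
print(sorted(eligible_caches("/video/launch.mpg", {"cache-east", "cache-central"})))
```

Because each cache holds only the content it is authorized for, its disk space and load stay bounded, which is exactly the performance argument made above.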
Cisco CDNs let service providers distribute content closer to the end user and deal with network bandwidth availability, distance or latency obstacles, origin server scalability, and traffic congestion issues during peak usage periods, the company said in a statement. The system also enables businesses to expedite application deployment.
A CDN isn't a replacement for a conventional network. Rather, it's used specifically for specialized content that needs to be widely available. Dynamic or localized content, on the other hand, can be served up by the organization's own site, avoiding the CDN, while static and easily distributed content can be retrieved from the nearest CDN server. For instance, the banner ads, applets, and graphics that represent about 70 percent of a typical Web page could be easily offloaded onto a CDN.
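A rough back-of-the-envelope sketch of that offloading claim, using an invented 200-KB example page:

```python
# If static elements (banners, applets, graphics) make up ~70 percent of
# a page's bytes, a CDN can serve that share and leave the origin server
# with only the dynamic remainder. The page size is an example figure.
page_kb = 200
static_pct = 70
cdn_kb = page_kb * static_pct // 100   # bytes served from the CDN edge
origin_kb = page_kb - cdn_kb           # dynamic content left to the origin
print(cdn_kb, origin_kb)
```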
The need for a CDN is especially apparent when it comes to multimedia content. Because multimedia is such a bandwidth hog, a lone server cannot possibly tend to multiple, concurrent requests for rich multimedia content.
Figure 11-1 shows the basic design of a CDN.
Figure 11-1: A basic CDN deployment contains a number of components
Let's say you're surfing the Internet and want to watch a video that is online at Content Delivery Networking's Web site (aka http://www.cdning.com). Because that particular video is so popular, they have it deployed on their CDN. Here's what happens when you click the video's icon:
1. The user's browser requests the URL http://www.cdning.com/video.mpg.
2. The user's workstation issues a DNS lookup for the IP address of www.cdning.com to a local DNS proxy server.
3. The local DNS server does not have the IP address for www.cdning.com cached, so it queries the DNS hierarchy to determine the authoritative DNS server, and then sends the request to that DNS server.
4. If the environment has been properly configured, the query ultimately ends up at the Content Router, which returns the IP address of a content engine (CE) to the DNS proxy server, which, in turn, sends that IP address back to the original requestor (the user's workstation).
5. The net effect is that the content engine's IP address has been substituted for the origin server's. Now the browser can request the actual file associated with the URL http://www.cdning.com/video.mpg.
6. Once the file request is received, the CE checks to see whether it has a cached copy. If it does, the copy is sent to the requesting browser. If it does not, the CE fetches the file from the origin site (the http://www.cdning.com Web site), caches it, and sends a copy to the requesting browser.
7. On the next request for content associated with http://www.cdning.com, the IP address should still be cached, so the client request resolves quickly to a good CE. If the address is no longer cached, the previous steps are repeated to find the CE, which then provides the content. If the content is no longer fresh, or is no longer there, the entire process is repeated.
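The steps above can be sketched in miniature; the hostnames, address, and file are all invented, and the DNS machinery is reduced to a single stand-in function:

```python
# Minimal sketch of the request flow: the content router answers the DNS
# query with a content engine's address, and the CE serves from cache or
# fetches from the origin on a miss.
ORIGIN = {"/video.mpg": b"<mpeg data>"}   # stand-in for the origin Web site

class ContentEngine:
    def __init__(self):
        self.cache = {}

    def serve(self, path):
        if path not in self.cache:            # cache miss:
            self.cache[path] = ORIGIN[path]   # fetch from origin and keep a copy
        return self.cache[path]

def content_router_resolve(hostname):
    """Stand-in for the DNS step: map the site name to the 'best' CE."""
    return "10.1.1.20"   # invented address of the chosen content engine

ce = ContentEngine()
ip = content_router_resolve("www.cdning.com")
first = ce.serve("/video.mpg")    # miss: pulled from origin, then cached
second = ce.serve("/video.mpg")   # hit: served straight from the CE's cache
```

Only the first request crosses the network to the origin; every later request for the same file is satisfied at the edge.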
CDNs come in a variety of shapes and configurations, based largely on the vendor and the need. Cisco's CDN solution is based on five components, each performing its own specific function in the larger machine. Those components are:
Content Distribution Manager
Content Routing
Content Edge Delivery
Content Switching
Intelligent Network Services
No matter what data is being transferred, from a text file to streaming media, the content is delivered using these technologies.
Cisco's Content Distribution Manager is used to automatically distribute content to content delivery nodes located at the network edge. This allows for global provisioning with real-time monitoring. A CDM provides provisioning and policy settings for all content edge delivery nodes within the CDN. Located at the logical central point of the network, Content Distribution Manager allows for the management and control of such details as:
Network device settings
Automatic replication of content
Interface for live origination
Physically, it can be located within a local cluster or distributed geographically. Other Content Distribution Manager functionalities include:
Central repository for real-time system monitoring
Management of content, including registration of Web sites and live streaming to enable delivery speed and reliability
Redundant configuration for fault tolerance
Publishing tools that enable Web sites to easily subscribe to CDN material without extensive, hands-on setup and maintenance
The core of a Cisco content distribution system is Cisco's Content Distribution Manager, which controls the entire media distribution network.
Cisco used to produce specific CDM appliances; however, those models have been discontinued and their capabilities rolled into content engines.
Cisco's content engines are content networking products that accelerate content delivery, and are where the Application and Content Networking System (ACNS) software is housed. A content engine is a device that caches content in a CDN to serve end-user requests. A collection of content engines makes up a CDN.
ACNS is explained later in this chapter.
In the past, Cisco produced individual content routers and CDMs. However, this functionality has now been incorporated into its line of content engines.
Content engines can be configured as content engines, content routers, or content delivery managers. They cannot, however, be configured to perform two or more functions. That is, a content engine cannot be configured as both a content engine and a content router. It must be configured as one or the other.
The Cisco content engine works with your existing network infrastructure to complete your traffic localization solution. Content engines offer a broad range of content delivery services for service providers and enterprises, including streaming media, advanced transparent caching service, and employee Internet management.
The Cisco content engine product line covers a broad range of environments, from service provider "Super Points of Presence (PoP)" down to small enterprise branch sites. Cisco's content engine product line includes:
CE 7305 This engine includes a default storage configuration of 144 GB, and is expandable to 936 GB. It runs on a 2.4-GHz Intel Pentium 4 Prestonia processor and has 2 GB RAM. This engine can also be connected to the Cisco 7305 content engine SCSI connector or to a Fibre Channel adapter for interfacing with SANs.
CE 7325 This engine includes a default storage configuration of 432 GB, and is expandable to 936 GB (although no further internal storage can be added; it must be expanded externally). It runs on two 2.4-GHz Intel Pentium 4 Prestonia processors and has 4 GB RAM. This engine can also be connected to the Cisco 7325 content engine SCSI connector or to a Fibre Channel adapter for interfacing with SANs.
CE 590 This content appliance is for service provider content PoPs, colocation sites, and large enterprise sites. Upgradeable to support streaming media, it comes equipped with two 36-GB hard drives. Storage is expandable up to 252 GB. It has 16 MB flash memory and 1 GB SDRAM.
CE 565 The engine comes with 72 GB of storage on two internal drives, and can be expanded up to 396 GB using external hardware. The engine uses a 1.7-GHz Intel Pentium 4 processor and 1 GB RAM. The Cisco content engine 565 can be configured with Fibre Channel adapters for interfacing with SANs or with an MPEG video decoder for baseband video.
CE 510 This engine comes with 40 GB of storage and is expandable to 80 GB. It does not support external storage expandability beyond its two internal drives. The engine uses a 1.7-GHz Intel Pentium 4 processor and 512 MB RAM. The Cisco content engine 510 can be configured with Fibre Channel adapters for interfacing with SANs or with an MPEG video decoder for baseband video.
CE 507 This engine comes with 18 GB of storage, with an option for an additional 18 GB of storage and 256 MB RAM. It is geared as an entry-level edge platform for branch offices.
Content Routing is the mechanism that directs user requests to the CDN site. This allows for high scalability and reliability. Routing is based on a set of real-time variables, including delay, topology, server load, and policies, such as the location of content and a user's authorization. Content routing enables accelerated content delivery and adaptive routing around broken connections and network congestion. Cisco's Content Router products interoperate with intelligent services in the network infrastructure, thereby ensuring content availability and providing global load balancing.
The content router nodes are deployed at strategic locations within the network. Their functionality includes:
Real-time content request processing using standard DNS by redirecting user requests to an appropriate content engine based on geographic location, network location, and network conditions
Redundant configuration for multinetwork and wide-area fault tolerance and load balancing
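A hypothetical sketch of the "best site" selection described above; real content routers weigh delay, topology, server load, and policy, while the site names, metrics, and weighting here are invented:

```python
# Invented per-site metrics: measured round-trip delay and current load
# (as a percentage of capacity).
sites = [
    {"name": "pop-east", "delay_ms": 12, "load_pct": 80},
    {"name": "pop-west", "delay_ms": 45, "load_pct": 20},
]

def score(site):
    # Lower is better: penalize both network delay and server load.
    return site["delay_ms"] + site["load_pct"]

best = min(sites, key=score)
print(best["name"])   # the nearer site loses here because it is heavily loaded
```

Note that the geographically closer site is not always the "best" one; a lightly loaded site farther away can win, which is why routing on real-time variables matters.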
Cisco provides a multitude of content routing protocols that enable enterprises and service providers to build content delivery networks. These protocols enable communication about content state among Cisco networking products. These protocols, which include Director Response Protocol (DRP), Dynamic Feedback Protocol (DFP), Web Cache Control Protocol (WCCP), and Boomerang Control Protocol (BCP), allow Cisco's products to work as a single, seamless system.
Like CDMs, Cisco used to produce specific content router appliances. However, those models have also been discontinued and their capabilities rolled into content engines.
For the speediest delivery of content, Content Edge Delivery (performed by Content Engines) distributes content from the edge of the network to the end user. Cisco's CDN solution allows service providers to define and expand the edge of their network anywhere, from a small number of datacenters near the network core, out to the network edge, and just inside the firewall of a customer.
The content engines are located at the network edge, storing and delivering content to users. Other functionality includes:
Content delivery to end users and other content engines based on the Cisco content routing technology
Self-organization into a mesh routing hierarchy, with other content engines forming the best logical topology based on current network load, proximity, and available bandwidth
Storage of content replicas
Endpoint servers for all media types
Platform for streaming media and application serving
Once the right network foundation is in place, network caches are added into strategic points within the existing network, thereby completing the traffic localization solution. Network caches store frequently accessed content and then locally fulfill requests for the same content, eliminating repetitive transmission of identical content over WAN links.
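A quick sketch of the WAN traffic a local cache eliminates; the request count and object size are made-up example numbers:

```python
# Repetitive transmission vs. local fulfillment: with a network cache,
# only the first request for an object crosses the WAN link.
requests = 500          # branch-office requests for the same object
object_mb = 4           # size of the object in megabytes
without_cache = requests * object_mb   # every request crosses the WAN
with_cache = 1 * object_mb             # only the initial miss does
print(without_cache, "MB vs", with_cache, "MB over the WAN")
```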
Content switching is used to intelligently load-balance traffic across delivery nodes at PoPs or distributed datacenters based on the availability of the content, application availability, and server load. Intelligent content switching adds an additional layer of protection against flash crowds and ensures transaction continuity for e-commerce applications in the face of system stoppages. Intelligent content switching also allows for customization of content for select users and types of data.
When Cisco first got into the CDN game, it did not develop switches that were meant just for CDNs. Rather, CDN functionality was included as a component of other types of Cisco switches. Now, however, Cisco has developed its Cisco CSS 11500 Series Content Services Switches. These switches have the functionality to work in cooperation with the other devices in the Cisco CDN solution.
There are three models in the Cisco CSS 11500 Series Content Service Switch line:
CSS 11501 This model has a fixed configuration, offers 6-Gbps aggregate throughput, eight 10/100 Ethernet ports, and one Gigabit Ethernet port, through an interface converter. This model holds up to two 256-MB flash memory disks, up to two 512-MB hard drives, and can conduct up to 1,000 SSL transactions per second.
CSS 11503 This model has three slots, offers 20-Gbps aggregate throughput, 32 10/100 Ethernet ports, and six Gigabit Ethernet ports through interface converters. This model holds up to two 256-MB flash memory disks and up to two 512-MB hard drives.
CSS 11506 This model has six slots, 40-Gbps aggregate throughput, 80 10/100 Ethernet ports, and eight Gigabit Ethernet ports through interface converters. It requires one switch control module. This model holds up to two 256-MB flash memory disks and up to two 512-MB hard drives.
The heartbeat of a CDN is intelligent network services, which provide such functions as security, QoS, VPNs, and multicast. The Cisco CDN system integrates with existing content-aware services, which are required to build intelligent CDNs.
Key services that content intelligence provides include:
Traffic prioritization for content
Services that scale economically and respond appropriately to unpredictable flash crowds
The ability to track content requests and respond with content updates and replication
Because the functionality of a CDN is dependent on processing a number of variables, intelligent network services are crucial to maintaining an efficient, effective CDN.
IP multicasting is a bandwidth-efficient way to send the same streaming data to multiple clients. Applications that benefit from IP multicasting include videoconferencing, corporate communications, and distance learning. Rather than consuming large amounts of bandwidth by sending the same content separately to each destination, multicast packets are replicated in the network at the points where paths diverge. This results in an efficient way to conserve network resources.
How It Works IP multicasting is ideal when a group of destination hosts is receiving the same data stream. This group could be made up of anyone, anywhere: all new hires watching a training video at a company's headquarters, or the Human Resources departments at numerous branch offices receiving updated benefits information simultaneously. The hosts can be located anywhere on the Internet or on a private network.
We'll now examine the different types of transmission services in order to nail down, more precisely, what is going on in a multicast. Let's first consider the different types of network traffic-unicast, multicast, and broadcast-shown in Figure 11-2:
Unicast Applications send one copy of each packet to the users requesting the information. If one user is linking to the Web server and requesting information, this isn't so bad. However, if multiple users want the same content, this gobbles up system resources as the same packets are sent to each user simultaneously. That is, if there are 30 users requesting the same content, 30 copies of the data will be sent at the same time.
Broadcast Applications can send one copy of each packet to a broadcast address; that is, the information is sent to everyone on the network. While this preserves bandwidth, because a single copy of the content reaches everyone rather than multiple copies being sent at once, it suffers because some users neither want nor need to see the content.
Multicast Applications send one copy of the packet and address it to a group of selected receivers. Multicast relies on the network to forward packets to the networks and hosts that need them. As such, this controls network traffic and reduces the quantity of processing performed by the hosts.
Figure 11-2: Different types of network traffic include unicasting, broadcasting, and multicasting
Multicasting has a number of advantages over unicasting and broadcasting. While unicasting is an effective way to bring content to a single host, when the same content must be sent to multiple hosts, it can cripple the network by consuming bandwidth. Broadcasting, on the other hand, is a good way to conserve network resources (a single copy of the data is sent to every user on the network). While this resolves bandwidth-consumption issues, it is not useful if only a handful of users need to see the information.
IP multicasting solves the bottleneck problems when data is being transferred from one sender to multiple destinations. By sending a lone copy of the data to the network and allowing the network to replicate the packets to their destinations, bandwidth is conserved for both sender and receiver.
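The saving is simple arithmetic; the viewer count and stream rate below are illustrative:

```python
# Bandwidth leaving the sender for one stream delivered to many viewers.
viewers = 30
stream_kbps = 500

unicast_kbps = viewers * stream_kbps   # one copy per viewer leaves the server
multicast_kbps = stream_kbps           # one copy; the network replicates it
print(unicast_kbps, "kbps unicast vs", multicast_kbps, "kbps multicast")
```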
Figure 11-3 shows how IP multicast delivers data from one source to multiple, appropriate, recipients. In this example, users want to watch a videocast training them on a new application. The users let the server know they are interested in watching the video by sending an IGMP (Internet Group Management Protocol) host report to the routers in the network. The routers use PIM (Protocol Independent Multicast) to create a multicast distribution tree. The data stream will be delivered only to the network segments that lie between the source and receivers.
Figure 11-3: Multicast distribution trees send content to the appropriate network segments
Users opt in to be part of a group by sending an IGMP message. IGMP is a layer-3 protocol that allows a host to tell a router that it is interested in receiving multicast traffic for a particular group or groups. IGMP version 2 added the ability to explicitly leave a group, which made it easier for routers to learn that a given host was no longer interested in receiving the multicast, thus freeing up network resources.
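At the API level, a receiving host joins a group by handing the kernel a membership request, which in turn triggers the IGMP membership report described above. The sketch below only builds that structure; the group address is invented and no traffic is sent:

```python
# The membership request passed to setsockopt() with IP_ADD_MEMBERSHIP:
# the 4-byte group address followed by the 4-byte local interface address.
import socket
import struct

group = socket.inet_aton("239.1.2.3")   # multicast group to join (example)
iface = socket.inet_aton("0.0.0.0")     # let the kernel pick the interface
mreq = struct.pack("4s4s", group, iface)

# A receiver would then join with:
#   sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
# and later leave (IGMPv2) with IP_DROP_MEMBERSHIP and the same mreq.
```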
Addressing Addressing is an important component in the world of IP multicasting. Once a client opts in to be part of a group, the content is delivered to a single IP address. When the data is sent to that IP address, the network, in turn, delivers it to everyone who agreed to be in the multicast group. By using a single IP address, the network can handle the task of channeling data to the appropriate clients.
The Internet Assigned Numbers Authority (IANA) manages the assignment of IP multicast addresses. It has assigned the Class D address space for use in IP multicast applications. Class D address space falls between 224.0.0.0 and 239.255.255.255. There are no host addresses within Class D address space, since all hosts in the group share the group's common IP address.
However, this is not to mean that one multicast address will suit each and every need. Within the Class D address space, IP addresses have been subdivided for specialized use. The following examines how the Class D address space is further stratified:
224.0.0.0 through 224.0.0.255 For use only by network protocols on a local network segment. Packets with these addresses should not be forwarded by a router. Rather, they stay within the LAN segment and are always transmitted with a TTL value of 1.
224.0.1.0 through 238.255.255.255 Called globally scoped addresses, these addresses are used to multicast data between organizations and across the Internet.
239.0.0.0 through 239.255.255.255 Called limited scope or administratively scoped addresses, these are tied to an organization. Routers are configured with filters to prevent multicast traffic in this range from leaving the private network. Also, within the organization, this range of addresses can be subdivided along internal boundaries, thus allowing the reuse of addresses in smaller domains.
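These scope boundaries can be checked with Python's standard ipaddress module; this sketch classifies a group address into the ranges listed above:

```python
# Classify a multicast group address into the Class D sub-ranges:
# link-local control (224.0.0.0/24), administratively scoped (239/8),
# or globally scoped (everything else in 224/4).
import ipaddress

LINK_LOCAL = ipaddress.ip_network("224.0.0.0/24")    # never forwarded by routers
ADMIN_SCOPED = ipaddress.ip_network("239.0.0.0/8")   # kept inside the organization

def multicast_scope(addr):
    ip = ipaddress.ip_address(addr)
    if not ip.is_multicast:
        return "not multicast"
    if ip in LINK_LOCAL:
        return "link-local"
    if ip in ADMIN_SCOPED:
        return "administratively scoped"
    return "globally scoped"

print(multicast_scope("224.0.0.5"))     # link-local (used by OSPF routers)
print(multicast_scope("239.1.2.3"))     # administratively scoped
print(multicast_scope("233.95.225.1"))  # globally scoped
```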
Another means of multicast addressing is called GLOP addressing. RFC 2770 proposes that the 233.0.0.0/8 address range be reserved for static assignment to organizations that already have an Autonomous System Number (ASN). The organization's ASN is converted into the second and third octets of the 233.0.0.0/8 range to generate a static multicast address block for that organization.
An ASN is a globally unique identifier for an Autonomous System. Autonomous Systems are groups of networks that have a single routing policy managed by the same network operators.
For instance, an organization with an ASN of 24545 would have the multicast address block 233.95.225.0/24. This conversion first takes the ASN (24545) and converts it into hexadecimal (0x5FE1). The hexadecimal value is separated into two octets (0x5F and 0xE1), which are then converted back to decimal (95 and 225) to produce a subnet reserved for ASN 24545 to use.
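The conversion worked through above is easy to express as a function; this sketch assumes a 16-bit ASN:

```python
# GLOP addressing: embed a 16-bit ASN in the middle two octets of 233/8.
def glop_block(asn):
    """Map a 16-bit ASN to its reserved GLOP /24 under 233.0.0.0/8."""
    high, low = asn >> 8, asn & 0xFF   # the ASN's two bytes become octets 2 and 3
    return f"233.{high}.{low}.0/24"

print(glop_block(24545))
```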
A CDN is a constantly changing environment. Content engines, content switches, and content routers are added and removed, and the content housed on those devices is in a constant state of flux, because content providers come and go. New routed domains are defined, old ones are removed, and assignments of routed domains to content engines change.
Cisco Application and Content Networking System (ACNS) software is targeted at organizations and service providers deploying CDNs. ACNS 5.3 is the latest version of this software, which runs on content engines and the Cisco Wide Area Application Engine (WAE), combining content networking components into a common application for Content Distribution Manager, content engine, and content router. This application is useful for both small and large CDN deployments.
ACNS is the core application behind Cisco's CDN and IP video solutions. It allows content and video to be transmitted from the datacenter to remote locations, including:
Secure Web content
Web application acceleration
Point-of-sale video and Web kiosks
ACNS can also be used to deliver antivirus updates and security patches across a network.
ACNS can manage CDN deployments of up to 2,000 content engines and 1,000,000 prepositioned items in content engines. ACNS software pulls content from a Web server or an FTP server and sends it directly to the content engines.
ACNS combines demand-pull caching and prepositioning to accelerate the delivery of Web applications, objects, files, and media. ACNS runs on Cisco content engines, CDM, and Content Routers.
In an IP video environment, ACNS can be used with the Cisco IP/TV components to capture and deliver MPEG video with synchronized presentation, program creation, and scheduling.
In a CDN environment, content engines can be used with the Cisco CSS 11500 Series Content Services Switches, the Catalyst 6500 Series Content Switching Module, and the Secure Sockets Layer (SSL) switching modules for reverse-proxy caching, thereby offloading back-end servers.
Reverse-proxy caching is explained later in this chapter.
ACNS offers a number of features, including the ability to run both cache and CDN applications simultaneously. Network administrators can also upgrade ACNS software, or downgrade to a previously installed version if they determine the new version is not as useful. ACNS also allows for disk provisioning, providing management of disk space for HTTP caching and for prepositioned content.
The Cisco Wide Area Application Engine (WAE) Series are a line of network appliances for providing access to applications, storage, and content across a WAN. WAE is used in conjunction with ACNS and Cisco Wide Area File System (WAFS) software (we'll talk about that in the next section). These products and technologies allow LAN-like access to applications and data across the WAN. Some benefits of this solution include:
LAN-like performance for enterprise applications, such as enterprise resource planning (ERP), customer relationship management (CRM), and intranet portals
Furthermore, WAE allows branch offices to be able to utilize infrastructure across the WAN, including servers, backup, and storage, while placing only a WAE appliance at each branch. This is illustrated in Figure 11-4.
Figure 11-4: WAEs reduce storage burden on branch offices, centralizing it at a datacenter
WAE application engines include:
Cisco WAE-512 Wide Area Application Engine This device is based on a 3.0-GHz Intel Celeron processor and is aimed at small offices. The unit comes with 1 GB of memory and is expandable to 2 GB. It has two Serial ATA disk drive bays for between 80 and 500 GB of storage capacity. The unit can be fitted with a Fibre Channel host bus adapter (HBA) to interface with a SAN, or with an MPEG video decoder.
Cisco WAE-612 Wide Area Application Engine The next step up is the Cisco WAE-612. Using a 3-GHz Intel Pentium 4 processor, the WAE-612 has more memory than the 512, with a 4 GB maximum (it comes with 2 GB RAM standard). The unit has two Serial ATA disk drive bays and can store between 292 GB and 600 GB total capacity. The appliance can be fitted with a Fibre Channel HBA to interface with a SAN, or with an MPEG video decoder.
Cisco WAE-7326 Wide Area Application Engine Cisco's top-of-the-line WAE is the WAE-7326. It uses two Intel Xeon processors and 4 GB of memory. It supports two to six internal SCSI hard drives for a total capacity of 1.8 TB. The unit can be fitted with a Fibre Channel HBA to interface with a SAN, or with an MPEG video decoder.
Another component of Cisco's WAN CDN solution is Cisco Wide Area File Services (WAFS). WAFS works around WAN latency and bandwidth limitations, striving to give WAN users LAN-like speed and bandwidth. This technology helps consolidate branch-office data onto central file servers in the organization's datacenter.
By centralizing an organization's data, rather than keeping it all scattered in branch offices, the following benefits can be realized:
Lower costs Centralized file and print services replace unreliable and expensive tape backup and file servers at branch offices.
Improved data protection Data generated at branch offices is sent to the datacenter in real time. This improves data protection, management, and storage efficiency.
Reduced administration Data can be centrally managed at the datacenter.
Fast file access and sharing With LAN-like speeds now on a WAN, remote users can enjoy increased productivity.
WAFS uses new protocol optimization technologies to give branch and remote office users LAN-like speeds, thereby overcoming WAN latency, bandwidth, and packet-loss limitations.
Branch offices consolidate their file servers and storage into central file servers or network-attached storage (NAS) devices at the organization's datacenter.
WAFS uses protocol optimizations, including:
WAN transport optimizations
These optimizations preserve the operation of standard file-system protocols, such as the Common Internet File System (CIFS) with Windows and the Network File System (NFS) with UNIX, maintaining file integrity and security policies.
WAFS 3.0 runs on the Cisco Wide Area Application Engines outlined in the previous section. In addition to the WAE appliances, WAFS runs on the router-integrated network module.
Within a WAFS solution, each node can be configured with one or more services:
The Edge File Engine Used at branch offices to replace file and print servers, this allows users to enjoy near-LAN speeds with read and write access to the datacenter.
The Core File Engine Used at the datacenter and connected through the LAN to NAS devices, this engine is responsible for providing aggregation services for Edge File Engines.
The Cisco WAFS Central Manager Provides Web-based management and monitoring of all WAFS nodes.
Though the WAE appliances can be configured with any service, only the router-integrated network module can be configured as an Edge File Engine.
Figure 11-5 shows an example of how these services are deployed.
Figure 11-5: WAE appliances are configured for specific content duties within the network