Cisco Content Networking Solutions

Cisco offers complete solutions to meet customers' business needs, including IP telephony, security, optical, and storage solutions. Cisco also offers complete content networking solutions for application acceleration.

Cisco's content networking solutions are deployed in the following networking environments:

  • Enterprise Campus: Typically an environment that includes a main building to house central institutional or corporate resources. Smaller buildings are served by the main building and contain only client workstations.

  • Enterprise Edge: An Internet data center environment owned and maintained by the enterprise, normally residing in the main campus building.

  • Branch Office: Remote, or regional, office locations that connect to the main campus or head-office location through WAN links.

  • Internet Data Center: Third-party data center environments to which enterprises may outsource facility, bandwidth, server hardware, and application hosting services.

    Two or more of these types of data centers, connected over a geographically distributed network and enabled with the ACNS software, together form an Internet Content Delivery Network (ICDN). ICDNs require numerous content networking technologies to work together with unique content billing requirements. Content-based billing is necessary in an ICDN to generate revenue for customer bandwidth, content rule, and cache hit/miss usage, as well as to apply various QoS policies based on different price plans. Numerous Cisco certified partners offer solutions that integrate with Cisco's content networking products for content billing.

The first three environments, when enabled with Cisco's content networking software, together form an Enterprise Content Delivery Network (ECDN).

The primary difference between ECDNs and ICDNs is the billing requirement mentioned previously. That is, the software is the same in both, but an ICDN scenario requires additional software for content billing. For details on each of these four environments, see Chapter 4.

Content Switching

Content is switched in much the same way that frames are switched in Layer 2 Ethernet networks. With Ethernet, frames are forwarded to the appropriate switch port based on information in the frame header. That is, the switches provide Layer 2 intelligence to the routers in the network. In comparison, content switching provides Layers 5-7 intelligence to the origin servers and clients in the network. The content is inspected and forwarded to the most appropriate system based on information in the packet headers and payload. Content switching includes replicating a single system into a group of systems of identical functionality and distributing client requests across them.
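As a rough illustration of this Layer 5-7 decision making, the following Python sketch routes an HTTP request to a server pool based on the URL path and Host header. The pool names and routing rules are hypothetical examples, not a Cisco API:

```python
def choose_pool(request_line: str, host: str) -> str:
    """Pick a server pool from Layer 5-7 information: the HTTP
    request line (method, URL) and the Host header."""
    method, path, version = request_line.split(" ", 2)
    if path.startswith("/images/"):
        return "image-servers"      # pool dedicated to static images
    if host.startswith("video."):
        return "streaming-servers"  # pool dedicated to streaming media
    return "web-servers"            # default pool for everything else

print(choose_pool("GET /images/logo.gif HTTP/1.1", "www.example.com"))  # image-servers
```

A Layer 2 switch would forward this traffic identically regardless of the URL; only by inspecting the payload can the switch steer the request to the most appropriate group of servers.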

Content switching is used in the following scenarios:

  • Server and Cache Load Balancing (SLB)

  • Firewall Load Balancing (FWLB) and VPN Load Balancing

  • Global Server Load Balancing (GSLB)

Server Load Balancing (SLB)

SLB devices use health metrics, such as server response time and number of connections, as criteria for determining which origin server should receive a request for content. The health of the server is determined based on responses received from the servers in reply to health-check traffic generated by the content switch. Users are directed to the best origin server available at the moment of the request.

The challenges associated with balancing requests across multiple systems are addressed with the various content switching algorithms discussed in Chapter 10, "Exploring Server Load Balancing." You can also load balance caches using SLB devices and the Web Cache Communication Protocol (WCCP), as you will learn in Chapter 13, "Delivering Cached and Streaming Media."
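As a minimal sketch of one such algorithm, the following Python function implements least-connections selection over a server farm, skipping any server that failed its health check. The server names and the health/connection fields are invented for illustration:

```python
def pick_server(servers):
    """Least-connections SLB: among servers that passed their health
    checks, pick the one with the fewest active connections."""
    healthy = [s for s in servers if s["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy servers in the farm")
    return min(healthy, key=lambda s: s["connections"])["name"]

farm = [
    {"name": "web1", "healthy": True,  "connections": 42},
    {"name": "web2", "healthy": True,  "connections": 17},
    {"name": "web3", "healthy": False, "connections": 3},   # failed health check
]
print(pick_server(farm))  # web2
```

Note that web3 has the fewest connections but is excluded entirely: health checks take precedence over the balancing metric, which is exactly how the content switch directs users to the best server available at the moment of the request.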

Firewall Load Balancing (FWLB) and VPN Load Balancing

Firewalls often contain built-in failover mechanisms for firewall availability services. For example, the Cisco PIX firewall uses a proprietary stateful failover mechanism that lets the standby firewall know when to take over processing for the active firewall. Technically, you also can use FWLB to manage availability, but in most cases Cisco recommends its proprietary mechanism for Cisco PIX failover. Furthermore, scaling Cisco PIX firewalls by upgrading to a higher series firewall is often less costly, in both hardware investment and the resources required to manage the design; load balancing numerous lower-end Cisco PIX firewalls with content switches requires much more overall hardware and administration. Unless many millions of concurrent connections and many gigabits per second of bandwidth are required, the highest series of Cisco PIX firewall should be able to handle the load. FWLB is more useful in the following circumstances:

  • Scalability and availability services when firewalls other than the Cisco PIX are used.

  • Migration from one firewall vendor to another.

  • Load balancing firewalls from multiple vendors in order to provide a diverse security scheme.

FWLB provides scalability and availability to firewalls in the same way that SLB does for origin servers, but with slightly more complexity and more involved configurations. For example, FWLB does not support asymmetric routing. That is, the return traffic of a connection must be routed back through the originally selected firewall so that the firewall can reconcile both directions of the connection; otherwise, the firewall drops the traffic. Additionally, "buddy" TCP connections from applications that originate connections in the reverse direction of the original connection, such as Active-FTP, must be sent through the originally selected firewall.
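One common way to satisfy this constraint is to choose the firewall from a hash of the connection's address pair, normalized so that traffic in either direction produces the same key. The sketch below (with hypothetical firewall names, and a simplified stand-in for a content switch's hashing logic) illustrates the idea:

```python
import hashlib

FIREWALLS = ["fw1", "fw2", "fw3"]  # hypothetical firewall farm

def pick_firewall(ip_a: str, ip_b: str) -> str:
    """Hash a direction-independent key of the address pair so that
    forward and return traffic traverse the same firewall."""
    lo, hi = sorted([ip_a, ip_b])   # normalize: same key either direction
    digest = hashlib.sha256(f"{lo}-{hi}".encode()).digest()
    return FIREWALLS[digest[0] % len(FIREWALLS)]

# Forward and return traffic of one connection select the same firewall:
assert pick_firewall("10.1.1.5", "192.0.2.9") == pick_firewall("192.0.2.9", "10.1.1.5")
```

Because the key ignores packet direction, the return path lands on the firewall that already holds the connection state, avoiding the asymmetric-routing drop described above.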

Cisco content switching also supports load balancing of the signaling and data packets of VPN protocols, such as IPsec, Point-to-Point Tunneling Protocol (PPTP), and Layer 2 Tunneling Protocol (L2TP), for scalability and redundancy of VPN devices.

Global Server Load Balancing

Although origin server redundancy within a single data center is achieved with SLB, global server load balancing (GSLB) is required when the following issues occur:

  • The entire infrastructure that houses the server farm goes down, or the data center itself experiences a disastrous power outage or fire. The developers of ARPANET at the US Department of Defense (DoD) used this concept in the late 1960s. The idea was that, if one US communications hub was destroyed during war, another available hub would route information seamlessly in its place. Disaster recovery in GSLB follows the same basic principle as used by the US DoD, but at Layers 5-7 of the OSI model.

  • Response times for content or DNS requests or both from clients in geographically dispersed locations cause perceived performance degradation. In the same manner that content edge delivery resolves response time issues by placing content closer to the clients, GSLB places the data centers themselves in closer proximity to clients.

  • The capacity of the current data center location has reached its bandwidth or physical limits and cannot handle an increase in load. Additional load can be relieved from the current data center to other data centers with GSLB, which enables the required growth.
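A DNS-based GSLB device can be sketched as answering each name query with the virtual IP address of the best reachable data center. In the Python fragment below, the site names, measured delays, and VIPs are invented for illustration; a real GSLB device would gather these metrics from its own probes:

```python
def resolve_site(sites):
    """GSLB-style site selection: among reachable data centers,
    answer with the VIP of the one with the lowest measured delay."""
    up = {name: m for name, m in sites.items() if m["up"]}
    if not up:
        raise RuntimeError("all data centers are down")
    best = min(up, key=lambda name: up[name]["rtt_ms"])
    return up[best]["vip"]

sites = {
    "new-york": {"up": True,  "rtt_ms": 80, "vip": "198.51.100.10"},
    "london":   {"up": True,  "rtt_ms": 25, "vip": "203.0.113.10"},
    "tokyo":    {"up": False, "rtt_ms": 15, "vip": "192.0.2.10"},  # site outage
}
print(resolve_site(sites))  # 203.0.113.10
```

The Tokyo site is skipped despite its low delay because it is down, covering the disaster-recovery case; among the remaining sites, the proximity metric decides, covering the response-time and capacity cases.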

Application and Content Networking System

The Application and Content Networking System (ACNS) provides customers an integrated system that consists of content edge delivery, content distribution, and content routing. The following solutions are available to a network installed with ACNS software:

  • E-learning: Providing educational material with an e-learning solution realizes major cost savings related to travel and accommodations for employees or students who would otherwise attend in-person training.

  • Corporate communications: Distribution of company events, messages, and news with ACNS enables employees to adapt quickly to changes in corporate initiatives and structure.

  • Point-of-sale videos and web kiosks: Retailers can use point-of-sale videos and web kiosks for customer-directed advertising or for employee product and procedural training.

  • File/software distribution and access: Response times for resources accessed from branch offices can be reduced using ACNS and, in particular, the Cisco Wide Area File Services (WAFS). WAN bandwidth consumption to the branches can be reduced with these features as well.

Content Edge Delivery

Most medium to large enterprises require regional office locations in closer proximity to areas of potential sales than the headquarters location. However, mission critical corporate applications, such as intranet web portals, Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), and the database and file servers that store data for these applications, reside at the headquarters. Similarly, other applications, such as corporate Internet access, e-mail, and video streaming, are also normally located at the corporate headquarters. In most cases, installing these applications in the regional branches proves too costly for the majority of enterprises.

However, employees at these locations demand the level of service they would have if the applications were installed locally. Realizing that performance decreases and increased bandwidth usage at the edges of the network will most definitely affect revenue, corporate executives also expect high availability and performance at these branch locations. With content edge delivery, content is served directly from the branch locations, in closer proximity to the employees.

There are two common content edge delivery technologies employed to solve issues surrounding bandwidth and latency issues in branch locations:

  • Content Caching

  • Streaming Media Delivery

Content Caching

Caching is by far the most commonly used feature in the short history of content networking. Generally speaking, the nature of content requests follows the popular 80/20 axiom in network computing: 80 percent of requests are for 20 percent of the content. To decrease server load, content is offloaded from an origin server to a device whose sole job is to deliver frequently requested content objects in closer proximity to clients. Content is populated into the caches on demand, as clients make requests. How requests for content are handled by the local cache as opposed to the origin server itself is discussed in Chapter 13.
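The on-demand population described above can be sketched as a simple cache that serves repeat requests locally and fetches misses from the origin. The fetch function here is a hypothetical stand-in for an origin server:

```python
class EdgeCache:
    """On-demand edge cache: serve popular objects locally,
    fetch misses from the origin server and store them."""
    def __init__(self, fetch_from_origin):
        self.fetch = fetch_from_origin
        self.store = {}
        self.hits = self.misses = 0

    def get(self, url):
        if url in self.store:
            self.hits += 1                       # served locally, no WAN trip
        else:
            self.misses += 1
            self.store[url] = self.fetch(url)    # populate on demand
        return self.store[url]

cache = EdgeCache(lambda url: f"<body of {url}>")
for _ in range(4):
    cache.get("http://intranet/portal.html")     # a popular object
print(cache.hits, cache.misses)  # 3 1
```

Only the first request crosses the WAN; the next three are served at the edge, which is the 80/20 effect the paragraph above describes.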

When users request an object from an edge delivery device, how can they be sure that it is identical to the content that resides on the origin server? Intelligent content freshness determination in globally and locally cached networks is essential to provide users with the most up-to-date version of the requested object. Therefore, any changes in the original content must be dynamically detected and retrieved by the edge caches, transparently to both client and server.
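This freshness check can be modeled on an HTTP conditional GET: the cache presents its stored validator (an ETag), and the origin either confirms the cached copy is fresh (304 Not Modified) or returns the changed object (200). The Python sketch below uses simple dictionaries in place of real HTTP messages:

```python
def revalidate(cached, origin):
    """Freshness check modeled on an HTTP conditional GET: compare the
    cached validator (ETag) against the origin's current one."""
    if origin["etag"] == cached["etag"]:
        return 304, cached["body"]    # cached copy is still fresh
    cached.update(origin)             # pull down the changed object
    return 200, origin["body"]

cached = {"etag": "v1", "body": "old page"}
origin = {"etag": "v2", "body": "new page"}   # content changed on the origin
status, body = revalidate(cached, origin)
print(status, body)  # 200 new page
```

After the update is retrieved, subsequent revalidations return 304 and the object is served from the edge until the origin changes it again; the client simply receives the current page either way.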

As with most aspects of network design, content edge delivery deployments also require attention to ensuring high availability and fast performance. As such, specific local content switching mechanisms are also applicable in designing availability into an edge deployment.

Streaming Media Delivery

Streaming media solves major issues with viewing video media on a network. Users no longer have to wait for downloads to complete before viewing video content online. Furthermore, the client video player displays video frames directly to the user as packets arrive on the network, which is similar to standard television broadcasting. As a result, live and scheduled video feeds are made possible with streaming media.

Streaming media benefits from content edge delivery by ensuring that only a single stream of video content traverses the network at any given time, resulting in bandwidth cost savings. Furthermore, streaming media can also benefit from caching, by storing the video feed locally for later viewing by clients.

Cisco's streaming media solution as supplied by ACNS within Content Engines, Cisco IP/TV, and industry partner streaming software, such as Real Networks, will be detailed in Chapter 13.

Content Distribution and Routing

Content distribution occurs in advance of client requests in order to position content into desirable locations within the network. In contrast, recall on-demand caching, in which content is populated to edge delivery devices as requests for content are fulfilled by the origin servers.

The act of distributing content throughout the network can be as taxing on networking resources as its delivery. Chapter 6, "Ensuring Content Delivery with Quality of Service," covers methods of minimizing network resource consumption during the distribution process, including IP Multicast, resource reservation, packet queuing, and scheduling.

Once the content network is populated with fresh content, requests for cached objects are processed to determine the best location of the requested content. In the same way that IP routers relay IP datagrams to their appropriate destination, content routers relay application messages. Content routers offer the best content source to requesting clients based on factors such as geographic location, network load, and delay.

In the past, content has been a value-added service to customers. In this new millennium, content services are seen as a potential revenue generator. As such, third-party advertisement insertion, URL filtering, virus scanning, and e-mail spam filtering using various technologies can be performed on the content before delivery to the client. The section called "Enabling Transparent Value-Added Services on your CEs" of Chapter 13 discusses how these value-added services are performed.

Content Network Partnership Program

The Cisco Content Networking Partner Program enables third-party software companies to extend the Cisco ACNS infrastructure to provide complete end-to-end solutions that meet customers' business needs. The group of partner companies is known as an ecosystem, and each must fulfill certification criteria set by Cisco in order to become certified for membership. The main criterion is the use of standard interfaces to interoperate with Cisco's content networking products. The benefit of membership is the marketing program in place to promote partner software products coupled with Cisco's content networking infrastructure. In addition, customer solutions undergo rigorous testing and verification before deployment.

End-to-end solutions offered by ecosystem partners may include e-learning, corporate communications, content filtering capabilities, and software and file distribution. These solutions can be offered to the customer as a managed service, located in the partner's data center, or installed as an enterprise server solution, requiring dedicated server hardware behind the customers' own firewall. Either way, Cisco's ACNS architecture is at the heart of the solution and provides the intelligence to ensure that the content is delivered reliably and efficiently to the end users.

Partners can offer solutions using any of the following content delivery categories:

  • Content Management

  • Content Distribution

  • Content Providers

  • E-Learning Applications

  • Content Filtering and Scanning

Content Management

Content management partners offer applications and databases used for indexing, searching, and retrieving content. A typical customer may be a corporation that requires rapid distribution and retrieval of audio/video content for corporate communications and training.

Content Distribution

Content distribution partners provide efficient mechanisms to replicate content over low-bandwidth and unreliable network links for delivery to Cisco content networks. These partners may have patented application protocols that improve efficiency and ensure security in distributing information to remote sites.

Content Providers

Content providers create content for training and corporate communications, such as video-on-demand, webcasts, or Macromedia Flash-based presentations. Providing packaged content for e-learning, corporate communications, product marketing, and sales support gives organizations the ability to concentrate on operational aspects of business rather than the production of content.

Producing educational material electronically and making it interactive and accessible for viewing anytime reduces training costs associated with instructor-led training and increases information retention rates among learners.

E-Learning Applications

An e-learning application is an enterprise-wide learning system containing tools for creating, delivering, and managing content, for live and on-demand training or information-exchange portals. The application may contain event administration, promotion, registration, and management functionality for company-wide collaborative events. Organizations may centrally prescribe personalized training to individuals or groups of employees on standard employee procedures, new-hire training, and mentoring. Organizations are also able to track employee competencies and certifications with an e-learning application.

Content Filtering and Scanning

Content filtering provides a means to control users' online access to content, increase employee productivity, and reduce the organization's legal liability arising from inappropriate Internet access and instant messaging. These tools enable administration of access settings to enforce an organization's standards and ethics. Reporting on employee usage is also available.

Content scanning involves, as the name indicates, scanning content for anomalous items, such as viruses, before it is sent across the network.

Content Networking Fundamentals
ISBN: 1587052407
Pages: 178