Purpose and Goals


In most aspects of life, a need or problem often encourages creative efforts to meet the need or solve the problem. That is, necessity is often the mother of invention. This also pertains to network computing, where development is spurred by ever increasing end-user demands for richer content, more bandwidth, and increased reliability. To fulfill these demands, first you must address the following four areas:

  • Scalability and Availability

  • Bandwidth and Response Times

  • Customization and Prioritization

  • Security, Auditing, and Monitoring

Scalability and Availability

Over time, many applications require higher levels of performance. For example, a web application may grow in functionality and intelligence (that is, in its program code) until the current computer system no longer has the resources to deliver the same levels of performance as before. Another example is a corporate communication application in which the number of participants has increased and become distributed over a large geographic region. Situations like these may require an increase in the scalability and availability of an application.

Scaling the Application

Content networking extends scalability services to the application by providing room for future growth without changing how the application works and with minimal changes to the network infrastructure. Scalability services include the following technologies, which will be discussed in detail throughout this book:

  • Content edge delivery Positioning application content away from the origin server, and in closer proximity to clients, scales the application by offloading requests to the content network.

  • Enhanced content delivery with IP multicast, stream-splitting, and resource reservation IP multicast and stream-splitting scale the network by avoiding replication of identical flows over the same network link, thus minimizing the end-to-end bandwidth consumed when content is delivered to a large number of users. Resource reservation scales the application by manipulating network parameters to expedite application traffic delivery.

  • Content transformation and prioritization Transformation provides conversion of content within the network without further burdening of origin servers. Prioritization enables custom network delivery of application traffic.

  • Flash crowd protection Protection against sudden, but valid, traffic spikes directed toward an application is important to maintaining service levels to customers.
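To make the stream-splitting benefit concrete, the following back-of-the-envelope sketch compares the bandwidth a shared link carries when a live stream is delivered as per-viewer unicast copies versus a single multicast copy. The viewer count and stream rate are invented figures for illustration only.

```python
# Rough comparison of unicast vs. multicast delivery of a live stream.
# All numbers are illustrative, not taken from the text.

def unicast_bandwidth(viewers: int, stream_kbps: int) -> int:
    """Every viewer receives a separate copy over the shared link."""
    return viewers * stream_kbps

def multicast_bandwidth(viewers: int, stream_kbps: int) -> int:
    """A single copy traverses each link, regardless of viewer count."""
    return stream_kbps if viewers > 0 else 0

viewers, rate = 500, 300  # 500 viewers of a 300-kbps stream
print(unicast_bandwidth(viewers, rate))    # 150000 kbps on the WAN link
print(multicast_bandwidth(viewers, rate))  # 300 kbps on the WAN link
```

The gap grows linearly with the audience, which is why replicating identical flows over the same link is the first thing these technologies eliminate.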

Increasing Application Availability

The general idea behind designing a system for availability is the addition of one or more components that are more or less identical to the first, without changing the overall structure of the existing individual components.

Availability services include the following, which will be discussed throughout this book:

  • Content switching Increases availability by replicating origin server content across numerous identical systems, either within the same data center or across globally distributed data centers.

  • Session redundancy Session redundancy provides failover from one network device, such as a firewall or load balancer, to an identical device without dropping existing TCP connections.

  • Router redundancy Protocols, such as Hot Standby Router Protocol (HSRP) and Virtual Router Redundancy Protocol (VRRP), provide router gateway redundancy by having two routers or load balancers share a virtual IP (VIP) and MAC address for clients to use as their default gateway. If either fails, the other will take over within seconds.

  • IP routing redundancy Dynamic IP routing protocols, such as OSPF, EIGRP, and IS-IS, provide availability within a routing domain by maintaining multiple paths to each network in the routing table.

  • Layer 2 switching redundancy Spanning tree and Cisco EtherChannel provide Layer 2 redundancy in a switched environment.

Availability does not necessarily follow scalability. For example, you can scale the disk drive capacity of a computer system by adding another hard drive, but if any one of those drives fails, data loss is certain. Only when replication across the system occurs, such as with the use of RAID in this example, is availability possible. Router gateway redundancy has been around since the mid-1990s, with protocols such as HSRP and VRRP. However, application redundancy built directly into the network is a newer concept that follows the same basic premise: it enables any individual component to fail without significantly affecting overall performance. In the same way that HSRP protects against network faults, application redundancy provides application and business continuity in the event of unexpected application failure.

Scheduled hitless application upgrades to replicated origin servers are possible with content networking availability services. By taking one server down at a time and allowing existing connections to complete prior to upgrading, the entire server farm remains available. Chapter 9, "Introducing Streaming Media," discusses Cisco's content networking availability services.

Looking at some simple probabilities, let us say that a single origin server is shown to be available 95.5 percent of the time, based on the empirical behavior data of the application. The 4.5 percent downtime in this example may account for scheduled server upgrades and unexpected system crashes. A simple formula to estimate the availability of an entire server farm is

PServerFarm_Success = 1 - (PIndividual_Failure)^n = 1 - (1 - PIndividual_Success)^n

In this formula, n is the number of redundant servers and PIndividual_Success is the proportion of time that an individual server is measured as available.

Replicating the system above and distributing load between two identical servers provides 1 - (1 - 0.955)^2 = 0.997975, or about 99.80 percent availability. In order to achieve "five nines of availability," or 99.999 percent uptime, how many servers are needed? With three servers, we would have 1 - (1 - 0.955)^3 = 0.999909, or 99.991 percent, and with four servers, 1 - (1 - 0.955)^4 = 0.999996, or 99.9996 percent. Therefore, by this simple formula, four redundant servers are required to provide 99.999 percent availability. But is this math a practical way to calculate availability? The answer is: it depends. Balancing the load across numerous identical servers is not necessarily transparent at the application level. Depending on the type of application, its logic may require modification in order to support a load-balanced environment. As a result, the probability of failure may not decrease as steadily for certain applications as for others when new nodes are added to the farm.
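The availability arithmetic above is easy to sketch in a few lines of code, which also makes it simple to answer the "how many servers for five nines?" question for any per-server availability figure:

```python
# Sketch of the server-farm availability estimate discussed above:
# availability = 1 - (1 - p)^n, where p is per-server availability
# and n is the number of redundant servers.

def farm_availability(p: float, n: int) -> float:
    """Probability that at least one of n independent servers is up."""
    return 1 - (1 - p) ** n

def servers_for_target(p: float, target: float) -> int:
    """Smallest n whose farm availability meets the target."""
    n = 1
    while farm_availability(p, n) < target:
        n += 1
    return n

for n in range(1, 5):
    print(n, round(farm_availability(0.955, n), 6))
print(servers_for_target(0.955, 0.99999))  # 4
```

Note that the formula assumes server failures are independent, which, as the text points out, real load-balanced applications do not always satisfy.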

When designing a network application, there are many questions for you to consider in addition to those addressed by the simple math discussed previously:

  • What is the type of application?

  • Where should the content be located? Is local high availability sufficient, or should cross-site availability be considered?

  • What are the security concerns, and is encryption necessary?

Throughout this book, these questions and more like them will be answered when discussing concepts and configuring content network examples and scenarios.

Bandwidth and Response Times

In the 1990s, users accepted waiting upwards of 10 seconds for viewable content to download to browsers or for network file copies to complete. With inexpensive increases in bandwidth to the desktop, which now reaches gigabits per second, and with enhanced last-mile Internet access technologies, waiting more than a few seconds is no longer acceptable. However, within the network core, building additional infrastructure to increase bandwidth and decrease response times can be extremely expensive. Fortunately, various technologies have historically been used to make upgrades less expensive. Consider the following examples of using technology to increase capacity and add services without requiring modification to the existing infrastructure:

  • Voice over IP (VoIP) for converging voice into existing IP networks makes it possible to avoid the need to maintain a separate analog voice network. Note that a significant investment in the existing IP network is essential before VoIP services are rolled out.

  • Storage Area Networking (SAN) for transporting storage communication protocols, such as Small Computer System Interface (SCSI) and Fibre Channel over existing IP networks, uses existing high-availability networks for storage.

  • For cross-continent satellite links, a 500-millisecond round-trip time (RTT) is common, which can cause issues for some delay-sensitive TCP-based applications. Applications can open multiple TCP streams, increase TCP window sizes, and apply other TCP-based techniques to circumvent these issues. The expensive alternative is to install cross-continent submarine fiber optics.

  • Modem data compression methods increase the capacity of dedicated dialup lines.

  • Emerging Internet last-mile technologies, such as Asymmetric DSL (ADSL), make better use of the available frequency spectrum on existing telephone lines to support data and analog voice simultaneously.

In a similar fashion, content networking makes better use of existing infrastructure by applying technology instead of brute-force network upgrades. Content access is accelerated and bandwidth costs are saved by copying content into closer proximity to the requesting clients. Placing content surrogates toward the edge of the network and away from the central location decreases end-to-end packet latency. Furthermore, placing content at the edge eliminates the need for that content to transit the WAN, freeing the WAN for other types of traffic and possibly eliminating the need to upgrade WAN capacity.

Customization and Prioritization

As you will see throughout this book, inserting intelligence and decision-making capabilities into a network is central to the concept of content networking. Adding intelligence to the network while leaving the origin servers free to provide the services they were designed to perform is vital to the enhancement of application performance. In particular, customization and prioritization offer many benefits to applications that require increased efficiency.

Two forms of customization are available with content networking: request redirection and automatic content transformation.

  • Request redirection Clients requesting content can be redirected by the content network to various versions of an application, based on the following client criteria:

    - Spoken languages and geographic locations

    - Browser/media player types and cookies

    - Phone and PDA features, such as screen resolutions and operating systems

    Request redirection is beneficial because application developers need only create multiple versions of the same content and publish them to separate application servers. The customization is transparent to clients with different criteria. The various versions appear to be the same, because the name and IP address used to access the application are the same.

  • Automatic content transformation Content transformations by the network can be transparent to the clients and origin servers. A popular example of this is transformation of content from one markup language to another. The criteria for this example can be client browser or media player type.
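The request-redirection logic described above can be pictured as a small decision function: the content network inspects client request headers and selects a content version. The hostnames, header values, and device checks below are invented purely for illustration; a real content switch applies such criteria in its own configuration language.

```python
# Hypothetical sketch of request redirection by client criteria.
# Server names and header heuristics are invented for illustration.

def select_origin(headers: dict) -> str:
    """Pick a content version based on language and device type."""
    lang = headers.get("Accept-Language", "en")
    agent = headers.get("User-Agent", "")
    if "Mobile" in agent or "PDA" in agent:
        return "wap.example.com"   # low-resolution markup for small screens
    if lang.startswith("de"):
        return "de.example.com"    # German-language version
    return "www.example.com"       # default version

print(select_origin({"User-Agent": "PDA-Browser/1.0"}))  # wap.example.com
print(select_origin({"Accept-Language": "de-DE"}))       # de.example.com
```

Because the client always uses the same name and IP address, the selection is transparent, exactly as the text describes.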

To provide prioritization to application traffic, you can enable various QoS mechanisms within the network:

  • Packet Queuing and Scheduling Various content networking technologies can be used to classify applications into categories. Once applications are classified, the network can use these categories to assign delivery priorities and queue packets for transmission on the link.

  • Resource Reservation Protocol (RSVP) RSVP enables an application to allocate bandwidth on the network prior to sending data. When the data is sent, the network will send the traffic based on the promised bandwidth from the original reservation request.

  • Traffic Shaping and Policing The network can restrict available bandwidth for specific applications using shaping and policing. Shaping imposes soft limits on bandwidth consumption, buffering excess traffic so that applications can temporarily rise above given thresholds. Traffic policing is strict and drops traffic when thresholds are reached.
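Both shaping and policing are commonly built on a token bucket: tokens accumulate at the contracted rate up to a burst size, and each packet spends tokens equal to its size. The sketch below shows the policing behavior (non-conforming packets are dropped); a shaper would queue the same packets instead. Rate and packet sizes are invented.

```python
# Illustrative token-bucket sketch contrasting policing (drop excess)
# with shaping (queue excess for later). Parameters are invented.

class TokenBucket:
    def __init__(self, rate: float, burst: float):
        self.rate, self.burst = rate, burst   # tokens/sec, max tokens
        self.tokens, self.last = burst, 0.0   # start with a full bucket

    def conforms(self, size: float, now: float) -> bool:
        """Refill tokens for elapsed time, then try to spend them."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

policer = TokenBucket(rate=100.0, burst=200.0)  # 100 bytes/s, 200-byte burst
sent, dropped = [], []
for t, size in [(0.0, 150), (0.1, 150), (2.0, 150)]:
    (sent if policer.conforms(size, t) else dropped).append((t, size))
print(sent)     # packets within the traffic contract
print(dropped)  # a policer drops these; a shaper would delay them instead
```

The middle packet arrives before enough tokens have accumulated, so the policer drops it; after a quiet interval, the bucket refills and traffic conforms again.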

Please refer to Chapter 4, "Exploring Security Technologies and Network Infrastructure Designs," for information on these QoS technologies.

Security, Auditing, and Monitoring

Given the public nature of the Internet, secure communication is a high priority for organizations with publicly available services. For any organization investing resources in developing products and services, protecting those assets from ending up in unwanted hands is a critical step in its network design.

However, securing a network is not a trivial task. A typical enterprise network may include e-mail, database transactions, web content, video, and instant messaging. The vast number of tools available for designing and implementing network security from different vendors makes the security design task even more difficult. To protect your network, Cisco offers numerous levels of security for deploying secure content networks.

Securing Content on the Network

Cisco SAFE Security Blueprint for Enterprises discusses Cisco's security solutions in terms of practical scenarios that apply to the majority of enterprise networks. The SAFE architecture highlights every basic security measure available for Cisco networks and recommends configuration options for deploying secure networks. These recommendations also pertain to designing and deploying content networks.

On all fronts of the design, successfully securing Cisco content networks requires security at all layers of the OSI model. To reduce the chance of security problems occurring and to help detect them when they do occur, you can use TCP/IP filtering and network security auditing.

TCP/IP Filtering

Access Control Lists (ACLs) in Cisco IOS are useful for permitting or denying requests to services that are available within the network. Because standard ACLs are stateless, TCP flows are not stored in memory, and every packet is evaluated against the ACL regardless of the TCP flow it is part of. On the other hand, stateful ACLs provide various means of tracking TCP flows to ensure that packets belong to a valid flow when filtering traffic.
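The stateless behavior described above can be sketched as an ordered rule walk: each packet is matched against the rules top-down with no memory of previous packets, ending in an implicit deny. The rule set here is invented and far simpler than a real IOS ACL.

```python
# Minimal sketch of a stateless ACL: each packet is checked against an
# ordered rule list independently of any TCP flow state. Rules invented.

ACL = [
    {"action": "permit", "proto": "tcp", "dst_port": 80},   # web
    {"action": "permit", "proto": "tcp", "dst_port": 443},  # secure web
    {"action": "deny",   "proto": "any", "dst_port": None}, # implicit deny
]

def filter_packet(proto: str, dst_port: int) -> str:
    """Return the action of the first matching rule."""
    for rule in ACL:
        if rule["proto"] in (proto, "any") and rule["dst_port"] in (dst_port, None):
            return rule["action"]
    return "deny"

print(filter_packet("tcp", 80))  # permit
print(filter_packet("tcp", 23))  # deny (telnet not in the permit rules)
```

A stateful firewall would additionally record each permitted flow's 5-tuple and accept return traffic only for flows it has seen established.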

An important factor to consider when performing TCP/IP filtering is whether IP subnets are used to divide servers into groups. If not, and subnetting the IP address space is not feasible, firewalls operating transparently at Layer 2 of the OSI model can be used instead. Layer 2 firewalls are convenient for environments in which the IP addressing scheme is not subnetted but servers are logically grouped according to the required security policies. The server groups can be cabled to different firewall ports and filtered according to the appropriate security policies. This makes it possible to statefully secure groups from one another, even if they are on the same IP subnet.

To group servers based on IP subnets in a switched environment, use virtual LANs (VLAN). You can use VLANs within Cisco IOS ACLs or firewalls to either statelessly or statefully control traffic between logical groups of clients and origin servers. To further secure traffic within a VLAN, use private VLANs (PVLAN). PVLANs prevent malicious behavior between hosts on the same VLAN, by blocking all traffic between private switch ports, and enabling only traffic that originates from these ports to traverse configurable public ports.

Network Security Auditing

Various forms of network auditing are available to designers of Cisco networks:

  • Syslog and TCP/IP connection auditing When security issues arise, audit log entries can be extremely valuable in troubleshooting, either during or after an attack. Most firewalls can log invalid connection attempts, in addition to other known anomalous behaviors. For example, when specific ACL rules are violated, the IP address and port information of the source and destination hosts at the time of the violation can be logged to a Syslog server.

    Additionally, denial of service (DoS) attack awareness is crucial to any content network deployment. Whether they are low-bandwidth or distributed, DoS attacks can bring network operations to a halt within minutes of their onset. In the event of an attack, the impact can be minimized using various design techniques and disaster recovery methods, including:

    - Manually monitoring firewall connection levels or audit log entries or both

    - Using Cisco Intrusion Prevention System (IPS) security appliances to monitor incoming traffic for DoS activity

    - Using Cisco Intrusion Detection Systems (IDS) to verify traffic against known DoS signatures

    - Using the Cisco Self-Defending Network architecture

  • Intrusion Detection Systems Firewalls and ACLs are excellent at filtering unwanted TCP/IP activity. They are not, however, able to detect network violations at the application layer. These types of upper-layer violations can be detected using IDSs. An IDS can be inserted into a network to listen to incoming traffic for known malicious activity. Whereas the criteria for allowing traffic into a network with ACLs are established by user-defined rules at the TCP/IP layer, the IDS bases its criteria on signatures.

    Signatures are created by Cisco and can be enabled on the IDS. They are updated frequently as new exploits are discovered and are made available for download on Cisco.com, sometimes within hours of the discovery of a vulnerability. Depending on the applications in the environment, certain signatures may provide more value than others.

  • Self-Defending Networks Instead of matching signatures, analyzing the behavior of traffic avoids the need to maintain signature updates, thereby reducing operational costs and the threat posed by unknown exploits. IDSs protect against known exploits but allow unknown attacks, referred to as day-zero attacks, to harm the network. Cisco's prevention solutions provide day-zero protection through the expansion of its security product portfolio to include the technologies of the Cisco Self-Defending Network strategy. This strategy combines numerous security technologies to form a hybrid security system that secures all layers of the OSI model.
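The core idea of signature-based detection, as contrasted with behavioral analysis in the list above, is simple payload pattern matching. The toy sketch below matches payloads against byte patterns; real IDS signatures also consider protocol state, ports, and packet sequences, and the two patterns here are invented examples.

```python
# Toy sketch of signature-based inspection: payloads are matched against
# byte patterns. Real IDS signatures are far richer; these are invented.

import re

SIGNATURES = {
    "cmd-injection": re.compile(rb"/bin/sh"),
    "sql-injection": re.compile(rb"(?i)' or 1=1"),
}

def inspect(payload: bytes) -> list:
    """Return the names of all signatures that fire on the payload."""
    return [name for name, pat in SIGNATURES.items() if pat.search(payload)]

print(inspect(b"GET /index.html HTTP/1.0"))         # []
print(inspect(b"GET /login?user=' OR 1=1-- HTTP"))  # ['sql-injection']
```

The weakness this illustrates is exactly the day-zero gap: a payload matching no known pattern passes silently, which is what behavior-based systems aim to close.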

Securing Client and Origin Server Content

Typically, securing network resources is only a first step in securing a content network. Intelligent systems for detecting and countering security vulnerabilities on the client and origin server are becoming more important than ever. The origin servers must be secured from both physical and network intrusions. Physical security includes measures such as providing only key personnel with physical access to critical data center locations and limiting packet sniffing tools to specific users. Avoiding switch monitor ports and shared hubs, where possible, will also aid in protecting against unwanted sniffers in the network.

For server security, Cisco provides server agent software based on the Self-Defending Network architecture. This agent can identify malicious network behavior, thereby mitigating known and day-zero security risks and helping to reduce operational costs. Cisco server agents combine multiple security functions to provide host intrusion prevention through behavior analysis, malicious mobile code protection, firewall capabilities, operating system integrity checks, and audit log consolidation.

The following additional security features are key in protecting Cisco content networks:

  • Secure Sockets Layer (SSL) SSL is used to secure traffic over a public network. Numerous content networking devices are capable of performing SSL in either hardware or software, to offload the complex SSL processing from application servers.

  • URL Filtering for Employee Internet Management (EIM) Content networking devices interact with third-party vendors to provide network-based URL filtering. With EIM, transaction logs track which users accessed which sites and can help monitor employee usage of the Internet.

  • Virus Scanning Content networking devices also interact with third-party vendors to provide network-based virus scanning. You should consider employing a network-based virus scanner to ensure that viruses are detected and removed before entering your e-mail server or client systems.

  • Authentication, Authorization, and Accounting (AAA) Methods for authenticating users who request objects from a content network vary. Insight into which methods are most appropriate for which environments is a valued asset in designing secure content networks. Given the security issues related to malicious user logins in a corporate environment, HTTP and RTSP user authentication and URL filtering are highly important in a content networking deployment. You can also use accounting to provide an audit trail of logins to a device, such as a router or switch, indicating what commands were issued and by whom.

Monitoring, Administration, and Reporting

Monitoring the health of the network and origin servers is important to ensure that application information is constantly being transported reliably. Various network and application monitoring tools that are available for use in monitoring a content network are described in the sections that follow.

Network Monitoring and Administration

Simple Network Management Protocol (SNMP), Syslog, and Network Time Protocol (NTP) are available for network monitoring.

SNMP is a standard messaging protocol for polling and receiving traps from network devices. SNMP managers can poll devices proactively for network information, such as bandwidth and CPU usage, to provide alerts in the event of receiving abnormal data. Historical archiving of polled data provides valuable information for administering and troubleshooting a network.

SNMP managers can also intelligently parse incoming traps from network devices and take action or recommend potential solutions. Programmatic interaction with SNMP managers is an invaluable means to provide intelligent automatic recovery in the event of failure. For example, most SNMP managers can run a program when an event is triggered from a trap received from a network device. The program can perform actions such as sending an e-mail to any individuals responsible for the network device, rebooting the device, or other actions that are pertinent to the event.
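The trap-driven automation described above amounts to a dispatch table: each incoming trap type maps to a recovery action, with a default of alerting a human. The trap names, device names, and actions below are hypothetical; a real SNMP manager would invoke external scripts or commands rather than return strings.

```python
# Sketch of trap-driven automation: an SNMP manager maps incoming trap
# types to recovery actions. Trap names and actions are hypothetical.

def notify_admin(trap: dict) -> str:
    return f"mail sent for {trap['type']} on {trap['device']}"

def restart_interface(trap: dict) -> str:
    return f"interface reset issued to {trap['device']}"

HANDLERS = {
    "linkDown": restart_interface,
    "cpuHigh":  notify_admin,
}

def handle_trap(trap: dict) -> str:
    """Dispatch a trap to its handler; unknown traps alert a human."""
    action = HANDLERS.get(trap["type"], notify_admin)
    return action(trap)

print(handle_trap({"type": "linkDown", "device": "edge-rtr-1"}))
```

Keeping the default path as "notify a person" is deliberate: automatic recovery should only run for events whose remedy is well understood.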

Syslog is a protocol used to capture events from network devices. Events such as ACL hits, network logins, packet loss, and interface and routing protocol transitions can be generated by network devices and sent to Syslog servers within the corporate LAN. These logs can then be used for post-mortem problem determination, to determine what failed, why it failed, and how the system can be designed to better prevent a catastrophic outage in the future. SNMP traps and Syslog are similar in that they both provide event-driven alarms when an error occurs in the network device.
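Post-mortem analysis of Syslog data often starts with simple log mining, such as counting ACL-violation entries per source address. The sketch below mimics the shape of an IOS access-list log message, but the exact field layout and sample addresses are illustrative, not authoritative.

```python
# Sketch of post-mortem log mining: count ACL-violation entries per
# source address from syslog lines. The message format mimics IOS
# output, but the exact fields here are illustrative.

import re
from collections import Counter

LOG_RE = re.compile(r"%SEC-6-IPACCESSLOGP: list (\S+) denied \S+ (\S+)\(\d+\)")

lines = [
    "May 1 10:00:01 %SEC-6-IPACCESSLOGP: list 101 denied tcp 10.1.1.5(4321)",
    "May 1 10:00:03 %SEC-6-IPACCESSLOGP: list 101 denied tcp 10.1.1.5(4400)",
    "May 1 10:00:07 %SEC-6-IPACCESSLOGP: list 101 denied tcp 10.9.9.9(1234)",
]

# Tally denied attempts by source IP (group 2 of the pattern).
hits = Counter(m.group(2) for line in lines if (m := LOG_RE.search(line)))
print(hits.most_common(1))  # [('10.1.1.5', 2)]
```

A repeat offender surfacing at the top of such a tally is often the first concrete lead in determining what failed and why.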

NTP is necessary in secure environments to ensure that all time clocks in the network are synchronized. This way, log entries from different yet dependent devices can be correlated precisely during the troubleshooting process.

Securing the administration of content networking devices is very important, both in-band and out-of-band.

  • If in-band management using Telnet or HTTP over a public network is available, cleartext passwords can potentially be read if intercepted. Secure Shell (SSH) can be used as an alternative to Telnet, and SSL in the place of HTTP, to provide encryption of the administration data that will traverse a public network. Additionally, the SNMP standard provides a means to secure the administrative passwords and the integrity of SNMP messages in version 3 of the protocol.

  • Out-of-band serial management is a secure administrative tool. As long as the passwords are kept secret, they cannot be intercepted in transit over a private dial-up connection or a direct console connection to the network device.

Application Monitoring and Administration

Application monitoring is performed separately from network monitoring. How closely monitoring is performed depends on the criticality, performance, and load of the server.

Most third-party application monitors have the ability to

  • Monitor availability and performance quality for applications, either in- or out-of-band.

  • Send alerts when application failures occur, or when thresholds are exceeded.

  • Recover failed applications automatically.

  • Provide historical reports to graph the behavioral trends of the applications.

In-band application monitors simulate valid requests to the server, check the responses, send alerts, and optionally perform actions to aid in remedying or troubleshooting the issue. The types of requests and responses depend on the applications being monitored. Possibly one of the most useful results of these tests is the measurement of latency. Because many applications are sensitive to latency, monitoring this parameter enables a Network Operations Center (NOC) to take action before clients perceive any latency issues associated with the particular application.
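A minimal in-band probe of the kind described above issues a request, times the response, and raises an alert when latency crosses a threshold. This is a sketch under assumptions: the URL and 2-second threshold are placeholders, and a production monitor would also retry, log, and trigger recovery actions.

```python
# Sketch of an in-band application probe: issue one request, time the
# response, and alert when latency exceeds a threshold. The URL and
# threshold values are placeholders.

import time
import urllib.request

def evaluate(up: bool, latency, threshold_s: float = 2.0) -> dict:
    """Decide whether a measurement warrants an alert."""
    alert = (not up) or latency is None or latency > threshold_s
    return {"up": up, "latency": latency, "alert": alert}

def probe(url: str, threshold_s: float = 2.0) -> dict:
    """Issue one in-band HTTP request and time the response."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=threshold_s) as resp:
            up = resp.status == 200
    except OSError:
        return evaluate(False, None, threshold_s)
    return evaluate(up, time.monotonic() - start, threshold_s)

print(evaluate(True, 0.35))  # healthy: no alert
print(evaluate(True, 3.10))  # slow: alert before clients complain
```

Separating the measurement (`probe`) from the decision (`evaluate`) lets the NOC tune thresholds per application without touching the request logic.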

Out-of-band application monitoring is similar to network out-of-band monitoring in that it is used to monitor and recover servers over an interface other than the one providing the content to clients. The advantage is that, even when completely down, the origin server can still be monitored and recovered. The drawback is that often additional hardware is required.



Content Networking Fundamentals
ISBN: 1587052407
Pages: 178