Different Approaches


When building a content switch, there are typically two approaches: design one from the ground up, or base the platform on existing PC and network processors. Let's look at the earlier methods of content switch design, before network processors came to the forefront.

Some of the early content switches were basically PCs with some form of operating system, usually an open source operating system (OS), and had multiple NICs installed. These devices would manipulate data in the application running on top of the OS, and many manufacturers had initial success with this approach because load balancing requirements were typically at Layer 4 only, high-speed links were rare, and content switching was still in its infancy.

As content switching became more important and more widely spread, PC-based architectures were often left wanting when processor-intensive tasks and application support were required.

Other manufacturers built their content switches from the ground up using purpose-built ASICs and proprietary operating systems. This is obviously a much more expensive (and, in the early days, arguably a riskier) approach, but one that, if it worked, allowed for large differentiation. Some manufacturers tried this and failed, and others succeeded and forged the way in content switching. Let's now look at PC-based, or central CPU-based, content switches versus ASIC-based content switches.

PC Architectures

Using a central processor to run intensive tasks brings with it one primary limitation: the inability to scale as multiple services and tasks are added to the device. Most content switches today have the ability to perform the following tasks:

  • Server load balancing of any TCP or UDP port

  • Global server load balancing

  • Firewall load balancing

  • Web cache redirection

  • Application redirection

  • SSL offload

  • VPN load balancing

  • WAN link load balancing

  • Streaming media load balancing and cache redirection

  • Intrusion Detection System (IDS) load balancing

  • Layer 7 load balancing

  • Wireless application load balancing for mobile services

While this list is not exhaustive, we can see that many applications can be configured within the content networking arena. The major issue with all central CPU-based designs is that the more applications that are enabled on a content switch, the more overhead is placed on that single, central CPU. Moreover, we should also remember that content switches are session-based switches; in other words, they are interested in sessions, not each and every packet. They need to maintain session information and group hundreds of packets as a single session. This may not make a difference in low usage sites, but will be an issue in large, heavily accessed sites. Let's also not forget that as we start to look at Layer 7 information, which has no fixed start and end point, the overhead placed on a single CPU is immense.
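The session-based behavior described above can be sketched in a few lines. This is an illustrative model, not code from any real switch: packets are keyed by their 5-tuple, and only the first packet of a flow creates session state, so the table grows with concurrent sessions rather than with packet count.

```python
from collections import namedtuple

# A flow is identified by its 5-tuple; the content switch keys its
# session table on this, not on each individual packet.
Packet = namedtuple("Packet", "src_ip src_port dst_ip dst_port proto")

def track(session_table, pkt):
    """Group packets into sessions: the first packet of a flow creates
    state; every later packet on that flow just bumps a counter."""
    key = (pkt.src_ip, pkt.src_port, pkt.dst_ip, pkt.dst_port, pkt.proto)
    session_table[key] = session_table.get(key, 0) + 1
    return session_table[key]

sessions = {}
flow = Packet("10.0.0.1", 32000, "192.0.2.10", 80, "TCP")
for _ in range(100):        # 100 packets arrive on the same flow
    track(sessions, flow)

print(len(sessions))        # 1 session, despite 100 packets
```

The cost a central CPU cannot escape is the per-packet lookup and update against this shared table: at high session counts on a heavily accessed site, that work alone saturates a single processor.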

Another design issue often overlooked is the throughput of the bus between the ingress and egress ports and the CPU. With many gigabit ports all receiving traffic, it is imperative that the CPU can service these requests as quickly as possible. A bus that can feed a PCI-based CPU is fine in a computer, but with gigabit ports the bus (and CPU) need to be able to handle the aggregate throughput of the switch's ports. The inability to do this limits the performance of PC-based designs.
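The arithmetic makes the mismatch concrete. Using assumed but era-typical figures (classic 32-bit/33 MHz PCI and an eight-port gigabit switch), the bus is oversubscribed by more than an order of magnitude:

```python
# Back-of-the-envelope bus math (assumed, era-typical figures):
# classic 32-bit / 33 MHz PCI has a theoretical peak of roughly
# 1.06 Gbit/s, shared by every device on the bus.
pci_peak_gbps = 32 * 33e6 / 1e9              # ~1.056 Gbit/s

# An 8-port gigabit switch running full duplex must sustain up to:
ports, line_rate_gbps = 8, 1.0
aggregate_gbps = ports * line_rate_gbps * 2  # 16 Gbit/s, both directions

print(round(pci_peak_gbps, 2))               # ~1.06
print(aggregate_gbps)                        # 16.0
print(aggregate_gbps / pci_peak_gbps)        # bus oversubscribed ~15x
```

Even before accounting for protocol overhead and bus contention, a single shared PCI bus cannot carry the aggregate line rate, which is exactly the limitation the text describes.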

To overcome this, PC- or central CPU-based designs try to offload as much Layer 2 and Layer 3 processing as possible to the hardware on the ports themselves. This relieves the CPU from having to perform every task, although the initial decisions still need to be handled by the central CPU. The CPU is then free to perform the Layer 4 through 7 tasks, which, as we have discussed, are far more processor intensive. Regardless of how the tasks are distributed, however, the content switch is only as fast as its CPU. The more sessions it needs to manage, or the more applications that are configured, the more processing it requires, and it will eventually reach a saturation point. It is here that distributed architectures add value to the content switching arena. We should point out that with the increase in processing power and the ability to run multiple CPUs in a single device, the bottleneck is shifting. However, running this type of PC-based architecture still has potential for performance degradation. Most manufacturers are moving away from PC-based architectures, but this can be, and often is, a long and hard road to travel.
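The offload pattern just described is the classic slow-path/fast-path split. A minimal sketch, with all names illustrative rather than taken from any real switch OS: the central CPU decides only on the first packet of a session, installs that decision in a flow table, and every subsequent packet is forwarded from the table without touching the CPU.

```python
# Sketch of the slow-path / fast-path split. All names and the
# real-server addresses are hypothetical, for illustration only.
flow_table = {}                              # state installed by the CPU

def cpu_decide(flow):
    """Slow path: the central CPU makes the L4-7 decision once."""
    real_servers = ["10.0.0.10", "10.0.0.11"]
    return real_servers[hash(flow) % len(real_servers)]

def forward(flow):
    """Fast path: port hardware forwards known flows without the CPU.
    Only a table miss (the first packet) is punted to the CPU."""
    if flow not in flow_table:
        flow_table[flow] = cpu_decide(flow)  # punt once, cache the result
    return flow_table[flow]

f = ("10.0.0.1", 32000, "192.0.2.10", 80)
print(forward(f) == forward(f))              # True: same server, one CPU hit
```

The design choice this illustrates is exactly the text's caveat: the fast path is only as good as the slow path behind it, because every new session still costs one trip through the central CPU.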

ASIC-Based Architectures

ASICs have traditionally been associated with high-speed performance, and that is typically what content switch manufacturers have managed to achieve when using this technology. The approach is to design ASICs to perform the traditional Layer 2 and 3 functionality as well as the Layer 4 functions, leaving the intensive Layer 7 applications to software. Because software handles only the Layer 7 functions, performance, session setup rates, and so forth can be maintained as additional users or applications are activated.
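A short sketch shows why this split falls where it does. Layer 4 fields sit at fixed byte offsets in the header, so a hardware pipeline can extract them in a single pass; Layer 7 data, as the earlier discussion noted, has no fixed start and end point and must be parsed, which is why it stays in software. The HTTP request below is illustrative.

```python
import struct

def l4_extract(tcp_header: bytes):
    """ASIC-style: source and destination port always occupy the first
    four bytes of the TCP header, so extraction is a fixed-offset read."""
    return struct.unpack("!HH", tcp_header[:4])

def l7_extract(request: bytes):
    """Software-style: the URL sits somewhere inside variable-length
    text, so the payload must be scanned and parsed."""
    method, url, _ = request.split(b" ", 2)
    return url

print(l4_extract(b"\x00\x50\x1f\x90"))                  # (80, 8080)
print(l7_extract(b"GET /images/logo.gif HTTP/1.1\r\n")) # b'/images/logo.gif'
```

Fixed-offset extraction maps directly onto silicon; variable-length parsing does not, which is the practical boundary between what gets baked into the ASIC and what stays on the content switch's CPU.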

Obviously, by using ASICs, the need to ensure that the code is correct is crucial, and the need to be able to make changes to allow for new features is a necessity. What typically happens is that the majority of Layer 4 functions are programmed into the ASIC, and as new features are added or designed, the ASIC is either rewritten (if using programmable ASICs) or these features are offloaded or handled by software running on the content switch. This method enables the switch to cater to and grow with new features. Then, as most ASICs are respun every two years or so, these well-known and used Layer 4 and even Layer 7 functions and features can be programmed into the new ASIC.

This method is obviously more costly and can be fraught with difficulties if the ASIC is not well designed. Without a doubt, some startup companies using ASIC-based technologies have not managed to complete their projects because of unforeseen errors and design problems, and have subsequently been forced to shut their doors. Others have made it, but too late, and have not been able to capitalize on the market momentum and have also had to close. Others were first to market, had excellent ASIC design, and are market leaders.

Like all things in technology, changing designs and concepts will mean that manufacturers will shift from one technology to the next. Those manufacturers who embrace the next wave of hardware will be able to change with the times, offering top quality services and features regardless of hardware.



Optimizing Network Performance with Content Switching: Server, Firewall and Cache Load Balancing
ISBN: 0131014684
Year: 2003
Pages: 85
