Network Infrastructure Availability


I've discussed network availability in a fair amount of detail in this book. What I haven't focused on, however, is the general network infrastructure: switches, routers, and the multiple network interfaces on your servers. In the following sections, you'll look specifically at firewalls and load balancers and at some methods to improve their availability.

Overview

Firewalls and load balancers are critical components of any online, business-critical system. Firewalls are key to security, and load balancers are key to availability. Ironically, either of these components can form a single point of failure in your environment.

If every data packet entering or exiting your WebSphere environment must traverse your firewall infrastructure, what happens if the firewall goes down? It's the same for load balancers: if your load balancer becomes unavailable, what happens to traffic flow? Some load-balancing appliances can fail into a limp-home mode in which they revert to a dumb hub or dumb switch that simply passes packets back and forth, but this doesn't work well if you have complex Virtual Local Area Network (VLAN) or private IP configurations set up within your environment.

You'll now see some ways to ensure availability of your WebSphere environment with firewalls and load balancers.

Firewalls

Firewalls are typically hardware appliances or some form of software service running on a server. Many medium-sized and large businesses operate a fully formed Demilitarized Zone (DMZ) bounded by two or three firewalls.

What happens, however, if your firewall becomes unavailable? Does traffic to your WebSphere environment stop? There are several schools of thought on this scenario. Some people believe that if the firewall ceases to work, then so be it: you lose processing availability of your applications, but it's better to be safe than sorry. I don't agree with this approach. If you've spent a zillion dollars building a WebSphere environment that's highly available, you don't want a single point of failure such as a near-commodity network device (your firewall) to be the cause of your application going down. Therefore, the simplest way to move forward and provide high availability is a standby or active-active firewall configuration that places two or more firewall devices at your network borders.

Consider an example with two firewalls at the border of a DMZ. These two firewall devices receive traffic from frontend routers or switches (in front of the firewall devices), which in turn route traffic via global routing protocols such as Border Gateway Protocol (BGP).

If one of the border routers goes down, BGP routing would redirect all traffic to the secondary router.

Most high-end switch and router vendors have their own internal high-availability capabilities that can also be used to make devices such as these highly available. Cisco, for example, has a technology called Hot Standby Router Protocol (HSRP).

Cisco PIX firewalls, for example, are hardware-based appliances that support a similar hot-standby failover capability.
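To illustrate the idea, a router pair running HSRP shares a virtual IP address that downstream devices use as their gateway; if the active router fails, the standby takes over the address. The following IOS-style fragment is only a minimal sketch; the interface names, addresses, priorities, and group number are all invented for this example:

```
! Illustrative HSRP configuration (invented addresses and interfaces).
! Router A: higher priority, so it becomes the active router.
interface GigabitEthernet0/0
 ip address 10.1.1.2 255.255.255.0
 standby 1 ip 10.1.1.1
 standby 1 priority 110
 standby 1 preempt
!
! Router B: standby; takes over the virtual IP 10.1.1.1 if A fails.
interface GigabitEthernet0/0
 ip address 10.1.1.3 255.255.255.0
 standby 1 ip 10.1.1.1
 standby 1 priority 100
 standby 1 preempt
```

The frontend devices are configured with 10.1.1.1 (the virtual address) as their next hop, so a failover is transparent to them.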

These types of firewall devices have an inbound and an outbound interface (or several of each) and simply inspect traffic as it traverses them, accepting or denying it based on defined rules.

If one firewall device becomes unavailable, the frontend routers would detect that it had gone down (because of the network failures returned from the broken or downed firewall) and redirect traffic to the remaining firewall device.
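The detect-and-redirect logic can be sketched in a few lines of Python. This is purely illustrative: the host names and ports are invented, and real routers implement this with routing protocols and hardware health checks rather than application code.

```python
import socket

# Hypothetical firewall addresses; names and ports are invented for this sketch.
FIREWALLS = [("fw1.example.com", 443), ("fw2.example.com", 443)]

def is_reachable(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def select_path(firewalls, probe=is_reachable):
    """Return the first firewall that answers the health probe.

    Falls through to the next device on failure; returns None if all are down.
    """
    for host, port in firewalls:
        if probe(host, port):
            return (host, port)
    return None
```

A monitoring loop would call `select_path` periodically and update the route whenever the preferred device stops responding.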

If you're using software firewalls that run on servers, you can take advantage of IP takeover-based clustering using one of the clustering technologies discussed in this chapter and earlier chapters.

This provides a highly available firewall environment, but it has a limitation in that it's a failover configuration: while the failover is taking place, there will be a delay before routing resumes on the new master firewall host.

You can achieve this in many ways, and dozens of firewall vendors and products support these types of configurations. This example should give you some ideas to consider in your design.

Load Balancers

Load balancers are another common component in the online world of WebSphere-based applications. As you've seen, load balancers distribute traffic among WebSphere application server hosts, as well as among many other forms of application hosts within computing environments.

Load balancers themselves, like firewalls, are prone to failure. Some load-balancer vendors, like firewall vendors, provide limp-home modes that allow a load balancer to revert to a switch/hub or straight-through configuration. This is a better-than-nothing solution, but it doesn't help more complex sites. As discussed before, such sites may use network architectures involving VLANs, Network Address Translation (NAT), and so forth, and these technologies typically won't support a pass-through configuration from a load balancer when or if it fails.

The best way to combat load-balancer unavailability is to load balance the load balancers! It sounds like overkill, but the approach is the same as for firewalls. If your budget allows, high-end routers and switches support load-balancing capabilities. In this case, your border routers or switches perform the load balancing to your firewalls, and they may also provide inner load balancing within your secured area or inner DMZ.
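Weighted traffic distribution, of the kind a frontend router or simple load balancer applies, can be modeled in a short Python sketch. The target names and weights here are invented for illustration; real devices implement this in hardware or firmware:

```python
import random

def weighted_choice(targets, rng=random):
    """Pick a target with probability proportional to its weight.

    `targets` is a list of (name, weight) pairs, e.g. two firewalls
    where fw1 should receive roughly three times the traffic of fw2.
    """
    total = sum(weight for _, weight in targets)
    point = rng.uniform(0, total)  # random point along the total weight line
    for name, weight in targets:
        point -= weight
        if point <= 0:
            return name
    return targets[-1][0]  # guard against floating-point edge cases

# Example: route ~75% of packets to fw1, ~25% to fw2.
targets = [("fw1", 3), ("fw2", 1)]
```

Removing a failed device from the `targets` list is all that's needed for the remaining devices to absorb its share of the traffic.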

I tend to approach load-balancer and firewall implementations the same way. I picture the method of routing traffic through each tier of the DMZ (which will consist of firewalls, load balancers, switches, and routers) as being like sifting sand. The sifter is cone shaped, with the border routers or switches at the large end and the inner DMZ firewall (between the DMZ and your application servers) at the pointy end.

The large end of the cone, where the border routers sit, allows coarse-grained traffic to flow through, and the pointy end allows only fine-grained traffic through. The coarse-grained traffic is routed via coarse-grained routing mechanisms; in this case, that may be BGP or Open Shortest Path First (OSPF). As traffic nears the inner DMZ firewall, the routing mechanism becomes point-to-point IP (using VLANs) or some other form of private IP routing.

Your approach, therefore, may look something like this:

  1. Traffic is routed through the Internet (or intranet) via BGP or OSPF to the BGP/OSPF-selected router/switch on the border of the firewall.

  2. One of the two frontend routers receives the packets and sends them to either of the firewalls based on basic weighted routing rules (simple load balancing).

  3. The firewall device interrogates the packets and sends them out the egress interface to standard load balancers (lb1 or lb2).

  4. The load balancers send the traffic to one of the servers in the Web server farm for processing.

  5. One of the servers in the Web server farm processes the request and then sends the result to a backend WebSphere application server via the rules defined in the HTTP plug-in file (load balancing for the backend WebSphere application server selection happens at this point, via the plug-in).

  6. The Web servers send the request out to the backend using standard weighted metric routes on each server, or using routing rules, via inner-router-01 and/or inner-router-02.

  7. Either of these routers then sends the request on to the WebSphere application servers via their own local switches.
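The backend server selection described in step 5 is driven by the WebSphere HTTP plug-in configuration (plugin-cfg.xml). A minimal cluster definition might look like the following sketch; the cluster name, server names, clone IDs, host names, ports, and weights are all invented for illustration:

```
<ServerCluster Name="AppCluster" LoadBalance="Round Robin" RetryInterval="60">
    <Server Name="appserver01" CloneID="node01_srv" LoadBalanceWeight="2">
        <Transport Hostname="was01.example.com" Port="9080" Protocol="http"/>
    </Server>
    <Server Name="appserver02" CloneID="node02_srv" LoadBalanceWeight="1">
        <Transport Hostname="was02.example.com" Port="9080" Protocol="http"/>
    </Server>
    <PrimaryServers>
        <Server Name="appserver01"/>
        <Server Name="appserver02"/>
    </PrimaryServers>
</ServerCluster>
```

With these weights, the plug-in sends roughly twice as many new requests to appserver01 as to appserver02, and it skips a server that fails to respond until the retry interval elapses.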

This is a complex area of networking, and these short guidelines should be taken only as that: guidelines. It's important, however, to consider these WebSphere environment architecture issues because, as a WebSphere manager, architect, or senior developer, you'll need to understand the end-to-end picture of your environment.

Some other considerations for your firewall and load-balancing environment are as follows:

  • Additional firewall tiers may be required if security is a concern. Options exist for an additional firewall to be placed between the Web servers and the inner routers, or on the egress side of the inner routers before traffic hits the WebSphere application servers' local network.

  • Investigate your vendor's end-to-end capabilities. It's possible that the vendor can provide an end-to-end model for your highly available firewall needs.




Maximizing Performance and Scalability with IBM WebSphere
ISBN: 1590591305
Year: 2003
Pages: 111
Authors: Adam G. Neat
