Policy-Based Firewall Load Balancing


Probably the most important reason we load balance firewalls is performance. While resilience is also key, it is the performance increase that network administrators are looking for. Another benefit of deploying firewall load balancing is that the intelligence within the content switch can be used to provide a policy-based, load-balanced solution. This is not a requirement for firewall load balancing, but an added bonus. It should be pointed out that this feature is not common to all content switch manufacturers, so if it is something you require, ensure that the content switch you deploy can support it.

Policy-based firewall load balancing allows the network administrator to create a set of rules, deployed on the content switches, that ensure certain traffic types traverse certain firewalls. Figure 9-7 illustrates this.

Figure 9-7. Policy-based firewall load balancing showing different firewalls handling different traffic types.

graphics/09fig07.gif

This is done using redirection filters. By creating a redirection filter for HTTP, FTP, SMTP, and so forth, each can in turn be associated with a different group of firewalls. This gives a very granular level of control and allows firewalls to be configured for specific purposes. For example, all firewalls could be configured to allow inbound and outbound HTTP, but only a select few could be configured for FTP and SMTP. Whatever the requirement, this feature can add an unanticipated level of management and control over firewalls. An organization could allow a certain department to manage its own pair of firewalls and only allow traffic for that department's network to be load balanced through that pair. Whatever the application, the ability to intelligently manage the session is key, and it is this intelligence that makes up the fundamental architecture of content switches.
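To make the idea concrete, here is a minimal sketch of policy-based firewall selection: a redirection filter maps a traffic type (destination port) to a group of firewalls, and a hash on the client IP picks one firewall within that group so a given client stays on the same firewall. All firewall names, ports, and addresses here are hypothetical examples, not vendor syntax.

```python
# Sketch of policy-based firewall selection via redirection filters.
# Firewall names and groupings are invented for illustration.
from zlib import crc32

# Each redirection filter associates a destination port with a firewall group.
FILTERS = {
    80: ["fw-web-1", "fw-web-2"],    # HTTP -> web firewall group
    21: ["fw-ftp-1"],                # FTP  -> dedicated FTP firewall
    25: ["fw-mail-1", "fw-mail-2"],  # SMTP -> mail firewall group
}
DEFAULT_GROUP = ["fw-gen-1", "fw-gen-2"]  # everything else

def select_firewall(client_ip: str, dst_port: int) -> str:
    """Pick a firewall for this session; a hash keeps the client sticky."""
    group = FILTERS.get(dst_port, DEFAULT_GROUP)
    return group[crc32(client_ip.encode()) % len(group)]

print(select_firewall("192.0.2.10", 80))  # one of the web firewalls
print(select_firewall("192.0.2.10", 21))  # always the FTP firewall
```

The hash-based choice mirrors how a content switch keeps an established session pinned to one firewall, which is essential because firewalls track connection state.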

Topology Examples

Firewall load balancing brings with it its fair share of design and deployment models. Covering them all would require a book in its own right, but we will briefly describe some of the most common types, along with the advantages and disadvantages of each.

Multisubnet Implementations

Creating subnets between the content switches and the firewalls makes for an easy-to-understand, easy-to-deploy configuration. This not only assists with troubleshooting, but also makes deploying further firewalls very simple, because each link between a content switch and a firewall is its own subnet. Static routes need to be configured on each content switch for each path, which allows the health checks to function. Figure 9-8 shows a four-subnet design.
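As a sketch of the routing this implies, each content switch carries one static route per opposite-side subnet, with the next hop being the firewall interface on its local subnet. The subnets and firewall addresses below are hypothetical, chosen only to illustrate the four-subnet layout.

```python
# Sketch of the static routes on a dirty-side content switch in a
# four-subnet design. All addresses are hypothetical examples.
import ipaddress

STATIC_ROUTES = [
    # (destination subnet, next hop = firewall dirty-side interface)
    ("10.10.1.0/24", "10.1.1.2"),  # clean subnet reached via firewall 1
    ("10.10.2.0/24", "10.2.1.2"),  # clean subnet reached via firewall 2
]

def next_hop(dst_ip: str) -> str:
    """Return the firewall next hop for a destination, longest prefix first."""
    addr = ipaddress.ip_address(dst_ip)
    routes = [(ipaddress.ip_network(net), hop) for net, hop in STATIC_ROUTES]
    for net, hop in sorted(routes, key=lambda r: r[0].prefixlen, reverse=True):
        if addr in net:
            return hop
    raise LookupError(f"no route to {dst_ip}")

print(next_hop("10.10.1.50"))  # -> 10.1.1.2 (via firewall 1)
print(next_hop("10.10.2.7"))   # -> 10.2.1.2 (via firewall 2)
```

Because every firewall sits on its own subnet, adding a firewall simply means adding one more subnet and one more static route per content switch.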

Figure 9-8. Four subnet firewall load balancing design.

graphics/09fig08.gif

One key advantage of using subnets is that they allow for comprehensive security, because the subnets do not have to be in public address space. The firewall is still fully accessible; it is the content switches that ensure the traffic reaches it. The only reason not to use private address space is if the firewall needs to be managed over a public network or if NAT to a public address is required. If that is a requirement, then only the dirty-side subnets need to be public; the clean side can remain in private address space.

One of the requirements for firewall load balancing is the need for subnets, and if these subnets need to be public, then the more firewalls that are added, the more valuable IP addresses are required. To overcome this, some content switch manufacturers allow for a two-subnet firewall load balancing sandwich. This approach requires multiple interfaces to be configured on the content switches within the same subnet, something most Layer 3 devices will not allow. Depending on the content switch deployed, the ability to create an interface within an existing subnet is achieved by ensuring that all additional interfaces use a /32 mask. This means a switch could have the interfaces illustrated in Figure 9-9.
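The reason the /32 trick works can be shown with a few lines using Python's `ipaddress` module: a /32 network contains only its own address, so each additional interface claims no overlapping address space even though all of them sit inside the same shared subnet. The addresses below are hypothetical.

```python
# Why /32 interface masks let several interfaces share one subnet:
# each /32 "network" covers exactly one address, so additional
# interfaces do not claim overlapping address space.
# All addresses are hypothetical.
import ipaddress

parent = ipaddress.ip_network("10.1.1.0/24")     # the shared dirty-side subnet
primary = ipaddress.ip_interface("10.1.1.1/24")  # primary interface, full mask
extras = [ipaddress.ip_interface(f"10.1.1.{h}/32")  # additional per-firewall
          for h in (11, 12, 13)]                    # interfaces, /32 masks

for intf in extras:
    assert intf.ip in parent                  # address is inside the subnet
    assert intf.network.num_addresses == 1    # but the interface owns only it

print(primary.network.num_addresses)  # 256: the primary owns the whole /24
print([str(i) for i in extras])       # the /32 interfaces alongside it
```

A second /24 interface in the same range would conflict with the primary's network; a /32 cannot, which is why switches that support this configuration insist on the host mask.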

Figure 9-9. Two subnet firewall load balancing design allows consolidation of IP address space.

graphics/09fig09.gif

If a two-subnet design is required, you need to ensure that the content switch used can support this type of configuration.

Using Layer 2 Switches to Increase Flexibility

All of the diagrams shown in this chapter depict a pair of content switches on the clean and dirty side of the firewalls. The reason for this is that by deploying a pair and running VRRP, a truly resilient network can be achieved. If more than one firewall is required, then typically the organization deploying them is serious about resilience, and it makes sense to provide resilience in the content switches as well.

One of the reasons we have not included Layer 2 switches in the designs so far is that they are typically not needed. That said, deploying these switches can increase site flexibility and provide good points for monitoring and troubleshooting. Remember that when Layer 2 switches are deployed, STP is often configured by default. To keep failover times low, STP should not be active, and care should be taken to ensure that it is not included in your design.

Using Layer 2 switches allows a single content switch to fail while both firewalls remain active, or a firewall to fail while both content switches remain active. Moreover, with an uneven number of firewalls configured as in Figure 9-10 and no Layer 2 switches, a failure of a content switch could affect 66 percent of the site. Layer 2 switches would minimize this, as illustrated.
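The 66 percent figure is simple arithmetic, sketched below: with three firewalls split across two content switches and no Layer 2 switches, losing the content switch that fronts two of them removes two thirds of the site's capacity, whereas with Layer 2 switches no firewall is tied to a single content switch. The numbers are illustrative.

```python
# Back-of-the-envelope check of the 66 percent figure for an uneven
# firewall count without Layer 2 switches. Numbers are illustrative.

def capacity_lost(firewalls_behind_failed_switch: int,
                  total_firewalls: int) -> float:
    """Fraction of site capacity lost when one content switch fails."""
    return firewalls_behind_failed_switch / total_firewalls

# Three firewalls, no Layer 2 switches: the failed switch fronts two of them.
print(f"{capacity_lost(2, 3):.0%}")  # -> 67% (the "66 percent" in the text)
# With Layer 2 switches, every firewall stays reachable from the survivor.
print(f"{capacity_lost(0, 3):.0%}")  # -> 0%
```

The uneven split is unavoidable with an odd firewall count and direct cabling, which is exactly the case where the extra Layer 2 tier pays for itself.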

Figure 9-10. Uneven firewall availability in a content switch failure scenario with and without Layer 2 switches.

graphics/09fig10.gif

As some content switches allow multiple services to be configured simultaneously, Layer 2 switches allow these additional appliances to be added with minimal impact on the content switches and maximum benefit to the network. In addition, as content switch technology is more costly than Layer 2 devices, servers, caches, SSL offload devices, and so forth do not always need to be directly connected to the content switch. Deploying Layer 2 devices can decrease costs as well as increase resilience and functionality.

Layer 2 Firewalls

Load balancing Layer 2, or transparent bridge mode, firewalls is a challenge in its own right, but can be very effective, providing excellent performance as well as increased security. These firewalls are more difficult to configure because the static routes point to the opposite-side content switch rather than via the firewall: a bridging firewall is not a routed next hop, as it is with routing firewalls.

While the configuration may be more complex, Layer 2 firewalls also bring different deployment issues. The most common is the inability to perform NAT functions. This limitation is not a reason to avoid deploying them, just something to be aware of: NAT can be configured on another layer of firewalls, in the router, or in the content switch, depending on what is required. Layer 2 firewalls offer high performance and are often the firewall of choice when protecting time-sensitive applications such as streaming media or VoIP. However, with the advent of multigigabit firewalls with very low latency (often in the microsecond range), this requirement is not as critical. Much of the market for these firewalls is in areas where network redesign is an issue or where firewalls need to be as undetectable as possible.

Layering Firewalls for Greater Security

While load balancing firewalls is a very common practice today and is deemed an adequate level of protection, there are occasions when the need to add additional layers of security is paramount. This is often found in financial or government installations, but is obviously not restricted to those organizations. Layering provides defense in depth: if the outer firewalls are breached, there is still adequate protection at the internal layer(s). We can see in Figure 9-11 how this is implemented.

Figure 9-11. A layered firewall load balancing sandwich providing additional protection.

graphics/09fig11.gif

This has its own complications, but in our example, the second layer protects the "crown jewels": the database. This model is often implemented to allow developers and server administrators access to the site while still ensuring that the security of the main data vault is not breached. Some implementations may simply be two layers of load balanced firewalls directly on top of each other, as shown in Figure 9-12, while others will also allow for server load balancing and other services such as SSL acceleration or WCR.

Figure 9-12. Two-tier firewall load balanced sandwich.

graphics/09fig12.gif

Often, the thinking behind a layered firewall approach is to use a different firewall manufacturer for each layer, so that if a security hole is exposed in one operating system, the whole site is not immediately vulnerable. This is an excellent method for protecting a site, but only for those with large budgets and an absolute desire for bulletproof security.

This kind of design also brings with it a more complex and difficult-to-troubleshoot environment. Care should be taken when designing or scoping this type of installation, and a comprehensive understanding of firewall load balancing is essential for the end user. In addition, understanding the constraints and flexibility of the proposed content switches is key, as not all content switches are created equal when it comes to firewall load balancing.

Using the Content Switch for Additional Protection

Content switches can offer additional protection to sites should this be desired. As most leading switches run Layer 4 processing in ASICs, it makes sense to use the intelligence and speed of these devices to further increase packet inspection. By creating deny filters for all unwanted sessions, based on TCP port, source or destination IP address, or whatever criteria are required, these switches can halt sessions destined for the internal network. This not only decreases the overhead placed on traditional software-based firewalls that require CPU cycles to run, but also improves site performance as a whole. One caveat: security administrators need to be aware of all types of attacks and threats to their site, and sessions dropped by filters in the content switches will not appear in the firewall logs. To overcome this, it is very easy to configure a syslog server or similar logging mechanism to catch all breaches of the filters or access control lists on the content switch, in order to determine the types of attacks or threats that are occurring.
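The filter-plus-logging idea can be sketched as follows: deny rules are evaluated in order against the session's source address and destination port, and every hit is logged so the blocked traffic remains visible even though it never reaches the firewalls. The rules, prefixes, and ports are hypothetical.

```python
# Sketch of Layer 4 deny filters with logging, so sessions the content
# switch drops still show up somewhere. Rules here are hypothetical;
# in practice the logger would feed a syslog server.
import logging

logging.basicConfig(format="%(levelname)s %(message)s")
log = logging.getLogger("acl")

# Deny rules evaluated in order: (source prefix, destination port).
# An empty prefix matches any source.
DENY_FILTERS = [
    ("198.51.100.", 23),  # block Telnet from a known-bad range
    ("", 135),            # block MSRPC from anywhere
]

def permit(src_ip: str, dst_port: int) -> bool:
    """Return False (and log the hit) if a deny filter matches."""
    for prefix, port in DENY_FILTERS:
        if src_ip.startswith(prefix) and dst_port == port:
            log.warning("denied %s -> port %d", src_ip, dst_port)
            return False
    return True

print(permit("203.0.113.9", 80))   # True: no filter matches
print(permit("198.51.100.7", 23))  # False: Telnet deny hit, logged
```

Because the match runs before any firewall sees the packet, the log on the content switch is the only record of the attempt, which is exactly why the text recommends exporting it.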

It should be pointed out that content switches, while excellent at providing security if required, are not firewalls. Their function is to intelligently manage sessions, but above all, their main function is to forward traffic as quickly as possible. Therefore, using a content switch is excellent for additional protection, but it should never be the only protection. Firewalls are the devices specifically designed for this.

Adding Demilitarized Zones (DMZs)

The need for secure zones or DMZs is normally dependent on business requirements or even security regulations within an organization. DMZs allow organizations to protect services and data from both internal and external access. The reason this is so popular is that information you want external users to access must be protected, yet you do not want it inside your internal network; that would be too big a security risk. DMZs offer the perfect solution, allowing both internal and external protection.

While this works perfectly and solves security issues, it adds a bit more complexity to our firewall load balancing design. The more DMZs that are added, the more paths need to be created to ensure traffic flow. Achieving this can be quite involved and needs a clear head to ensure that the design functions.

The need to communicate between DMZs is often a requirement. To achieve this, the packet needs to traverse the firewall so that the correct security policy is invoked. If you are using tagged VLANs to differentiate the subnets, it is important to ensure that the firewall selected can receive a packet on an interface and route it back out the same interface while still applying the security policy; this can be a limitation on some firewalls. This concept, using DMZs and tagging, is shown in Figure 9-13.

Figure 9-13. VLAN tagging and DMZs.

graphics/09fig13.gif

As we discussed earlier in this chapter, creating the path is paramount in ensuring that firewall load balancing works, and while each vendor has its own method, the bottom line is the same: a misconfigured path will break your solution.

Using the same method we used earlier for creating paths, every DMZ added will add additional real servers (associated with IP interfaces). While this is fine, it can become very complex very quickly. This is not a reason to shy away from multiple DMZs, but rather a reminder to ensure a thorough understanding of the design. Looking at Figure 9-14, we can see that adding multiple DMZs has increased the potential paths, and creating redirection filters that forward traffic to the correct DMZ is paramount.

Typically, in a straight-through design, redirection filters use the "any any" policy: redirect all inbound traffic to the firewalls and let them do the routing, and likewise for traffic exiting the network. By now you have probably figured out that the redirection filter does not depend on the end servers; it is purely a mechanism for sending traffic into a firewall with its SIP and DIP intact. So, with multiple DMZs, the endpoint of the redirection filter actually makes no difference; what matters is that the filter forces the traffic to a firewall, and the firewall ensures the traffic is routed to the correct destination. However, depending on the content switch used, health checks also rely on filters, so ensuring that all real servers are checked and available is a necessity for the paths to operate. Creating filters to "steer" traffic to the other content switches ensures that the health checks are performed correctly and that the network is available for use.
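The "any any" behavior described above can be sketched in a few lines: the redirection step picks a firewall and sets only the next hop, leaving the source and destination IPs untouched, no matter which DMZ the destination lives in. The addresses and the session-based selection are hypothetical illustrations.

```python
# Sketch of an "any any" redirection filter: traffic is pushed to a
# firewall with SIP and DIP intact, and the firewall does the routing
# to whichever DMZ holds the destination. Addresses are hypothetical.

FIREWALLS = ["10.1.1.2", "10.2.1.2"]  # dirty-side firewall interfaces

def redirect(packet: dict, session_id: int) -> dict:
    """Pick a firewall for the session; only the next hop changes."""
    chosen = FIREWALLS[session_id % len(FIREWALLS)]
    return {**packet, "next_hop": chosen}  # SIP/DIP pass through untouched

pkt = {"sip": "203.0.113.9", "dip": "172.16.5.20"}  # destination in a DMZ
out = redirect(pkt, session_id=7)
print(out["sip"], out["dip"])  # unchanged: 203.0.113.9 172.16.5.20
print(out["next_hop"])         # whichever firewall the session mapped to
```

Because the filter never inspects the destination beyond choosing a firewall, adding another DMZ changes nothing in the redirection logic itself; only the health-check filters that verify each path must grow with the design.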

Figure 9-14. Multiple DMZs.

graphics/09fig14.gif

DMZs do not have to be implemented using additional content switches, but doing so is a common way to provide resilience. It is often simpler to design and deploy, and makes troubleshooting that much easier. In addition, depending on the firewall used, it can be an absolute necessity, as routing back out the same interface is not always supported. Whether to use multiple content switches will depend entirely on your configuration.



Optimizing Network Performance with Content Switching: Server, Firewall and Cache Load Balancing
ISBN: 0131014684
Year: 2003
