Traditional High Availability: The Whitepaper Approach


A wide variety of products on the market today provide high availability. Most of them conflate the load balancing they provide with the notion of high availability. It is important to understand both heads of this beast (high availability and load balancing) so that you aren't duped into buying a safety blanket that won't keep you warm.

A traditional hardware high availability device provides a mechanism for advertising a virtual service (either UDP/IP or TCP/IP) to end-users and distributes the requests to that service to a set of machines providing that service (shown in Figure 4.3). In the case of web services, the device listens on an advertised IP address on port 80, and any web requests that arrive on that IP address are distributed over a set of locally configured web servers.

Figure 4.3. A load balancer advertising a virtual IP for real servers.


The device can run service checks against the web servers to determine whether they are available to service requests. The service checks on these devices can range from simple ICMP pings to full HTTP requests with content validation. This means that if a machine fails, the device will remove it from the eligible set of machines over which it distributes requests. As long as a single machine is alive and well in the pool of servers, the service will survive. Sounds perfect, right? Not really.
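To make the idea concrete, here is a minimal sketch in Python of the kind of service check such a device performs: an HTTP request against a real server whose response must contain an expected string. The host, path, and expected content are placeholders, and real devices implement these checks internally with far more tunable behavior; this is only an illustration of the concept.

import http.client

def http_check(host, port=80, path="/", expect="Welcome"):
    """Return True if the server answers 200 and the body contains `expect`."""
    conn = http.client.HTTPConnection(host, port, timeout=2)
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        body = resp.read().decode("utf-8", errors="replace")
        return resp.status == 200 and expect in body
    except OSError:
        # Connection refused, reset, or timed out: treat the server as down.
        return False
    finally:
        conn.close()

# A device would run such a check against every configured real server on a
# short interval and drop any server that fails from the eligible pool.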

Let's investigate the positive and negative aspects of using a dedicated high availability/load balancing device. In Figure 4.3, you'll notice that the high availability/load balancing device is clearly a single point of failure.

The fact that almost every large architecture has hardware load balancers deployed should tell you that they have powerful selling points. It is important to keep in mind that selling points aren't buying points and that some of the positive aspects listed may simply not apply to your architecture.

  • These devices are widely used and relied on at some of the largest Internet deployments in existence. This means that they work. They are tried, true, and tested. It is also unlikely that they will fail.

  • As hardware devices specifically designed for load balancing web (among other) services, they are high performance and scale well. Some devices boast being capable of managing 15 million concurrent connections and more than 50 gigabits/second sustained throughput.

  • Application servers are no longer a single point of failure. In other words, the service can survive N-1 application server failures. If one remains up, things keep on ticking.

  • Services distributed across multiple machines can still be advertised to the world as a single IP address. The ramifications of not behaving this way are discussed in detail later in this chapter, in the section "High Availability Rethought (Peer-Based)."

Of course, with the good comes the bad. Hardware high availability/load balancing devices have some glaring disadvantages:

  • These devices cost money, often a substantial amount of it, and their mere existence introduces architectural complexities. Consider first the ongoing maintenance and support costs. Second, to have a truly sound development and staging environment, those environments must match production as closely as possible, which means duplicating this architectural component there as well.

  • By deploying a single high availability/load balancing device in your architecture, have you made the system fault tolerant? The service will survive the failure of any one or N-1 of the web servers in the architecture. However, we now have a new component (the high availability/load balancing device), and the singular failure of that component will clearly cause a service outage. The solution? Buy two. However, it is still disturbing to implement a solution only to find that your high availability/load balancing device needs its own highly available solution!

Surveying the Site

Two main protocols are used for high availability and failover, and both use the concept of heartbeats. Virtual Router Redundancy Protocol (VRRP) is an IETF standards-track protocol for device failover between routers on the Internet. Cisco has a proprietary implementation of the same concepts, called Hot Standby Router Protocol (HSRP), that has been deployed on thousands of routers and switches across the Internet and in corporate environments.
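Although the wire formats differ, the heartbeat logic underneath both protocols is conceptually simple. What follows is a minimal sketch of that idea, not an implementation of VRRP or HSRP themselves: a backup node listens for master advertisements and claims the shared virtual address after several intervals of silence. The timing constants and the claim_virtual_address placeholder are illustrative assumptions, not values taken from either protocol.

import time

ADVERTISE_INTERVAL = 1.0   # seconds between master advertisements (illustrative)
DEAD_AFTER = 3             # missed advertisements before the backup takes over

class BackupNode:
    """A standby node that promotes itself when the master falls silent."""

    def __init__(self):
        self.last_heard = time.monotonic()
        self.is_master = False

    def on_advertisement(self):
        # Called whenever a heartbeat/advertisement arrives from the master.
        self.last_heard = time.monotonic()
        self.is_master = False

    def poll(self):
        # Called periodically; claim the virtual address if the master is silent.
        silent_for = time.monotonic() - self.last_heard
        if not self.is_master and silent_for > ADVERTISE_INTERVAL * DEAD_AFTER:
            self.is_master = True
            self.claim_virtual_address()

    def claim_virtual_address(self):
        # A real implementation would bring up the shared IP and answer (or
        # gratuitously send) ARP for it; this placeholder only reports the event.
        print("master is silent; assuming responsibility for the virtual IP")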

For classic computing systems, vendors and implementers have chosen a different path with the same ultimate goal of transparently migrating a service from a machine that has failed to another available machine. There are several open source and closed source solutions to server failover:

  • Veritas Cluster Server

  • SGI's Linux FailSafe

  • Linux-HA

  • CARP (OpenBSD's Common Address Redundancy Protocol)

All these products purport to solve the single-point-of-failure problem. Saying that one is better than another would be unfair to their differing intentions.

There are a slew of issues to deal with when tackling high availability, including shared storage, application awareness of system failures, and responsibility migration. Some of these products attempt to do everything for you, whereas others provide the fundamental tools to grow the solution that is right for you.

In the narrow world of the World Wide Web, shared storage isn't a common need. Typically, web services are horizontally scalable and work well as separate, distributed systems. This alleviates the need for shared storage and for application failover, because the application is already running on all the other nodes. Web systems that do rely on shared storage often find network attached storage (such as NFS) to be more than sufficient. Routers are in a similar position: routers route, and post-failure there isn't a tremendous amount of logic or procedure involved in enabling services on a router aside from assigning it IP addresses and making it participate in whatever routing negotiations are needed.

Databases, on the other hand, are much more complicated beasts. The classic path-of-least-resistance approach to making databases highly available is through shared attached storage and a heartbeat between two machines. We will go into this in Chapter 8. Each product has its place.

Pouring Concrete: Foundry ServerIron

Instead of speaking abstractly about the device and the service, let's look at a concrete example. To focus narrowly on high availability, we will investigate a static website with no complicated content synchronization or session issues. This short example is not a "guide to the Foundry ServerIron"; for that, you should download the product documentation from the vendor. Here we intend only to demonstrate purpose and placement as well as overall simplicity.

www.example.com is a site that serves web pages over port 80 from the IP address 192.168.0.10. Our goal is to ensure that if any single architecture component we control fails, the service will survive.

Let's first look at a simple picture of two machines (www-1-1, www-1-2) running Apache with IP addresses 10.10.10.11 and 10.10.10.12, respectively. Out front, we have a Foundry Networks ServerIron web switch (ws-1-1) providing high availability. The ServerIron has a clean and simple configuration for "balancing" the load across these two machines while remaining aware of their availability:

server predictor round-robin
server port 80
  tcp
server real www-1-1 10.10.10.11
  port http url "HEAD /"
server real www-1-2 10.10.10.12
  port http url "HEAD /"
server virtual www.example.com 192.168.0.10
  port http
  bind http www-1-1 http www-1-2 http


This configuration tells the ServerIron that two real web servers are providing service over HTTP and that they should be considered available if they respond successfully to a HEAD / request. A virtual server listening at 192.168.0.10 on port 80 should send requests in a round-robin fashion back to those two configured real servers. This configuration can be seen with the additional fault-tolerant switching infrastructure shown in Figure 4.4. (This is a costly evolution from Figure 4.3.)
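Conceptually, the virtual server's job is simply to hand each new connection to the next real server that is passing its check. Here is a minimal sketch of that selection logic, assuming a check function shaped like the http_check example earlier in this section; the addresses mirror www-1-1 and www-1-2 above, and an actual device maintains health state asynchronously in hardware rather than checking per request.

import itertools

REAL_SERVERS = ["10.10.10.11", "10.10.10.12"]   # www-1-1 and www-1-2

def round_robin(check):
    """Yield the next healthy real server for each incoming request."""
    counter = itertools.count()
    while True:
        pool = [server for server in REAL_SERVERS if check(server)]
        if not pool:
            raise RuntimeError("no real servers are passing their service checks")
        yield pool[next(counter) % len(pool)]

# Example usage (the lambda stands in for whatever check is configured):
#   chooser = round_robin(lambda host: http_check(host, expect="example"))
#   backend = next(chooser)   # address to which the next request is proxied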

Figure 4.4. Network architecture with a single point of failure.


Note

In practice, this routing configuration is somewhat involved and is well outside the scope of this book. The configuration of adjacent networking devices and the overall network topology can drastically affect the actual implementation. Refer to the appropriate vendor-provided configuration guide.


However, although we have solved the issue of service vulnerability due to the loss of www-1-1 or www-1-2, we have introduced a new single point of failure: ws-1-1. We are no better off. You might argue that a black-box networking device is less prone to failure than a commodity web server. That may be true, or it may not. You can run it in production and find out, or you can work the single point of failure out of the architecture; it's up to you.

So, let's take the next step and build some fault tolerance into the web switch layer. Foundry, like all other vendors, provides a built-in failover solution to accompany its product.

We add another identical ServerIron named, suitably, ws-1-2. We have Ethernet port 1 on each switch plugged in to our egress point to the outside world, and we have the switches cross-connected on Ethernet port 2. We configure them for hot-standby as follows.

On ws-1-1 and ws-1-2:

vlan 2
  untag ethernet 2
  no spanning-tree
  exit

server router-ports 1
server backup ethernet 2 00e0.5201.0c72 2

write memory
end
reload


In a play-by-play recap: the first stanza places the second Ethernet port on the switch into a private VLAN so that the two switches can chat in private. The second stanza identifies the Ethernet port connected to the router (port 1) and configures hot-standby over the cross-connect on Ethernet port 2. The last stanza simply saves the configuration and reboots the switch.

Presto! We now have failover on the web switch level, but how does the architecture change? Figure 4.5 shows the architecture recommended by many appliance vendors in their respective product documentation.

Figure 4.5. Network architecture with no single point of failure.


Now the web service can survive any single component failure, but at what cost? The architecture here has just grown dramatically; there is now more equipment providing high availability than there is providing the actual service. We have grown from the architecture depicted in Figure 4.4 to the one illustrated in Figure 4.5, effectively doubling the network infrastructure, all to make two web servers highly available. Additionally, these hardware high availability/load balancing devices carry a substantial sticker price, and on top of that come the ongoing support and maintenance costs.

Although this solution may make more financial sense when placed in front of 100 or 1,000 servers, the fact that it makes little sense in the scaled-down environment should be duly noted. Part of scalability is seamlessly scaling down as well as scaling up.

Remember that the focus of this chapter is high availability, not load balancing. This architecture needs only high availability but is paying for both. This example isn't to discredit ServerIrons or hardware high availability/load balancing devices, but rather to illustrate that they aren't always the right tool for the job. In the end, most of these devices are more useful in the load-balancing realm, but we'll talk about that in Chapter 5.

If the architecture looks sound, but the price tag does not, several commercial and noncommercial alternatives to the Foundry products described in the previous example are available. Freevrrpd is a free VRRP implementation, and linux-ha.org has several projects to perform pairwise failover. Due to concerns that VRRP is encumbered by intellectual property claims (and for the sake of building a better mousetrap), the OpenBSD team has developed a similar protocol called Common Address Redundancy Protocol (CARP). These are all reasonable solutions because load balancing was never the goal, only high availability.

Additionally, some cost cuts can be made by collapsing the front-end and back-end switches onto the same hardware. However, this switch reuse often isn't applicable in large architectures because of the different demands placed on the front-end and back-end switches. Front-end switches are often inexpensive because they are used simply to connect routers to high availability/load balancing devices and firewalls, whereas the back-end switches are core infrastructure switches with heavy utilization (and a hefty price tag).



