Why Load Balance Servers?


There are many reasons for using content switching to implement server load balancing, and it is not our intention to cover them all here. The main contributing factor over the past few years has been the explosion in Internet use and the consequent need for bigger, faster, and more reliable Web sites. While many of the examples in this book refer to protocols and applications with their roots in the Internet, such as HTTP, HTTPS, and SMTP, server load balancing is by no means limited to these areas. Indeed, wherever servers exist, it is increasingly likely that some form of server load balancing is required.

While early users of the Internet came to expect a slow and frustrating experience when browsing the Web, today's user is blessed with ever-increasing access speeds and technologies that in turn raise expectations of how a Web site should perform. Modern networking has become a battle of moving the bottleneck around the network, with ever-greater speeds and feeds available to address it at every point. While historically server administrators might not have concerned themselves with performance and scalability beyond the bounds of a single box, the ever-increasing performance of networking technology means that the bottleneck can quickly become the application or the server itself.

One other key driver of the Internet Age is the globalization of customer base. With a Web presence, an organization need never close its shop doors to customers and can operate 24 hours a day, 7 days a week. This again introduces an interesting challenge to the server administrator in terms of maintenance and availability.

To this end, the implementation of server load balancing can provide advantages such as:

  • Scalability: An application need no longer be bound by the performance of a single host. The ability to grow an application's resources N-fold simply by adding further servers means that administrators are not forced to justify the expense of large-scale machines, but can instead build a plug-and-play structure from smaller physical servers with better economies of scale.

  • Reliability/redundancy: Putting all your eggs into one proverbial basket comes with inherent risk. No matter how reliable a single server is made to be, its failure effectively brings down the entire application. Spreading the load over a number of physical servers reduces this exposure enormously.

  • Operability and maintenance: With a 24 x 7 availability expectation, server load balancing provides an environment in which individual servers can be removed from operation for scheduled maintenance such as operating system or hardware upgrades. The ability to roll out new versions and builds of applications while retaining control of when the switch-over occurs is another powerful use of server load balancing.
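The scalability and maintenance points above ultimately rest on one mechanism: distributing incoming requests across a pool of real servers. As a rough, vendor-neutral sketch (the server names and the simple round-robin policy here are illustrative assumptions, not the content switch implementations discussed later in the book), the core scheduling idea can be expressed as:

```python
from itertools import cycle

class RoundRobinPool:
    """Distribute requests evenly across a pool of real servers."""

    def __init__(self, servers):
        self.servers = list(servers)
        self._rotation = cycle(self.servers)

    def pick(self):
        # Each call returns the next server in strict rotation, so load
        # spreads evenly when requests are similar in cost.
        return next(self._rotation)

    def remove(self, server):
        # Taking a server out of service (e.g. for scheduled maintenance)
        # only requires rebuilding the rotation; clients are unaffected.
        self.servers.remove(server)
        self._rotation = cycle(self.servers)

pool = RoundRobinPool(["web1", "web2", "web3"])  # hypothetical server names
print([pool.pick() for _ in range(4)])  # web1, web2, web3, web1
```

Real content switches layer health checking, session persistence, and weighted metrics on top of this basic rotation, but the principle of spreading requests over interchangeable servers is the same.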

The Alternatives to Server Load Balancing

Prior to hardware- or network-based server load balancing technologies, multi-server implementations used other approaches to increase the availability and scalability of server farms, typically some form of server clustering. Such clustering often relies on mechanisms such as multicast to achieve similar results, though usually with less reliability and, as we've seen in other content switching applications, less scalability.



Optimizing Network Performance with Content Switching: Server, Firewall and Cache Load Balancing
ISBN: 0131014684
Year: 2003
Pages: 85
