Scaling Web Services


For many companies, Web services see more traffic than any other single system. As a company grows, its public identity grows with it, and more and more people come looking for information about the company, driving ever more traffic to the Web servers. Companies now use the Web not only to publish information but also to support their products: fully indexed searches, dynamic content, and driver and patch downloads all add load to the Web servers. Web services must be able to scale if the company is to keep up with the rest of the industry.

Beefy Boxes Versus Many Boxes

Traditionally, applications scaled by improving the performance of the hardware. Almost all applications ran on a single server, and the only way to make an application faster or to increase its capacity was to upgrade that server. Web services introduced the industry to an application in which much of the data is static; even today's dynamic sites are mostly static frameworks with bits of data read from another system. Because a large portion of the data is read-only, it can be replicated to multiple locations and updated in batches; users do not make changes on one system that must then be replicated to the others. This is a prime environment for multiple servers behind a load balancer. By adding Web servers, giving each a local copy of the static content, and pointing them all to a central source for dynamic content, performance could be scaled to amazing levels. Soon the Internet was filled with farms of hundreds of front-end Web servers servicing hundreds of millions of hits each day.
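The replicated-static/central-dynamic pattern described above can be sketched in a few lines. This is a minimal illustration only; the `WebServer` and `RoundRobinBalancer` classes and the in-memory stores are hypothetical stand-ins, not part of any real product:

```python
# Minimal sketch: each Web server holds a local replica of static content
# and reads dynamic content through to a shared central store.

class WebServer:
    def __init__(self, static_content, dynamic_store):
        self.static_content = static_content  # local replicated copy
        self.dynamic_store = dynamic_store    # shared central source

    def handle(self, path):
        # Serve read-only content locally; fetch live data centrally.
        if path in self.static_content:
            return self.static_content[path]
        return self.dynamic_store.get(path, "404 Not Found")


class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = servers
        self.next_index = 0

    def handle(self, path):
        # Distribute each request to the next server in rotation.
        server = self.servers[self.next_index]
        self.next_index = (self.next_index + 1) % len(self.servers)
        return server.handle(path)


static = {"/index.html": "<html>Welcome</html>"}   # replicated to each node
dynamic = {"/stock": "42 units in stock"}          # single central store

farm = RoundRobinBalancer([WebServer(dict(static), dynamic) for _ in range(3)])
print(farm.handle("/index.html"))  # served from a local replica
print(farm.handle("/stock"))       # read through to the central store
```

Because the static replicas are read-only, adding capacity is just a matter of copying the content to another node and adding it to the balancer's list.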

Using Cryptographic Accelerators for SSL

As the increase in Web server usage swept the Internet, new uses for Web servers appeared. Traditional brick-and-mortar companies began doing business on the Internet, and security for these business transactions became a strict requirement. Companies turned to encryption to offer a secure method of doing business online, and SSL (Secure Sockets Layer) became the de facto standard for encrypting Web traffic. SSL requires the Web server to perform cryptographic operations on data. These operations consume CPU cycles and can quickly bog down a Web server. To continue to scale Web services with SSL, administrators at first simply added more and more Web servers. The industry quickly realized that this was not an optimal solution, and SSL accelerators were created. By offloading cryptographic processing onto a dedicated hardware device, the CPU is freed to perform other tasks. SSL encryption loads can reduce the performance of a Web server by as much as 75%; an SSL accelerator can recover that performance without adding servers, which reduces maintenance tasks and warranty costs and frees up valuable data center space.
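The offload principle itself, handing expensive cryptographic work to a dedicated resource so the request path stays responsive, can be illustrated with a software analogy. Here a worker pool stands in for the accelerator card, and a repeated hash stands in for SSL's bulk encryption; all names are illustrative, not a real accelerator API:

```python
# Software analogy for crypto offload: a dedicated worker pool plays the
# role of the SSL accelerator card, keeping the main request path free.
import hashlib
from concurrent.futures import ThreadPoolExecutor

crypto_accelerator = ThreadPoolExecutor(max_workers=2)  # the "card"

def expensive_crypto(payload: bytes) -> str:
    # Stand-in for SSL handshake math and bulk encryption: repeated
    # hashing burns CPU the way real cryptographic operations do.
    digest = payload
    for _ in range(10_000):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

def handle_request(payload: bytes) -> str:
    # Hand the crypto off and keep the request thread available for
    # other work; collect the result only when it is needed.
    future = crypto_accelerator.submit(expensive_crypto, payload)
    # ... other request-handling work could proceed here ...
    return future.result()

print(handle_request(b"GET /secure HTTP/1.0")[:16])
```

A hardware accelerator does this at the bus level rather than in a thread pool, but the division of labor is the same: the server's CPU handles requests while the dedicated resource handles the cryptography.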

SSL accelerators

Many load balancers, also known as layer 4-7 switches, now offer SSL acceleration. Other SSL accelerators come in the form of PCI cards installed in the servers themselves.


n-Tier Application Model

Many Web-based applications start their lives on a single box that serves as Web server, application server, and data store. This works well for small applications and keeps the data neatly bundled in a single system, but as these applications are scaled, a single box is often not sufficient to keep up with their needs. To scale such applications, it is useful to move to an n-tier model. Separating the database from the application lets an administrator dedicate the performance of a system to being a database; that system can be built and tuned with the specific database in mind, which allows it to scale well. The application layer often has different requirements as well: it might be demanding enough to warrant multiple application servers running in a load-balanced group to keep up with the application. The Web layer can be scaled like any other Web server: load-balancing a group of Web servers lets them meet the demands of the users, and pointing them at the load-balanced application layer lets them take advantage of the application's distributed processing. The applications draw their data from the database and feed it up into the Web presentation layer. This model scales very well; as components of the system prove too demanding to share resources with other components, they are simply split off onto dedicated hardware.
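The tier separation described above can be sketched as three components with one-way dependencies: the Web tier calls the application tier, which calls the data tier. The class names and the dictionary-backed "database" are illustrative only; in production each tier would run on its own, possibly load-balanced, hardware:

```python
# Minimal sketch of a three-tier split: data, application, and Web
# presentation layers as separate components with one-way dependencies.

class DataTier:
    """Stands in for a dedicated, tuned database server."""
    def __init__(self):
        self.rows = {"order-1001": {"item": "widget", "qty": 3}}

    def query(self, key):
        return self.rows.get(key)


class ApplicationTier:
    """Business logic; could be a load-balanced group of app servers."""
    def __init__(self, db):
        self.db = db

    def order_summary(self, order_id):
        row = self.db.query(order_id)
        if row is None:
            return None
        return f"{row['qty']} x {row['item']}"


class WebTier:
    """Presentation only; draws everything from the application tier."""
    def __init__(self, app):
        self.app = app

    def render(self, order_id):
        summary = self.app.order_summary(order_id)
        return f"<p>{summary}</p>" if summary else "<p>Order not found</p>"


web = WebTier(ApplicationTier(DataTier()))
print(web.render("order-1001"))  # <p>3 x widget</p>
```

Because each tier talks only to the one below it, any tier that becomes a bottleneck can be moved to dedicated hardware, or replicated behind a load balancer, without changing the others.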

Scaling Web Services via Web Farms

When the Dot Com boom first hit, companies scrambled to build systems powerful enough to keep up with the demands of their users. Early Dot Com companies put up powerful Unix systems to run their Web sites, but it was soon realized that this was a very inefficient method: because the Dot Com world required resources to be accessible 24 hours a day, seven days a week, maintaining redundant Unix systems was very expensive. The concept of the Web farm caught on very quickly instead. By running multiple Web servers behind a load balancer, the load from the user base is distributed across the systems, and the environment can run on very inexpensive servers. The stability of any individual system is not a great concern, because if a server fails the other servers take up the load; if the load becomes too high, the administrator can simply add more Web servers to the farm. This became the de facto standard for high-traffic Web sites. By replicating the content, or by having the servers draw their content dynamically from another source, new systems can be brought online very quickly and easily. Sites using this methodology have scaled to support more than 300 million hits per day.
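The failover property that makes inexpensive servers acceptable, the balancer simply skipping failed nodes, can be sketched as follows. The `FarmServer` and `FailoverBalancer` classes are hypothetical illustrations, not a real load-balancer API:

```python
# Sketch of Web-farm failover: the balancer skips servers that have
# failed, so the farm keeps serving as long as any node is healthy.

class FarmServer:
    def __init__(self, name):
        self.name = name
        self.healthy = True

    def serve(self, path):
        return f"{self.name} served {path}"


class FailoverBalancer:
    def __init__(self, servers):
        self.servers = servers
        self.next_index = 0

    def handle(self, path):
        # Try each server in round-robin order, skipping failed ones.
        for _ in range(len(self.servers)):
            server = self.servers[self.next_index]
            self.next_index = (self.next_index + 1) % len(self.servers)
            if server.healthy:
                return server.serve(path)
        raise RuntimeError("entire farm is down")


farm = FailoverBalancer([FarmServer("web1"), FarmServer("web2"),
                         FarmServer("web3")])
farm.servers[0].healthy = False   # web1 fails...
print(farm.handle("/index.html")) # ...and web2 picks up the request
```

Growing the farm is the same operation in reverse: appending another `FarmServer` to the list adds capacity without touching the running nodes, which is exactly why the Web-farm model made cheap hardware viable.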



Microsoft Windows Server 2003 Insider Solutions
ISBN: 0672326094
Year: 2003
Pages: 325
