The ultimate goal of this cluster is to serve static content more cost-effectively than the main dynamic web servers can. Rather than leave that vague, let's set a specific goal.
We want to build a static image serving cluster to handle the static content load from the peak hour of traffic to our www.example.com site. We've described peak traffic as 125 initial page loads per second and 500 subsequent page loads per second. Optimizing the architecture that serves the 625 dynamic pages per second will be touched on in later chapters. Here we will just try to make the infrastructure for static content delivery cost effective.
We need to do a bit of extrapolation to wind up with a usable number of peak static content requests per second. We can start with a clear upper bound by referring back to Table 6.1:
(125 initial visits / sec * 58 objects / initial visit) +
(500 subsequent visits / sec * 9 objects / subsequent visit) =
7,250 + 4,500 = 11,750 requests / sec
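The arithmetic above can be checked with a trivial sketch; the visit rates and per-visit object counts are the figures taken from Table 6.1:

```python
# Upper-bound estimate of peak static requests per second.
# Inputs are the peak visit rates and per-visit object counts from Table 6.1.
INITIAL_VISITS_PER_SEC = 125
SUBSEQUENT_VISITS_PER_SEC = 500
OBJECTS_PER_INITIAL_VISIT = 58
OBJECTS_PER_SUBSEQUENT_VISIT = 9

peak_static_rps = (INITIAL_VISITS_PER_SEC * OBJECTS_PER_INITIAL_VISIT
                   + SUBSEQUENT_VISITS_PER_SEC * OBJECTS_PER_SUBSEQUENT_VISIT)
print(peak_static_rps)  # 11750
```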
As discussed earlier, this is an upper bound because it does not account for two forms of caching:

- Forward caches deployed by ISPs, which serve content to their users without those requests ever reaching our infrastructure
- Long-term client-side (browser) caches, which satisfy repeat requests locally

The actual reduction factor due to these external factors depends on which forward caching solutions ISPs have implemented, how many of our users sit behind those caches, and the long-term client-side caches those users have.
We could spend a lot of time here building a test environment that empirically determines a reasonable cache-induced reduction factor, but that sort of experiment is outside the scope of this book and adds very little to the example at hand. So, although you should be aware that there will be a nonzero cache-induced reduction factor, we will simplify our situation and assume that it is zero. Note that this is conservative and errs on the side of increased capacity.
Putting Your Larger or Smaller Site in Perspective
We can calculate from Table 6.1 that our average expected payload is slightly less than 2,500 bytes per request. Using 2,500 bytes as a round figure: 11,750 requests/second * 2,500 bytes/request = 29,375,000 bytes/second, or roughly 235 megabits per second.
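The bandwidth conversion is easy to get wrong (bytes versus bits), so here is the same calculation spelled out, using the request rate derived above and the 2,500-byte average payload:

```python
# Peak static-content bandwidth from request rate and average payload size.
PEAK_STATIC_RPS = 11_750      # upper-bound requests/sec computed earlier
AVG_PAYLOAD_BYTES = 2_500     # approximate average payload from Table 6.1

bytes_per_sec = PEAK_STATIC_RPS * AVG_PAYLOAD_BYTES
megabits_per_sec = bytes_per_sec * 8 / 1_000_000  # 8 bits/byte, 10^6 bits/Mb

print(bytes_per_sec)      # 29375000
print(megabits_per_sec)   # 235.0
```

Note that this is network throughput, so megabits (decimal, as network links are rated) are the natural unit, not megabytes.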