There are two aspects of deploying caches. First, you must choose the right technology or combination of technologies to fit the problem at hand. Then you must place it in the architecture.
The true beauty of a cache is that it isn't needed for operation, just for performance and scalability. As seen in Chapter 6, images could have been served off the main site directly, but a static cache was used for performance and scalability. If such immense scalability is no longer required, the architecture can easily be reverted to one without a caching system.
Most caching systems (except for transparent layered caches) do not have substantial operational costs, so their removal would not provide a substantial cost savings. In these cases, after an effective and efficient caching mechanism is deployed, it is senseless to remove it from the architecture unless it will be replaced by a better solution.
After that long-winded explanation of different caching technologies, we can concentrate on the only one that really applies to scalable web architectures: distributed caching. However, the concepts from write-through caching can be cleverly applied to distributed caching for further gain.
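To make the write-through idea concrete, here is a minimal sketch of applying it in front of a distributed cache. The class and the plain dicts standing in for the cache and the shared database are illustrative assumptions, not an implementation from this book; in practice the cache would be a networked system such as memcached.

```python
# Sketch only: plain dicts stand in for a distributed cache and a
# shared database. All names here are hypothetical.

class WriteThroughCache:
    def __init__(self, database):
        self.database = database   # the shared, authoritative store
        self.cache = {}            # stand-in for a distributed cache

    def write(self, key, value):
        # Write-through: update the authoritative store, then the
        # cache, so reads immediately after a write are cache hits.
        self.database[key] = value
        self.cache[key] = value

    def read(self, key):
        # Serve from the cache when possible; fall back to the
        # database and populate the cache on a miss.
        if key in self.cache:
            return self.cache[key]
        value = self.database[key]
        self.cache[key] = value
        return value
```

The gain is that data written through the cache is already warm: readers never have to pay the miss penalty for recently written keys.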
As mentioned previously, increasing performance isn't our primary goal. Caching solutions speed things up by their very nature, but the goal remains scalability. To build a scalable solution, we hope to leverage these caches to reduce contention on a shared resource. Architecting a large site with absolutely no shared resources (for example, a centralized database or network-attached storage) is challenging and not always feasible.
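A toy illustration (an assumption for exposition, not from this book) shows how a cache reduces contention on a shared resource: a counter on the stand-in database records how many requests actually reach the shared store.

```python
# Hypothetical names throughout; the "database" is a dict wrapper
# counting how many queries hit the shared resource.

class CountingDatabase:
    def __init__(self, rows):
        self.rows = rows
        self.reads = 0   # queries that reach the shared resource

    def query(self, key):
        self.reads += 1
        return self.rows[key]

def cached_get(cache, database, key):
    # Check the cache first; query the shared database only on a
    # miss, then populate the cache for subsequent requests.
    if key not in cache:
        cache[key] = database.query(key)
    return cache[key]

db = CountingDatabase({"page": "<html>...</html>"})
cache = {}
for _ in range(100):
    cached_get(cache, db, "page")
print(db.reads)  # prints 1: only the first request reaches the database
```

One hundred requests produce a single database read; the other ninety-nine are absorbed by the cache, which is precisely the reduction in contention we are after.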