No one likes to lose data. No one likes to have the services they provide be inaccessible. No one likes to clean up messes. No one likes to spend money.
The last point is a real kicker, and although it may be absolutely ridiculous, the fact remains that companies of all shapes and sizes have a fear of investing in technology infrastructure.
Specifically, at this time in computing we are all just now beginning to recover from the dot-com bubble popping. Inside the bubble, online users had an unrealistic value to companies, business plans weren't sound, and a rough Internet-related idea was enough to get millions in funding.
Many times, in such initiatives, the architecture is brute-forced: a huge investment in hardware and services to compensate for poor or understaffed engineering combined with too-short timelines. This, of course, was good for hardware vendors (Sun, EMC, Hitachi, Cisco, F5, the list is long). I'll be the last one to tell you to go buy cheap hardware. However, I also know that there are many unnecessarily powerful components driving today's architectures.
In economics, there is a law of diminishing returns. The basic concept is that as workers are added to a wheat field, at some point each additional worker will contribute less than his predecessor. This was first thought to apply only to agriculture but was later adopted as a universal economic law applicable to all productive enterprises. As with most laws in unrelated fields, we see parallels in the field of computer science.
Computers get faster every year. Moore's Law, in its popular form, says that they double in speed/capacity/performance roughly every 18 months. That sounds great, but when a purchase is made and an architecture is designed, there is a single "market offering." In other words, at the time of the purchase you have a snapshot of a rapidly changing landscape. This means that what you buy today will be obsolete tomorrow and that you should count on your architecture running on yesterday's technology. Combine this with the law of diminishing returns, and you can see something more profound.
The "best" technology that can be bought today is expensive. Effectively, at the top end of the performance curve, more and more buys you less and less. So, you can blow your budget on the fastest, biggest, shiniest thing out there knowing that it will be inexpensive tomorrow and obsolete the next day, or you can leverage a better return on investment by buying today's (or even yesterday's) commodity hardware.
Why is this so? During the dot-com era (and even today) companies tend to buy the fastest hardware they can afford in order to "scale." Perhaps by now you can see the fundamental flaw in that mentality. If the intention is to accomplish seamless scalability by purchasing a large "fast" monolithic machine, you will inevitably saturate its capabilities and learn the difference between scalability and performance the hard way. Scale out, not up; horizontal, not vertical.
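The diminishing-returns argument can be made concrete with a back-of-the-envelope calculation. The machine names, prices, and "capacity units" below are invented purely for illustration; they are not real market figures, but the shape of the curve they sketch is the point.

```python
# Hypothetical price/performance figures illustrating diminishing
# returns at the top end of the hardware curve.
servers = [
    # (name, price in dollars, relative capacity units)
    ("commodity 1U box",   3_000,   1.0),
    ("midrange server",    15_000,  3.0),
    ("high-end monolith",  120_000, 10.0),
]

# Cost per unit of capacity climbs as you move up the curve.
for name, price, capacity in servers:
    print(f"{name}: ${price / capacity:,.0f} per capacity unit")

# Scaling out: matching the monolith's capacity with commodity nodes.
commodity_price, commodity_capacity = 3_000, 1.0
monolith_price, monolith_capacity = 120_000, 10.0
nodes = int(monolith_capacity / commodity_capacity)
print(f"{nodes} commodity nodes: ${nodes * commodity_price:,} "
      f"vs. one monolith: ${monolith_price:,}")
```

With these made-up numbers, ten commodity boxes deliver the monolith's aggregate capacity at a quarter of its price; the real-world ratios vary, but the top of the curve is reliably the most expensive place to buy each additional unit of performance.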
Scalability is the goal, but there are some other commonly overlooked challenges when working with big architectures. The dot-com era taught us something that no businessperson would have ever believed before. It taught us that it is possible to take a concept at 8 a.m.; translate that into a business initiative; proceed through design, implementation, testing, and launch; and have millions of customers for that idea by the close of business. The fact that this is possible means that if the business can capitalize on such efficiencies, it will.
Many techniques attempt to handle the issues of rapid development. One popular approach is extreme programming. Regardless of the technique involved, the fact that the solution not only has to work, but also has to scale, dramatically changes the playing field and complicates the rules of the game.
The rest of this book attempts to get you "thinking scalable." We will spend some time first on tried-and-true techniques that can help prevent disaster and speed recovery from mistakes due to rushed timelines or lapses in judgment.