Chapter 2 discussed how system performance can degrade as the number of users accessing the system grows. One solution explored there was to increase the processing power of the computer by providing it with additional resources and memory, i.e., vertically scaling the hardware. While this is not ideal in most situations, it can provide a temporary fix that supports the increased number of users.
Another solution was to add hardware and scale the number of users in a linear fashion. Users would be distributed across multiple machines, providing both linear scalability and availability. Availability follows because, if one machine fails, the remaining machines continue to provide access.
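The distribute-and-fail-over idea can be sketched in a few lines. This is a minimal illustration, not a description of any real load balancer; the node names and the is_up check are invented for the example:

```python
def route(user_id, nodes, is_up):
    """Assign a user to a node round-robin, skipping nodes that are down."""
    n = len(nodes)
    for offset in range(n):
        node = nodes[(user_id + offset) % n]
        if is_up(node):
            return node
    raise RuntimeError("no nodes available")

# Three hypothetical nodes; with node2 down, its users land on node3.
nodes = ["node1", "node2", "node3"]
node2_failed = lambda node: node != "node2"
```

The point is that scalability (users spread evenly) and availability (a failed node's users are transparently rerouted) come from the same mechanism.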
Consider one large, complex process running on a computer with a single processor; how long it takes to complete depends on that processor's speed, among many other factors. One option is to increase the speed of the existing processor (like increasing the resources on the current hardware) so that it can accept more workload. A processor, however, is manufactured to a specification and performs at a fixed speed; the same processor cannot simply be made faster. It could be swapped for a higher-speed processor, provided that the hardware that uses it supports the new architecture; this is another flavor of vertical scalability. Alternatively, instead of replacing the processor with a faster one, an additional processor can be added, doubling the processing power and allowing the work to be distributed between the two processors.
This could be taken even further with three or four processors, or with multiple computers (nodes) each containing multiple processors. All of these processors and nodes could be put to work simultaneously, performing functions in parallel.
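The idea of splitting one large task across several processors can be sketched with a process pool. The workload here (a sum of squares) and the slicing scheme are illustrative assumptions, not taken from the text:

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(bounds):
    """Compute the sum of squares over one slice of the full range."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    """Split the range [0, n) into one slice per worker and combine results."""
    step = n // workers
    slices = [(w * step, n if w == workers - 1 else (w + 1) * step)
              for w in range(workers)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(partial_sum, slices))

if __name__ == "__main__":
    parallel_sum_of_squares(1_000_000)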
Advantages of moving toward parallel processing include:
The price of a single CPU rises steeply with its speed; with parallel processing, comparable aggregate power can be assembled from multiple lower-cost processors without paying this premium.
Message-passing parallel computers can be built using off-the-shelf components and processors, thus reducing development time.
Using one big processor to do an entire job is more expensive than using many smaller processors, which also distribute the workload.
Running a program in parallel on multiple processors can be faster than running the same program on a single processor, provided the work can be divided among them.
A system can be scaled or built up gradually. If, over time, a parallel system becomes too small for the tasks needed, additional processors can be added to meet the new requirements.
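The speedup claim above is bounded by how much of the program can actually run in parallel, a relationship captured by Amdahl's law. A minimal sketch (the fractions chosen below are illustrative, not from the text):

```python
def amdahl_speedup(parallel_fraction, processors):
    """Ideal speedup when only part of the work can run in parallel."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / processors)

# If 90% of the work parallelizes, four processors give roughly 3.1x,
# not 4x: the serial 10% limits the gain no matter how many are added.
four_way = amdahl_speedup(0.9, 4)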
Chapter 2 also examined the hardware options available to support these parallel concepts, such as clustered SMP, MPP, and clustered NUMA. These clustered solutions provide linear scalability, help distribute the workload, and provide availability. Because scalability is built into these hardware architectures, they are all potential platforms for parallel processing.