
much as an order of magnitude over commercial systems of comparable capabilities. This level of cost benefit has also been achieved in sustained performance for many, though not all, important classes of applications. The dramatic improvement made possible by Beowulf systems was recognized when a combined team of NASA and Los Alamos researchers was awarded the 1997 Gordon Bell Prize for Price/Performance. The Los Alamos team won the Gordon Bell Prize again in 1998 for a second Beowulf-class machine. With an order-of-magnitude price/performance advantage, Beowulfs open entirely new modes of system usage, bringing them within reach of areas previously without access to high-end computing. Not since the advent of the minicomputer in the mid-1960s or the microprocessor in the mid-1970s has there been such a wealth of new opportunities.
1.1 A Brief History
In 1993, most of the conditions necessary for the emergence of PC clusters were in place. The Intel 80386 processor was a major performance advance over its 80286 predecessor; DRAM densities and costs had reached the point where 8 MBytes of memory was within budget; disk drives of a hundred MBytes or more were available for PC configurations; Ethernet (10 Mbps) interface controllers were available for PCs; and hubs (not switches) were cheap enough to consider for small cluster configurations. In addition, an early version of Linux was undergoing rapid evolution, and PVM was gaining stature as the first cross-platform message-passing model for parallel programming to achieve wide acceptance. The high performance computing (HPC) community had also gained substantial experience in programming MPPs, due in part to the Federal High Performance Computing and Communications (HPCC) Program. At the same time, a number of universities were working to apply workstation clusters to real problems and to develop software tools that would facilitate their use.
What was missing was the opportunity to bring these still relatively weak system components together to address real problems. The NASA HPCC Program Earth and Space Sciences Project at the Goddard Space Flight Center had just such a problem. It required a single-user science station for holding, manipulating, and displaying the large data sets produced as output by grand challenge applications running on MPPs. The station had to cost no more than a high-end scientific workstation (< $50K), store at least 10 GBytes of data, and provide high bandwidth from secondary storage to the system display console. An initial requirement of near 1 Gflops peak performance was also specified. An analysis of commercial systems