Compute as Fast as the Engineers Can Think!---Ultrafast Computing Team Final Report

Special issue on “Theory and practice of high-performance computing, communications, and security”

The Journal of Supercomputing, 2011

With the rapid growth in computing and communication technology, the past decade has witnessed a proliferation of powerful parallel and distributed systems and an ever-increasing demand for practical applications of high-performance computing, communications, and security (HPCCS). HPCCS has moved into the mainstream of computing and has become a key technology in determining future research and development activities in many academic and industrial branches, especially when the solution of large and complex problems must cope with very tight timing schedules.

High Performance Computing in Science and Engineering’ 04

Springer eBooks, 2005

The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

High-Performance Computing: A Primer

2017

Computational science is well on its way into the exascale era. High-performance computing emerged to meet ever-increasing demands for processing speed. High-performance computers have evolved from the MFLOPS to the GFLOPS to the PFLOPS scale over the past two decades (FLOPS = floating-point operations per second). High-performance computing is fast computing using high-performance computers such as supercomputers. HPC has become a determinant of industrial competitiveness and advanced research in several areas. This paper presents a brief introduction to high-performance computing.
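The MFLOPS-to-exascale progression the primer describes can be made concrete with a small sketch. The rate table below simply spells out the SI prefixes behind the acronyms; the fixed workload of 10^15 operations is a hypothetical figure chosen for illustration, not a number from the paper.

```python
# Illustrative sketch: the FLOPS scales named in the primer, and how
# long a fixed workload would take at each sustained rate.
# The 1e15-operation workload is a hypothetical assumption.

RATES = {
    "MFLOPS": 1e6,   # mega: 10^6 floating-point operations per second
    "GFLOPS": 1e9,   # giga: 10^9
    "TFLOPS": 1e12,  # tera: 10^12
    "PFLOPS": 1e15,  # peta: 10^15
    "EFLOPS": 1e18,  # exa:  10^18 (the "exascale era")
}

def seconds_for(ops: float, rate: float) -> float:
    """Time in seconds to execute `ops` operations at `rate` FLOPS."""
    return ops / rate

workload = 1e15  # hypothetical: 10^15 floating-point operations
for name, rate in RATES.items():
    print(f"{name}: {seconds_for(workload, rate):.0e} s")
```

At these scales, the same workload shrinks from decades (megascale) to one second (petascale), which is the sense in which each prefix step changes what is computationally feasible.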

Highly Parallel Computation

Science, 1990

Highly parallel computing architectures are the only means to achieve the computational rates demanded by advanced scientific problems. A decade of research has demonstrated the feasibility of such machines, and current research focuses on which architectures are best suited for particular classes of problems. The architectures designated as MIMD and SIMD have produced the best results to date; neither shows a decisive advantage for most near-homogeneous scientific problems. For scientific problems with many dissimilar parts, more speculative architectures such as neural networks or data flow may be needed.

Computation has emerged as an important new method in science. It gives access to solutions of fundamental problems that pure analysis and pure experiment cannot reach. Aerospace engineers, for example, estimate that a complete numerical simulation of an aircraft in flight could be performed in a matter of hours on a supercomputer capable of sustaining at least 1 trillion floating point operations per second (teraflops, or tflops). Researchers in materials analysis, oil exploration, circuit design, visual recognition, high-energy physics, cosmology, earthquake prediction, atmospherics, oceanography, and other disciplines report that breakthroughs are likely with machines that can compute at a tflops rate.
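The "matter of hours at a sustained teraflops" claim is easy to check with back-of-the-envelope arithmetic. The total operation count assumed below (10^16 floating-point operations for the full simulation) is a hypothetical figure chosen to match the article's timescale, not a number given in the source.

```python
# Back-of-the-envelope sketch of the article's claim: a simulation
# that takes "a matter of hours" at a sustained 1 tflops.
# The operation count is an assumed, illustrative figure.

TFLOPS = 1e12        # 10^12 floating-point operations per second

ops = 1e16           # assumed total operations for the simulation
seconds = ops / TFLOPS
hours = seconds / 3600
print(f"{hours:.1f} hours")  # ~2.8 hours at a sustained 1 tflops
```

Working backwards, any workload in the 10^15 to 10^17 operation range lands in the minutes-to-days window at a teraflops, which is consistent with the article's framing.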

High Performance Computing in Science and Engineering 2000

Springer eBooks, 2001


New trends in high performance computing

Parallel Computing, 2001

The series is named in honour of George Boole, the first professor of Mathematics at UCC, whose seminal work on logic in the mid-1800s is central to modern digital computing. To mark this great contribution, leaders in the fields of computing and mathematics are invited to talk to the general public on directions in science, on past achievements, and on visions for the future.