An International Computer Experiment

The Present and Future of QCD

arXiv (Cornell University), 2023

Abstract: This White Paper presents the community inputs and scientific conclusions from the Hot and Cold QCD Town Meeting that took place September 23-25, 2022 at MIT, as part of the Nuclear Science Advisory Committee (NSAC) 2023 Long Range Planning process. A total of 424 physicists registered for the meeting. The meeting highlighted progress in Quantum Chromodynamics (QCD) nuclear physics since the 2015 LRP (LRP15) and identified key questions and plausible paths to obtaining answers to those questions, defining priorities for our research over the coming decade. In defining the priority of outstanding physics opportunities for the future, prospects for both the short term (~ 5 years) and the longer term (5-10 years and beyond) are identified, together with the facilities, personnel, and other resources needed to maximize the discovery potential and maintain United States leadership in QCD physics worldwide. This White Paper is organized as follows: In the Executive Summary, we detail the Recommendations and Initiatives that were presented and discussed at the Town Meeting, and their supporting rationales. Section 2 highlights major progress and accomplishments of the past seven years. It is followed, in Section 3, by an overview of the physics opportunities for the immediate future and their relation to the next QCD frontier: the EIC. Section 4 provides an overview of the physics motivations and goals associated with the EIC. Section 5 is devoted to workforce development and support of diversity, equity, and inclusion. This is followed by a dedicated section on computing in Section 6. Section 7 describes the national need for nuclear data science and its relevance to QCD research. Note: QCD Town Meeting White Paper, as submitted to the 2023 NSAC LRP committee on Feb. 28, 2023.

Overview of the QCDSP and QCDOC computers

IBM Journal of Research and Development, 2005

The QCDSP and QCDOC computers are two generations of multithousand-node, multidimensional mesh-based computers designed to study quantum chromodynamics (QCD), the theory of the strong nuclear force. QCDSP (QCD on digital signal processors), a four-dimensional mesh machine, was completed in 1998; in that year, it won the Gordon Bell Prize in the price/performance category. Two large installations (of 8,192 and 12,288 nodes, with a combined peak speed of one teraflops) have been in operation since. QCD-on-a-chip (QCDOC) utilizes a six-dimensional mesh and compute nodes fabricated with IBM system-on-a-chip technology. It offers a tenfold improvement in price/performance. Currently, 100-node versions are operating, and there are plans to build three 12,288-node, 10-teraflops machines. In this paper, we describe the architecture of both the QCDSP and QCDOC machines, the operating systems employed, the user software environment, and the performance of our application, lattice QCD. (IBM J. Res. & Dev., vol. 49, no. 2/3, March/May 2005; P. A. Boyle et al.)
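The nearest-neighbour communication pattern that these mesh machines provide in hardware can be illustrated with a small sketch. The code below is not from the paper; the grid shape and node numbering are hypothetical, and it shows only the generic idea of addressing the eight nearest neighbours of a node on a four-dimensional periodic (torus) mesh, the pattern lattice QCD halo exchange relies on.

```python
# Illustrative sketch (assumed, not from the paper): neighbour addressing
# on a hypothetical 4-D periodic mesh of nodes.

def node_rank(coords, dims):
    """Linearize 4-D mesh coordinates (t, x, y, z) into a node rank."""
    rank = 0
    for c, d in zip(coords, dims):
        rank = rank * d + c
    return rank

def neighbors(coords, dims):
    """Return the ranks of the 8 nearest neighbours (one step forward and
    backward in each of the 4 dimensions), with periodic wrap-around."""
    result = []
    for axis in range(len(dims)):
        for step in (+1, -1):
            nbr = list(coords)
            nbr[axis] = (nbr[axis] + step) % dims[axis]
            result.append(node_rank(nbr, dims))
    return result

dims = (4, 4, 4, 4)  # a hypothetical 256-node 4-D mesh
print(neighbors((0, 0, 0, 0), dims))
```

Because every node has the same fixed set of neighbours, communication links can be wired point-to-point in hardware, which is what makes a mesh topology a natural fit for the regular stencil of lattice QCD.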

QCDOC: project status and first results

Journal of Physics: Conference Series, 2005

After a brief overview of Quantum Chromodynamics (QCD), the fundamental theory of the strong interactions, we describe the QCDOC computer: its architecture, construction, software, and performance. Three 12K-node, 4-teraflops (sustained) QCDOC computers have been constructed, two at Brookhaven National Laboratory and one at the University of Edinburgh. The present status of these machines and their first physics results and objectives are discussed, and the catalytic role of the SciDAC program in enabling the effective use of this new architecture by the U.S. lattice QCD community is outlined.

Report of the QCD tools working group

arXiv preprint hep-ph/ …, 2000

We report on the activities of the “QCD Tools for heavy flavors and new physics searches” working group of the Run II Workshop on QCD and Weak Bosons. The contributions cover the topics of improved parton showering and comparisons of Monte Carlo programs and ...

US QCD computational performance studies with PERI

Journal of Physics: Conference Series, 2007

We report on some of the interactions between two SciDAC projects: the National Computational Infrastructure for Lattice Gauge Theory (USQCD) and the Performance Engineering Research Institute (PERI). Many modern scientific programs consistently report the need for faster computational resources to maintain global competitiveness. However, as the size and complexity of emerging high-end computing (HEC) systems continue to rise, achieving good performance on such systems is becoming ever more challenging. In order to take full advantage of these resources, it is crucial to understand the characteristics of the relevant scientific applications and the systems they run on. Using tools developed under PERI and by other performance measurement researchers, we studied the performance of two applications, MILC and Chroma, on several high performance computing systems at DOE laboratories. In the case of Chroma, we discuss how the use of C++ and modern software engineering and programming methods is driving the evolution of performance tools.