Distributed computation of wave propagation models using PVM
Related papers
Modelling wave propagation on parallel computers
SEG Technical Program Expanded Abstracts 2001, 2001
We present a methodology to model 2D and 3D wave propagation in the subsurface of the earth, consisting of a sequence of techniques that lead to an algorithm which is especially powerful on parallel computers.
SEG Technical Program Expanded Abstracts 1997, 1997
Seismic wave modelling algorithms, used for calculating the seismic response of a given earth model, require large computational resources in terms of speed and memory. In this paper we describe the PVM (Parallel Virtual Machine) implementation of these algorithms in a distributed computing environment. Both the acoustic and elastic wave modelling equations are formulated as a first-order hyperbolic system. The numerical solution uses an explicit finite difference scheme which is fourth-order accurate in space and second-order accurate in time. A domain decomposition algorithm is used to distribute the workload, and the tasks communicate via PVM message passing calls. The efficiency and speed of the algorithms are tested on a cluster of SUN UltraSparc workstations.
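The abstract names the scheme but shows no code, so the following is a minimal, hypothetical 1D sketch of the approach it describes: a first-order acoustic system (pressure p and particle velocity v) advanced on a staggered grid with a scheme that is fourth-order in space and second-order in time, each PVM task exchanging a two-point halo with its neighbours. The task ids ltid/rtid, the message tag, and the grid size are illustrative assumptions; boundary tasks and stability (CFL) checks are omitted.

```c
/* Minimal sketch (not the authors' code): one interior update step of a
 * 1-D first-order acoustic system p_t = -K v_x, v_t = -(1/rho) p_x on a
 * staggered grid, 4th-order in space, 2nd-order (leapfrog) in time, with
 * a 2-point halo exchanged between neighbouring PVM tasks. */
#include <pvm3.h>

#define N    1000          /* interior points owned by this task     */
#define H    2             /* halo width for the 4th-order stencil   */
#define TAG  42            /* PVM message tag for halo traffic       */

static double p[N + 2*H], v[N + 2*H];   /* pressure, particle velocity */

static void exchange_halo(double *f, int ltid, int rtid)
{
    /* send my left edge to the left neighbour, fill my right halo */
    pvm_initsend(PvmDataDefault);
    pvm_pkdouble(&f[H], H, 1);            /* leftmost interior values  */
    pvm_send(ltid, TAG);
    pvm_recv(rtid, TAG);
    pvm_upkdouble(&f[N + H], H, 1);       /* right halo                */

    /* send my right edge to the right neighbour, fill my left halo */
    pvm_initsend(PvmDataDefault);
    pvm_pkdouble(&f[N], H, 1);            /* rightmost interior values */
    pvm_send(rtid, TAG);
    pvm_recv(ltid, TAG);
    pvm_upkdouble(&f[0], H, 1);           /* left halo                 */
}

void step(double dt, double dx, double K, double rho, int ltid, int rtid)
{
    const double a1 = 9.0/8.0, a2 = -1.0/24.0; /* 4th-order staggered weights */
    int i;

    exchange_halo(v, ltid, rtid);
    for (i = H; i < N + H; i++)                /* p_t = -K v_x */
        p[i] -= dt * K / dx * (a1*(v[i] - v[i-1]) + a2*(v[i+1] - v[i-2]));

    exchange_halo(p, ltid, rtid);
    for (i = H; i < N + H; i++)                /* v_t = -(1/rho) p_x */
        v[i] -= dt / (rho * dx) * (a1*(p[i+1] - p[i]) + a2*(p[i+2] - p[i-1]));
}
```

Note that PVM sends are buffered, so the symmetric send-then-receive pattern above does not deadlock.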
Lecture Notes in Computer Science, 2008
We use 2166 processors of the MareNostrum (IBM PowerPC 970) supercomputer to model seismic wave propagation in the inner core of the Earth following an earthquake. Simulations are performed based upon the spectral-element method, a high-degree finite-element technique with an exactly diagonal mass matrix. We use a mesh with 21 billion grid points (and therefore approximately 21 billion degrees of freedom, because a scalar unknown is used in most of the mesh). A total of 2.5 terabytes of memory is needed. Our implementation is purely based upon MPI. We optimize it using the ParaVer analysis tool in order to significantly improve load balancing and therefore overall performance. Cache misses are reduced by renumbering the mesh points.
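As a one-loop illustration of why the exactly diagonal mass matrix matters (a sketch under our own naming, not SPECFEM internals): applying M⁻¹ reduces to a pointwise multiply, so each explicit time step needs no linear solver and each MPI rank can update its own grid points independently.

```c
/* Sketch: with a diagonal mass matrix, the explicit update M a = f - K u
 * becomes a pointwise divide; mass_inv holds the precomputed 1/M_ii and
 * force holds the already-assembled right-hand side f - K u. */
void explicit_update(int n, double dt,
                     const double *mass_inv,
                     const double *force,
                     double *veloc, double *displ)
{
    for (int i = 0; i < n; i++) {
        double accel = force[i] * mass_inv[i]; /* M^{-1} is just a vector */
        veloc[i] += dt * accel;
        displ[i] += dt * veloc[i];
    }
}
```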
Large Scale Parallel Simulation and Visualization of 3D Seismic Wavefield Using the Earth Simulator
2004
Recent developments of the Earth Simulator, a high-performance parallel computer, have made it possible to realize realistic 3D simulations of seismic wave propagation on a regional scale, including higher frequencies. Paralleling this development, the deployment of dense networks of strong ground motion instruments in Japan (K-NET and KiK-net) has now made it possible to directly visualize regional seismic wave propagation.
International Conference For High Performance Computing, Networking, Storage and Analysis, 2008
SPECFEM3D_GLOBE is a spectral-element application enabling the simulation of global seismic wave propagation in 3D anelastic, anisotropic, rotating and self-gravitating Earth models at unprecedented resolution. A fundamental challenge in global seismology is to model the propagation of waves with periods between 1 and 2 seconds, the highest-frequency signals that can propagate clear across the Earth. These waves help reveal the 3D structure of the Earth's deep interior and can be compared to seismographic recordings. We broke the 2-second barrier using the 62K-processor Ranger system at TACC. Indeed, we broke the barrier using just half of Ranger, reaching a period of 1.84 seconds with a sustained 28.7 Tflops on 32K processors. We obtained similar results on the XT4 Franklin system at NERSC and the XT4 Kraken system at the University of Tennessee Knoxville, while a similar run on the 28K-processor Jaguar system at ORNL, which has more memory per processor, sustained 35.7 Tflops (a higher flop rate) with a shortest period of 1.94 seconds. For the final run we obtained access to the ORNL petaflop system, a new very large XT5 just coming online, and achieved a shortest period of 1.72 seconds and 161 Tflops using 149,784 cores.
Benchmark Study of a 3D Parallel Code for the Propagation of Large Subduction Earthquakes
2008
Benchmark studies were carried out on a recently optimized parallel 3D seismic wave propagation code that uses finite differences on a staggered grid with 2nd-order operators in time and 4th-order operators in space. Three dual-core supercomputer platforms were used to run the parallel program using MPI. Efficiencies of 0.91 and 0.48 were obtained with 1024 cores on HECToR (UK) and KanBalam (Mexico), respectively, and 0.66 with 8192 cores on HECToR. The 3D velocity field pattern from a simulation of the 1985 Mexico earthquake (which caused the loss of up to 30,000 lives and about 7 billion US dollars), in reasonable agreement with the available observations, shows coherent, well-developed surface waves propagating towards Mexico City.
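The paper's source is not shown here, so the following is a hypothetical MPI counterpart of the ghost-cell exchange such a staggered-grid code performs each time step, using MPI_Sendrecv over a 1D decomposition; neighbour ranks at the domain edges can simply be MPI_PROC_NULL.

```c
/* Sketch: exchange `halo` ghost values with the left/right neighbours of
 * a 1-D decomposition. MPI_Sendrecv pairs the send and receive, so the
 * exchange cannot deadlock regardless of message buffering. */
#include <mpi.h>

void exchange_ghosts(double *f, int n, int halo,
                     int left, int right, MPI_Comm comm)
{
    /* send my leftmost interior cells left, receive my right ghosts */
    MPI_Sendrecv(&f[halo],     halo, MPI_DOUBLE, left,  0,
                 &f[n + halo], halo, MPI_DOUBLE, right, 0,
                 comm, MPI_STATUS_IGNORE);
    /* send my rightmost interior cells right, receive my left ghosts */
    MPI_Sendrecv(&f[n],        halo, MPI_DOUBLE, right, 1,
                 &f[0],        halo, MPI_DOUBLE, left,  1,
                 comm, MPI_STATUS_IGNORE);
}
```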
Proceedings of the 2003 ACM/IEEE conference on Supercomputing - SC '03, 2003
We use 1944 processors of the Earth Simulator to model seismic wave propagation resulting from large earthquakes. Simulations are conducted based upon the spectral-element method, a high-degree finite-element technique with an exactly diagonal mass matrix. We use a very large mesh with 5.5 billion grid points (14.6 billion degrees of freedom). We include the full complexity of the Earth, i.e., a three-dimensional wave-speed and density structure, a 3-D crustal model, ellipticity as well as topography and bathymetry. A total of 2.5 terabytes of memory is needed. Our implementation is purely based upon MPI, with loop vectorization on each processor. We obtain an excellent vectorization ratio of 99.3%, and we reach a performance of 5 teraflops (30% of the peak performance) on 38% of the machine. The very high resolution of the mesh allows us to perform fully three-dimensional calculations at seismic periods as low as 5 seconds.
Scalable Earthquake Simulation on Petascale Supercomputers
2010
Petascale simulations are needed to understand the rupture and wave dynamics of the largest earthquakes at the shaking frequencies required to engineer safe structures (> 1 Hz). Toward this goal, we have developed a highly scalable parallel application (AWP-ODC) that has achieved “M8”: a full dynamical simulation of a magnitude-8 earthquake on the southern San Andreas fault up to 2 Hz. M8 was calculated using a uniform mesh of 436 billion 40-m³ cubes to represent the three-dimensional crustal structure of Southern California, in an 800 km by 400 km area that is home to over 20 million people. This production run, producing 360 s of wave propagation, sustained 220 Tflop/s for 24 hours on NCCS Jaguar using 223,074 cores. As the largest-ever earthquake simulation, M8 opens new territory for earthquake science and engineering: the physics-based modeling of the largest seismic hazards, with the goal of reducing their potential for loss of life and property.
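The quoted mesh size can be sanity-checked from the stated numbers; the vertical extent is not given in the abstract, so the depth computed below is inferred, not quoted.

```c
/* Sanity check of the quoted M8 mesh dimensions (assuming uniform 40 m
 * spacing over the stated 800 km x 400 km area; the depth extent is
 * inferred from the quoted cell count, not stated in the abstract). */
#include <stdio.h>

int main(void)
{
    long long nx = 800000 / 40;          /* 20,000 cells along strike   */
    long long ny = 400000 / 40;          /* 10,000 cells across         */
    long long total = 436000000000LL;    /* 436 billion cells (quoted)  */
    long long nz = total / (nx * ny);    /* implied vertical cell count */
    printf("nx=%lld ny=%lld nz=%lld (~%.0f km depth)\n",
           nx, ny, nz, nz * 40 / 1000.0);
    return 0;
}
```

This works out to roughly 2180 cells vertically, i.e. about 87 km of modeled depth, which is consistent with a crustal-scale volume.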
Parallel visualization of seismic wave propagation
Visual Geosciences, 2008
Today, parallel visualization of massive datasets from observation and numerical simulation of seismic waves is one of the major goals of the geoscience community. A majority of these datasets are time-varying volume data (TVVD), also known as 4D field data. The difficulty of visualizing them on a distributed parallel system lies mainly in designing algorithms for distributed preprocessing of the raw datasets, implementing hierarchical point-to-point or collective communication based on the distributed data allocation, and synchronizing the volume rendering. In this work we present viable solutions for the preprocessing of raw datasets, together with novel algorithms for parallel rendering and a display matrix. Our main objective is the parallel visualization of results coming from full 4D seismic wave propagation simulations.
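As a concrete reference point for what distributed volume rendering involves (a generic sort-last sketch, not the paper's own algorithm): each node renders its data block to a partial RGBA image, and the partials are blended in visibility order with the "over" operator during compositing.

```c
/* Sketch of the compositing core of sort-last parallel volume rendering:
 * dst accumulates the image from blocks nearer the viewer; src is the
 * partial image of the next block farther away. Colours are assumed
 * premultiplied by alpha (front-to-back "over" operator). */
typedef struct { float r, g, b, a; } Rgba;

void composite_over(Rgba *dst, const Rgba *src, int npixels)
{
    for (int i = 0; i < npixels; i++) {
        float t = 1.0f - dst[i].a;      /* remaining transparency */
        dst[i].r += t * src[i].r;
        dst[i].g += t * src[i].g;
        dst[i].b += t * src[i].b;
        dst[i].a += t * src[i].a;
    }
}
```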
A Parallel Visualization Pipeline for Terascale Earthquake Simulations
Proceedings of the ACM/IEEE SC2004 Conference, 2004
This paper presents a parallel visualization pipeline implemented at the Pittsburgh Supercomputing Center (PSC) for studying the largest earthquake simulation ever performed. The simulation employs 100 million hexahedral cells to model 3D seismic wave propagation of the 1994 Northridge earthquake. The time-varying dataset produced by the simulation requires terabytes of storage space. Our solution for visualizing such terascale simulations is based on a parallel adaptive rendering algorithm coupled with a new parallel I/O strategy that effectively reduces interframe delay by dedicating some processors to I/O and preprocessing tasks. In addition, a 2D vector field visualization method and a 3D enhancement technique are incorporated into the parallel visualization framework to help scientists better understand the wave propagation both on and under the ground surface. Our test results on the HP/Compaq AlphaServer operated at the PSC show that we can completely remove the I/O bottlenecks commonly present in time-varying data visualization. The high-performance visualization solution we provide allows scientists to explore their data in the temporal, spatial, and variable domains at high resolution. This high-resolution explorability, likely not yet available to most computational science groups, will help lead to many new insights.
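A minimal sketch of the general idea of dedicating some processors to I/O and preprocessing (the exact scheme and the rank split below are our assumptions, not the paper's code): split MPI_COMM_WORLD into an I/O group that prefetches the next time step and a render group that draws the current one, so disk reads overlap with rendering.

```c
/* Sketch: partition ranks into I/O and render groups with MPI_Comm_split
 * so time-step prefetching overlaps with rendering. */
#include <mpi.h>

#define N_IO_RANKS 4   /* illustrative choice, not from the paper */

int main(int argc, char **argv)
{
    int rank;
    MPI_Comm role_comm;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    int is_io = rank < N_IO_RANKS;           /* first ranks handle I/O */
    MPI_Comm_split(MPI_COMM_WORLD, is_io, rank, &role_comm);

    if (is_io) {
        /* read and preprocess time step t+1 while renderers draw step t,
         * then hand the data over (e.g. point-to-point sends to renderers) */
    } else {
        /* receive the preprocessed step, render, and composite */
    }

    MPI_Comm_free(&role_comm);
    MPI_Finalize();
    return 0;
}
```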