Salman Habib - Profile on Academia.edu

Papers by Salman Habib

Research paper thumbnail of Improving Data Mobility & Management for International Cosmology: Summary Report of the CrossConnects 2015 Workshop

Research paper thumbnail of A Second-Order Stochastic Leap-Frog Algorithm for Langevin Simulation

Conference paper (venue not listed), Aug 1, 2000

Langevin simulation provides an effective way to study collisional effects in beams by reducing the six-dimensional Fokker-Planck equation to a group of stochastic ordinary differential equations. These resulting equations usually have multiplicative noise, since the diffusion coefficients in these equations are functions of position and time. Conventional algorithms, e.g. Euler and Heun, give only first-order convergence of moments in a finite time interval. In this paper, a stochastic leap-frog algorithm for the numerical integration of Langevin stochastic differential equations with multiplicative noise is proposed and tested. The algorithm has second-order convergence of moments in a finite time interval and requires the sampling of only one uniformly distributed random variable per time step. As an example, we apply the new algorithm to the study of a mechanical oscillator with multiplicative noise.
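
A minimal illustrative sketch of a leap-frog-style Langevin step for a one-dimensional oscillator with multiplicative noise, using a single uniformly distributed random draw per step whose first two moments match those of the Gaussian increment; this is a generic kick-drift-kick scheme under those assumptions, not a reproduction of the paper's exact update.

```python
import numpy as np

def uniform_kick(rng, dt):
    """Single uniform draw per step with mean 0 and variance dt,
    standing in for a Gaussian increment (illustrative assumption)."""
    return rng.uniform(-np.sqrt(3.0 * dt), np.sqrt(3.0 * dt))

def langevin_leapfrog(x, v, dt, nsteps, omega=1.0, gamma=0.1,
                      sigma=lambda x: 0.2 * np.abs(x), seed=0):
    """Kick-drift-kick update for dx = v dt, dv = (-omega^2 x - gamma v) dt + sigma(x) dW.
    A generic sketch; the published algorithm differs in detail."""
    rng = np.random.default_rng(seed)
    traj = np.empty((nsteps, 2))
    for n in range(nsteps):
        dW = uniform_kick(rng, dt)
        # half kick (deterministic force plus half the stochastic kick)
        v += 0.5 * ((-omega**2 * x - gamma * v) * dt + sigma(x) * dW)
        # full drift
        x += v * dt
        # second half kick, with sigma evaluated at the updated position
        v += 0.5 * ((-omega**2 * x - gamma * v) * dt + sigma(x) * dW)
        traj[n] = x, v
    return traj

if __name__ == "__main__":
    out = langevin_leapfrog(x=1.0, v=0.0, dt=0.01, nsteps=10_000)
    print("second moment of x:", np.mean(out[:, 0] ** 2))
```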

Research paper thumbnail of Beam halo studies using a 3-dimensional particle-core model

Proceedings of the 1999 Particle Accelerator Conference (Cat. No.99CH36366), Jan 20, 2003

In this paper we present a study of beam halo based on a three-dimensional particle-core model of an ellipsoidal bunched beam in a constant focusing channel. For an initially mismatched beam, three linear envelope modes (a high-frequency mode, a low-frequency mode, and a quadrupole mode) are identified. Stroboscopic plots are obtained for particle motion in the three modes. With a higher focusing strength ratio, a 1:2 transverse parametric resonance between the test particle and the core oscillation is observed for all three modes. The particle-high-mode resonance has the largest amplitude and presents potentially the most dangerous beam halo in machine design and operation. For the longitudinal dynamics of a test particle, a 1:2 resonance is observed only between the particle and the high-mode oscillation, which suggests that the particle-high-mode resonance will also be responsible for longitudinal beam halo formation.
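
As a rough illustration of the stroboscopic analysis described above (not the paper's model or parameters), the sketch below integrates a one-dimensional test particle in a constant focusing channel driven by a "breathing" uniform-density core and records its phase-space coordinates once per core oscillation period; the core force law, mismatch amplitude, and mode frequency are all placeholders.

```python
import numpy as np

def core_force(x, R, K=0.5):
    # Uniform-density core: linear force inside radius R, ~1/x outside.
    return K * x / R**2 if abs(x) <= R else K / x

def stroboscopic_samples(x0, v0, k0=1.0, mu=0.2, omega_m=2.0, K=0.5,
                         n_periods=2000, steps_per_period=256):
    """Integrate x'' = -k0^2 x + F_core(x, R(t)) with a breathing core
    R(t) = 1 + mu sin(omega_m t); sample (x, v) once per core period."""
    T = 2.0 * np.pi / omega_m
    dt = T / steps_per_period
    x, v, t = x0, v0, 0.0
    out = []
    for _ in range(n_periods):
        for _ in range(steps_per_period):
            R = 1.0 + mu * np.sin(omega_m * t)
            # simple kick-drift-kick step
            v += 0.5 * dt * (-k0**2 * x + core_force(x, R, K))
            x += dt * v
            t += dt
            R = 1.0 + mu * np.sin(omega_m * t)
            v += 0.5 * dt * (-k0**2 * x + core_force(x, R, K))
        out.append((x, v))
    return np.array(out)

if __name__ == "__main__":
    pts = stroboscopic_samples(x0=1.5, v0=0.0)
    print(pts[:5])
```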

Research paper thumbnail of Statistical mechanics of kinks in 1+1 dimensions: Numerical simulations and double-Gaussian approximation

Physical review, Dec 1, 1993

We investigate the thermal equilibrium properties of kinks in a classical Φ⁴ field theory in 1+1 dimensions. From large-scale Langevin simulations we identify the temperature below which a dilute-gas description of kinks is valid. The standard dilute-gas/WKB description is shown to be remarkably accurate below this temperature. At higher, "intermediate" temperatures, where kinks still exist, this description breaks down. By introducing a double-Gaussian variational ansatz for the eigenfunctions of the statistical transfer operator for the system, we are able to study this region analytically. In particular, our predictions for the number of kinks and the correlation length are in agreement with the simulations. The double-Gaussian prediction for the characteristic temperature at which the kink description ultimately breaks down is also in accord with the simulations. We also analytically calculate the internal energy and demonstrate that the peak in the specific heat near the kink characteristic temperature is indeed due to kinks. In the neighborhood of this temperature there appears to be an intricate energy-sharing mechanism operating between nonlinear phonons and kinks.
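
A toy sketch of the simulation side of such a study, assuming an overdamped Langevin update and a crude kink counter (zero crossings of the field); the published work evolves the full second-order Langevin dynamics and uses more careful kink identification, so this is illustrative only.

```python
import numpy as np

def evolve_phi4(n=512, dx=1.0, lam=1.0, T=0.2, dt=0.01, nsteps=20_000, seed=1):
    """Overdamped Langevin dynamics for a 1D phi^4 field with
    V(phi) = (lam/4)(phi^2 - 1)^2 on a periodic lattice."""
    rng = np.random.default_rng(seed)
    phi = np.ones(n)          # start in one vacuum, phi = +1
    for _ in range(nsteps):
        lap = (np.roll(phi, 1) - 2.0 * phi + np.roll(phi, -1)) / dx**2
        force = lap - lam * phi * (phi**2 - 1.0)
        phi += dt * force + np.sqrt(2.0 * T * dt) * rng.standard_normal(n)
    return phi

def count_kinks(phi):
    """Count kinks plus antikinks as sign changes of the field (a crude proxy)."""
    signs = np.sign(phi)
    return int(np.sum(signs != np.roll(signs, 1)))

if __name__ == "__main__":
    phi = evolve_phi4()
    print("kink + antikink count:", count_kinks(phi))
```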

Research paper thumbnail of Numerical Methods for Stochastic Partial Differential Equations

This is the final report of a Laboratory Directed Research and Development (LDRD) project at the Los Alamos National Laboratory (LANL). The objectives of this proposal were (1) the development of methods for understanding and controlling spacetime discretization errors in nonlinear stochastic partial differential equations, and (2) the development of new and improved practical numerical methods for the solution of these equations. We have succeeded in establishing two methods for error control: the functional Fokker-Planck equation for calculating the time discretization error and the transfer integral method for calculating the spatial discretization error. In addition, we have developed a new second-order stochastic algorithm for multiplicative noise, applicable to the case of colored noises, which requires only a single random sequence generation per time step. All of these results have been verified via high-resolution numerical simulations and have been successfully applied to physical test cases. We have also made substantial progress on a longstanding problem in the dynamics of unstable fluid interfaces in porous media. This work has led to highly accurate quasi-analytic solutions of idealized versions of this problem. These may be of use in benchmarking numerical solutions of the full stochastic PDEs that govern real-world problems.

Research paper thumbnail of Self-Consistent Langevin Simulation of Coulomb Collisions in Charged-Particle Beams

In many plasma physics and charged-particle beam dynamics problems, Coulomb collisions are modeled by a Fokker-Planck equation. In order to incorporate these collisions, we present a three-dimensional parallel Langevin simulation method using a Particle-In-Cell (PIC) approach implemented on high-performance parallel computers. We perform, for the first time, a fully self-consistent simulation, in which the friction and diffusion coefficients are computed from first principles. We employ a two-dimensional domain decomposition approach within a message-passing programming paradigm along with dynamic load balancing. Object-oriented programming is used to encapsulate details of the communication syntax as well as to enhance reusability and extensibility. Performance tests on the SGI Origin 2000 and the Cray T3E-900 have demonstrated good scalability. Work is in progress to apply our technique to intrabeam scattering in accelerators.
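
A schematic, serial sketch of the self-consistency idea: estimate drag and diffusion coefficients locally from the particle distribution itself (here via simple gridded velocity moments) before applying the Langevin kick. The binning, coefficient formulas, and constants below are placeholders; the actual method computes the coefficients from first principles and runs in parallel with a two-dimensional domain decomposition.

```python
import numpy as np

def local_moments(x, v, nbins=32):
    """Bin particles in x and return, per particle, the mean velocity and
    velocity variance of its cell (a crude stand-in for first-principles
    friction/diffusion coefficients)."""
    edges = np.linspace(x.min(), x.max() + 1e-12, nbins + 1)
    idx = np.clip(np.digitize(x, edges) - 1, 0, nbins - 1)
    mean_v = np.zeros(nbins)
    var_v = np.zeros(nbins)
    for c in range(nbins):
        sel = idx == c
        if sel.any():
            mean_v[c] = v[sel].mean()
            var_v[c] = v[sel].var()
    return mean_v[idx], var_v[idx]

def langevin_collision_kick(x, v, dt=0.01, nu=0.5, seed=2):
    """One collisional kick: drag toward the local mean velocity plus a
    diffusive kick scaled by the local velocity spread."""
    rng = np.random.default_rng(seed)
    vbar, vvar = local_moments(x, v)
    drag = -nu * (v - vbar) * dt
    diff = np.sqrt(2.0 * nu * vvar * dt) * rng.standard_normal(v.size)
    return v + drag + diff

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.uniform(-1.0, 1.0, 10_000)
    v = rng.normal(0.0, 1.0, 10_000)
    v = langevin_collision_kick(x, v)
    print("velocity rms after kick:", v.std())
```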

Research paper thumbnail of US DOE Grand Challenge in Computational Accelerator Physics

Particle accelerators are playing an increasingly important role in basic and applied science, and are enabling new accelerator-driven technologies. But the design of next-generation accelerators, such as linear colliders and high-intensity linacs, will require a major advance in numerical modeling capability due to extremely stringent beam control and beam loss requirements, and the presence of highly complex three-dimensional accelerator components. To address this situation, the U.S. Department of Energy has approved a "Grand Challenge" in Computational Accelerator Physics, whose primary goal is to develop a parallel modeling capability that will enable high-performance, large-scale simulations for the design, optimization, and numerical validation of next-generation accelerators. In this paper we report on the status of the Grand Challenge.

Research paper thumbnail of An object-oriented parallel particle-in-cell code for beam dynamics simulation in linear accelerators

We present an object-oriented three-dimensional parallel particle-in-cell (PIC) code for simulation of beam dynamics in linear accelerators (linacs). An important feature of this code is the use of split-operator methods to integrate single-particle magnetic optics techniques with parallel PIC techniques. By choosing a splitting scheme that separates the self-fields from the complicated externally applied fields, we are able to utilize a large step size and still retain high accuracy. The method employed is symplectic and can be generalized to arbitrarily high order accuracy if desired. A two-dimensional parallel domain decomposition approach is employed within a message-passing programming paradigm along with a dynamic load balancing scheme. Performance tests on an SGI/Cray T3E-900 and an SGI Origin 2000 show good scalability of the object-oriented code. We present, as an example, a simulation of high-current beam transport in the accelerator production of tritium (APT) linac design.
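
A one-dimensional, serial sketch of the split-operator step described above: a half step of the external linear-optics map, a space-charge kick computed from a gridded charge deposit, then the second half of the external map. The linear map, the toy 1D field solve, and all constants are simplified placeholders rather than the production code's components.

```python
import numpy as np

def external_half_map(x, v, k0, dt):
    """Half-step of the external linear focusing map (phase-space rotation)."""
    theta = k0 * dt / 2.0
    c, s = np.cos(theta), np.sin(theta)
    return c * x + (s / k0) * v, -k0 * s * x + c * v

def space_charge_kick(x, v, dt, ngrid=128, strength=0.05):
    """Deposit charge on a 1D grid, get a field by integrating the density,
    and kick the velocities (a toy stand-in for the 3D Poisson solve)."""
    lo, hi = x.min(), x.max() + 1e-12
    rho, edges = np.histogram(x, bins=ngrid, range=(lo, hi))
    rho = rho.astype(float) / len(x)
    efield = np.cumsum(rho - rho.mean())      # 1D "Gauss law" integral
    centers = 0.5 * (edges[1:] + edges[:-1])
    e_at_particles = np.interp(x, centers, efield)
    return v + strength * e_at_particles * dt

def split_operator_step(x, v, k0=1.0, dt=0.1):
    x, v = external_half_map(x, v, k0, dt)
    v = space_charge_kick(x, v, dt)
    x, v = external_half_map(x, v, k0, dt)
    return x, v

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, v = rng.normal(0.0, 1.0, 20_000), rng.normal(0.0, 0.5, 20_000)
    for _ in range(100):
        x, v = split_operator_step(x, v)
    print("rms beam size:", x.std())
```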

Research paper thumbnail of Portability: A Necessary Approach for Future Scientific Software

arXiv (Cornell University), Mar 15, 2022

Today's world of scientific software for High Energy Physics (HEP) is powered by x86 code, while the future will be much more reliant on accelerators like GPUs and FPGAs. The portable parallelization strategies (PPS) project of the High Energy Physics Center for Computational Excellence (HEP/CCE) is investigating portability techniques that allow an algorithm to be coded once and executed on a variety of hardware products from many vendors, especially including accelerators. Without such solutions, the scientific success of our experiments and endeavors is in danger, as software development would become expert-driven and costly just to keep running on the available hardware infrastructure. We believe the best solution for the community would be an extension to the C++ standard with a very low entry bar for users, supporting all hardware forms and vendors. We are very far from that ideal, though. We argue that in the future, as a community, we need to request and work on portability solutions and strive to reach this ideal.
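
As a loose illustration of the "write the kernel once, pick the hardware at run time" goal, the sketch below selects an array backend (NumPy on CPU, CuPy on a GPU if one happens to be installed) and runs a single kernel implementation against either. This is only an analogy: the actual HEP/CCE portability work concerns C++ codes and accelerator toolchains, and the toy "hit selection" kernel here is invented.

```python
"""Write-once kernel, backend chosen at run time."""
import numpy as np

try:
    import cupy as cp                      # optional GPU backend
    xp = cp if cp.cuda.runtime.getDeviceCount() > 0 else np
except Exception:                          # cupy missing or no usable GPU
    xp = np

def kernel(energies, threshold=10.0):
    """A single implementation of a toy 'hit selection' kernel, written
    against the array API shared by both backends."""
    mask = energies > threshold
    return xp.sum(energies * mask), int(mask.sum())

if __name__ == "__main__":
    data = xp.asarray(np.random.default_rng(0).exponential(5.0, 1_000_000))
    total, nhits = kernel(data)
    print("backend:", xp.__name__, "selected energy:", float(total), "hits:", nhits)
```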

Research paper thumbnail of Applied Nonlinear Stochastic Dynamics

Eli Ben-Naim (CNLS), Sergey Burtsev (CNLS/T-7), Roberto Camassa (T-7), Shiyi Chen (CNLS), G. Cruz-Pacheco (UNAM), Charles Doering (U Michigan), Jinqiao Duan (Clemson), Alp Findikoglu (MST-11), Cyprian Foias (U Indiana), Ildar Gabitov (Landau Institute of Theoretical Physics, Moscow), Peter Gent (NCAR), Salman Habib (T-8), Akira Hasegawa (Osaka University), Kalvis Jansons (UCL, London) ...

Research paper thumbnail of High Energy Physics Forum for Computational Excellence: Working Group Reports (I. Applications Software II. Software Libraries and Tools III. Systems)

Computing plays an essential role in all aspects of high energy physics. As computational technology evolves rapidly in new directions, and data throughput and volume continue to follow a steep trend-line, it is important for the HEP community to develop an effective response to a series of expected challenges. The computing challenges require adopting new strategies in algorithms, software, and hardware at multiple levels in the HEP computational pyramid. A significant issue is the human element: the need for training a scientific and technical workforce that can make optimum use of state-of-the-art computational technologies and be ready to adapt as the landscape changes. In order to help shape the desired response, the HEP Forum for Computational Excellence (HEP-FCE) initiated a roadmap planning activity with two key overlapping drivers: 1) software effectiveness, and 2) infrastructure and expertise advancement. These drivers had been identified in a number of previous studies, including the 2013 HEP Topical Panel on Computing, the 2013 Snowmass Study, and the 2014 P5 report. The HEP-FCE formed three working groups, 1) Applications Software, 2) Software Libraries and Tools, and 3) Systems (including systems software), to provide an overview of the current status of HEP computing and to present findings and opportunities for the desired HEP computational roadmap. A choice was made to focus on offline computing in HEP experiments, even though there can be nontrivial connections between offline and online computing. This document begins with a summary of the main conclusions and directions contained in the three reports, as well as a statement of the cross-cutting themes that emerge from them. Because the scope of HEP computing is so wide, it was impossible to give every technical area its due in the necessarily finite space of the individual reports. By covering some computational activities in more detail than others, the aim has been to convey the key points that are independent of the individual research projects or science directions. The three main reports follow in order after the summary. The Applications Software Working Group undertook a survey of members of the HEP community to ensure a broad perspective in the report. Albeit not a complete sample of the HEP community, the respondents covered a range of experiments and projects. Several dozen applications were discussed in the responses. This mass of information helped to identify some of the current strengths and weaknesses of the HEP computing effort. A number of conclusions have emerged from the reports. These include assessments of the current software base, consolidation and management of software packages, sharing of libraries and tools, reactions to hardware evolution (including storage and networks), and possibilities of exploiting new computational resources. The important role of schools and training programs in increasing awareness of modern software practices and computational architectures was emphasized. A thread running across the reports relates to the difficulties in establishing rewarding career paths for HEP computational scientists. Given the scale of modern software development, it is important to recognize a significant community-level software commitment as a technical undertaking that is on par with major detector R&D.
Conclusions from the reports have ramifications for how computational activities are carried out across all of HEP. A subset of the conclusions have helped identify initial actionable items for HEP-FCE activities, with the goal of producing tangible results in finite time to benefit large fractions of the HEP community. These include applications of next-generation architectures, use of HPC resources for HEP experiments, data-intensive computing (virtualization and containers), and easy-to-use production-level wide area networking. A significant fraction of this work involves collaboration with DOE ASCR facilities and staff.

Research paper thumbnail of The Mira-Titan Universe. III. Emulation of the Halo Mass Function

The Astrophysical Journal, Sep 16, 2020

We construct an emulator for the halo mass function over group and cluster mass scales for a range of cosmologies, including the effects of dynamical dark energy and massive neutrinos. The emulator is based on the recently completed Mira-Titan Universe suite of cosmological N-body simulations. The main set of simulations spans 111 cosmological models with 2.1 Gpc boxes. We extract halo catalogs in the redshift range z = [0.0, 2.0] and for masses M_200c ≥ 10^13 M_⊙/h. The emulator covers an 8-dimensional hypercube spanned by {Ω_m h², Ω_b h², Ω_ν h², σ_8, h, n_s, w_0, w_a}; spatial flatness is assumed. We obtain smooth halo mass functions by fitting piecewise second-order polynomials to the halo catalogs and employ Gaussian process regression to construct the emulator while keeping track of the statistical noise in the input halo catalogs and uncertainties in the regression process. For redshifts z ≲ 1, the typical emulator precision is better than 2% for 10^13-10^14 M_⊙/h and < 10% for M ∼ 10^15 M_⊙/h. For comparison, fitting functions using the traditional universal form for the halo mass function can be biased at up to 30% at M ∼ 10^14 M_⊙/h for z = 0. Our emulator is publicly available at .
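
A minimal sketch of the emulation strategy, shrunk to a toy problem: tabulate a scalar mass-function summary on a small design over two cosmological parameters and interpolate it across parameter space with Gaussian process regression (scikit-learn here), including a white-noise term playing the role of the statistical noise in the input catalogs. The design, target function, and kernel choices are illustrative, not the Mira-Titan pipeline.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Toy "design": 40 models in a 2D slice (Omega_m, sigma_8) of the full space.
design = rng.uniform([0.25, 0.7], [0.40, 0.9], size=(40, 2))

def toy_ln_hmf(params):
    """Stand-in for ln n(>M) at a fixed mass and redshift; the real target
    would come from the simulation halo catalogs."""
    om, s8 = params[:, 0], params[:, 1]
    return -3.0 + 4.0 * (s8 - 0.8) + 2.0 * (om - 0.3) + 0.1 * rng.normal(size=len(om))

y = toy_ln_hmf(design)

# GP with an anisotropic RBF kernel plus a white-noise term representing
# the statistical noise in the tabulated mass functions.
gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=[0.1, 0.1])
                              + WhiteKernel(noise_level=0.01),
                              normalize_y=True)
gp.fit(design, y)

test = np.array([[0.31, 0.81]])
mean, std = gp.predict(test, return_std=True)
print(f"emulated ln n(>M) = {mean[0]:.3f} +/- {std[0]:.3f}")
```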

Research paper thumbnail of Fossil groups in 400D catalog

The Astrophysical Journal, 2008

... 2008), there may be a "fossil phase" in the life of some clusters, i.e. ... (1999) and called OLEG. However, in Diaz-Gimenez et al. (2008), on the basis of SDSS data, another bright galaxy was found inside half of the virial radius, making this group not a fossil group. ...

Research paper thumbnail of Modular Deep Learning Analysis of Galaxy-Scale Strong Lensing Images

arXiv (Cornell University), Nov 10, 2019

Strong gravitational lensing of astrophysical sources by foreground galaxies is a powerful cosmological tool. While such lens systems are relatively rare in the Universe, the number of detectable galaxy-scale strong lenses is expected to grow dramatically with next-generation optical surveys, numbering in the hundreds of thousands, out of tens of billions of candidate images. Automated and efficient approaches will be necessary in order to find and analyze these strong lens systems. To this end, we implement a novel, modular, end-to-end deep learning pipeline for denoising, deblending, searching, and modeling galaxy-galaxy strong lenses (GGSLs). To train and quantify the performance of our pipeline, we create a dataset of 1 million synthetic strong lensing images using state-of-the-art simulations for next-generation sky surveys. When these pretrained modules were used as a pipeline for inference, we found that the classification (searching for GGSLs) accuracy improved significantly, from 82% with the baseline to 90%, while the regression (modeling GGSLs) accuracy improved by 25% over the baseline.
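
A structural sketch of the modular pipeline idea (denoise, deblend, then classify), written as composable stages with a common array-in/array-out interface; the stage bodies below are trivial placeholders standing in for the trained deep-learning modules described above.

```python
import numpy as np
from typing import Callable, List

Stage = Callable[[np.ndarray], np.ndarray]

def denoise(img: np.ndarray) -> np.ndarray:
    # Placeholder: a 3x3 mean filter standing in for a trained denoiser.
    out = img.copy()
    out[1:-1, 1:-1] = sum(np.roll(np.roll(img, i, 0), j, 1)[1:-1, 1:-1]
                          for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    return out

def deblend(img: np.ndarray) -> np.ndarray:
    # Placeholder: keep only pixels above the image median.
    return np.where(img > np.median(img), img, 0.0)

def classify(img: np.ndarray) -> float:
    # Placeholder "lens score": fraction of bright pixels in an annulus.
    ny, nx = img.shape
    yy, xx = np.mgrid[:ny, :nx]
    r = np.hypot(yy - ny / 2, xx - nx / 2)
    ring = (r > nx / 6) & (r < nx / 3)
    return float((img[ring] > img.mean()).mean())

def run_pipeline(img: np.ndarray, stages: List[Stage]) -> float:
    for stage in stages:
        img = stage(img)
    return classify(img)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cutout = rng.normal(0.0, 1.0, (64, 64))
    print("toy lens score:", run_pipeline(cutout, [denoise, deblend]))
```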

Research paper thumbnail of The Dark Energy Spectroscopic Instrument (DESI)

arXiv (Cornell University), Jul 24, 2019

We present the status of the Dark Energy Spectroscopic Instrument (DESI) and its plans and opportunities for the coming decade. DESI construction and its initial five years of operations are an approved experiment of the U.S. Department of Energy, summarized here as context for the Astro2020 panel. Beyond 2025, DESI will require new funding to continue operations. We expect that DESI will remain one of the world's best facilities for wide-field spectroscopy throughout the decade. More about the DESI instrument and survey can be found at .

Research paper thumbnail of Why are we still using 3D masses for cluster cosmology?

Monthly Notices of the Royal Astronomical Society, Jun 17, 2022

The abundance of clusters of galaxies is highly sensitive to the late-time evolution of the matter distribution, since clusters form at the highest density peaks. However, the 3D cluster mass cannot be inferred without deprojecting the observations, introducing model-dependent biases and uncertainties due to the mismatch between the assumed and the true cluster density profile and the neglected matter along the sightline. Since projected aperture masses can be measured directly in simulations and observationally through weak lensing, we argue that they are better suited for cluster cosmology. Using the Mira-Titan suite of gravity-only simulations, we show that aperture masses correlate strongly with 3D halo masses, albeit with large intrinsic scatter due to the varying matter distribution along the sightline. Nonetheless, aperture masses can be measured ≈ 2-3 times more precisely from observations, since they do not require assumptions about the density profile and are only affected by the shape noise in the weak lensing measurements. We emulate the cosmology dependence of the aperture mass function directly with a Gaussian process. Comparing the cosmology sensitivity of the aperture mass function and the 3D halo mass function for a fixed survey solid angle and redshift interval, we find the aperture mass sensitivity is higher for Ω_m and w_a, similar for σ_8, n_s, and w_0, and slightly lower for h. With a carefully calibrated aperture mass function emulator, cluster cosmology analyses can use cluster aperture masses directly, reducing the sensitivity to model-dependent mass calibration biases and uncertainties.
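
A small sketch of the aperture-mass measurement on the simulation side: project a halo's particles along a chosen line of sight and sum the mass inside a projected radius. The toy particle distribution and aperture radius are made up, and the uncorrelated matter along the sightline (the source of the intrinsic scatter discussed above) is ignored here.

```python
import numpy as np

def aperture_mass(pos, mass, center, r_ap=1.0, los_axis=2):
    """Projected mass inside radius r_ap around `center`, projecting out
    `los_axis`. Ignores structure along the sightline, which in practice
    contributes to the intrinsic scatter discussed above."""
    axes = [a for a in range(3) if a != los_axis]
    dx = pos[:, axes] - center[axes]
    r2d = np.hypot(dx[:, 0], dx[:, 1])
    return mass[r2d < r_ap].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Toy "halo": isotropic Gaussian blob of equal-mass particles.
    pos = rng.normal(0.0, 0.5, size=(100_000, 3))
    mass = np.full(len(pos), 1e9)                  # arbitrary units
    m_ap = aperture_mass(pos, mass, center=np.zeros(3), r_ap=1.0)
    print(f"aperture mass within r_ap: {m_ap:.3e}")
```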

Research paper thumbnail of Machine learning synthetic spectra for probabilistic redshift estimation: SYTH-Z

Monthly Notices of the Royal Astronomical Society, Jun 30, 2022

Photometric redshift estimation algorithms are often based on representative data from observational campaigns. Data-driven methods of this type are subject to a number of potential deficiencies, such as sample bias and incompleteness. Motivated by these considerations, we propose using physically motivated synthetic spectral energy distributions in redshift estimation. In addition, the synthetic data would have to span a domain in colour-redshift space concordant with that of the targeted observational surveys. With a matched distribution and realistically modelled synthetic data in hand, a suitable regression algorithm can be appropriately trained; we use a mixture density network for this purpose. We also perform a zero-point re-calibration to reduce the systematic differences between noise-free synthetic data and the (unavoidably) noisy observational data sets. This new redshift estimation framework, SYTH-Z, demonstrates superior accuracy over a wide range of redshifts compared to baseline models trained on observational data alone. Approaches using realistic synthetic data sets can therefore greatly mitigate the reliance on expensive spectroscopic follow-up for the next generation of photometric surveys.
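
Of the steps above, the zero-point re-calibration lends itself to a compact sketch: estimate per-band median offsets between matched noise-free synthetic and observed magnitudes and shift the synthetic photometry onto the observed system. The band count, offsets, and noise level below are invented for illustration; the mixture density network itself is not shown.

```python
import numpy as np

def zero_point_offsets(synth_mags, obs_mags):
    """Per-band median offset between synthetic and observed magnitudes for
    a matched sample; a simple version of the re-calibration step."""
    return np.median(obs_mags - synth_mags, axis=0)

def recalibrate(synth_mags, offsets):
    """Shift the noise-free synthetic photometry onto the observed system."""
    return synth_mags + offsets

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_obj, n_bands = 5_000, 6                    # e.g. a ugrizy-like setup
    truth = rng.uniform(18.0, 25.0, (n_obj, n_bands))
    synth = truth.copy()                         # noise-free model magnitudes
    obs = truth + rng.normal(0.0, 0.05, truth.shape) + np.linspace(0.02, -0.03, n_bands)
    offsets = zero_point_offsets(synth, obs)
    synth_cal = recalibrate(synth, offsets)
    print("recovered zero-point offsets per band:", np.round(offsets, 3))
    print("max residual offset after recalibration:",
          np.abs(np.median(obs - synth_cal, axis=0)).max())
```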

Research paper thumbnail of Parallel DTFE Surface Density Field Reconstruction

We improve the interpolation accuracy and efficiency of the Delaunay tessellation field estimator (DTFE) for surface density field reconstruction by proposing an algorithm that takes advantage of the adaptive triangular mesh for line-of-sight integration. The costly computation of an intermediate 3D grid is completely avoided by our method, and only optimally chosen interpolation points are computed; thus, the overall computational cost is significantly reduced. The algorithm is implemented as a parallel shared-memory kernel for large-scale grid-rendered field reconstructions in our distributed-memory framework designed for N-body gravitational lensing simulations in large volumes. We also introduce a load balancing scheme to optimize the efficiency of processing a large number of field reconstructions. Our results show our kernel outperforms existing software packages for volume-weighted density field reconstruction, achieving ∼10× speedup, and our load balancing algorithm gains an additional ∼3.6× speedup at scales with ∼16k processes.
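
A compact, serial 2D sketch of the underlying DTFE estimate that the parallel kernel builds on: each particle's density is (D+1) times its mass divided by the total volume (area in 2D) of its contiguous Delaunay simplices, and the field is interpolated linearly inside each simplex (SciPy here). The paper's line-of-sight integration, shared-memory parallelism, and load balancing are omitted.

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.interpolate import LinearNDInterpolator

def dtfe_density_2d(points, masses):
    """Per-point DTFE density: (D+1)*m_i / (total area of incident triangles)."""
    tri = Delaunay(points)
    p = points[tri.simplices]                 # (ntri, 3, 2) triangle vertices
    # Triangle areas from the 2D cross product of two edge vectors.
    area = 0.5 * np.abs((p[:, 1, 0] - p[:, 0, 0]) * (p[:, 2, 1] - p[:, 0, 1])
                        - (p[:, 1, 1] - p[:, 0, 1]) * (p[:, 2, 0] - p[:, 0, 0]))
    contiguous = np.zeros(len(points))
    for simplex, a in zip(tri.simplices, area):
        contiguous[simplex] += a
    density = (2 + 1) * masses / contiguous   # D + 1 = 3 in two dimensions
    return tri, density

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 1.0, size=(2_000, 2))
    m = np.ones(len(pts))
    tri, rho = dtfe_density_2d(pts, m)
    # Linear interpolation of the density inside each Delaunay triangle.
    interp = LinearNDInterpolator(tri, rho)
    grid = np.mgrid[0.05:0.95:64j, 0.05:0.95:64j].reshape(2, -1).T
    field = interp(grid)
    print("mean reconstructed density:", np.nanmean(field))
```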

Research paper thumbnail of The Completed SDSS-IV Extended Baryon Oscillation Spectroscopic Survey: N-body Mock Challenge for Galaxy Clustering Measurements

Monthly Notices of the Royal Astronomical Society, Dec 30, 2020
