Alexandru Cioaca - Academia.edu
Papers by Alexandru Cioaca
Journal of Computational Physics, Oct 1, 2014
This paper develops a computational framework for optimizing the parameters of data assimilation systems in order to improve their performance. The approach formulates a continuous meta-optimization problem for parameters; the meta-optimization is constrained by the original data assimilation problem. The numerical solution process employs adjoint models and iterative solvers. The proposed framework is applied to optimize observation values, data weighting coefficients, and the location of sensors for a test problem. The ability to optimize a distributed measurement network is crucial for cutting down operating costs and detecting malfunctions.
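The nested structure described in this abstract can be made concrete with a small sketch. Below is a minimal toy illustration, not the paper's system: the inner problem is a 3D-Var-like analysis with a scalar observation weight, the outer (meta) problem tunes that weight, and, to keep the example self-contained, the outer criterion scores the analysis against a synthetic truth where the paper would use a forecast-error metric with adjoint-based gradients. All names and dimensions are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5
x_true = rng.normal(size=n)                # synthetic "true" state
x_b = x_true + 0.5 * rng.normal(size=n)    # background (prior) estimate
y = x_true + 0.1 * rng.normal(size=n)      # observations (H = identity here)

def inner_analysis(w):
    """Inner problem: a 3D-Var-like analysis for a given observation weight w,
    minimizing ||x - x_b||^2 + w * ||x - y||^2 (closed-form minimizer)."""
    return (x_b + w * y) / (1.0 + w)

def outer_cost(theta):
    """Outer (meta) cost: quality of the analysis as a function of the
    log-weight theta; the log parameterization keeps w positive."""
    w = np.exp(theta[0])
    x_a = inner_analysis(w)
    return float(np.sum((x_a - x_true) ** 2))

res = minimize(outer_cost, x0=[0.0], method="Nelder-Mead")
print("optimized observation weight:", np.exp(res.x[0]))
```

In the paper's setting the inner problem is a full 4D-Var minimization and the outer gradient is obtained through adjoint models rather than a derivative-free search; the nesting of the two optimizations is the point being illustrated.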
Computational Geosciences, Sep 13, 2013
This paper presents a practical computational approach to quantify the effect of individual observations in estimating the state of a system. Such an analysis can be used for pruning redundant measurements, and for designing future sensor networks. The mathematical approach is based on computing the sensitivity of the reanalysis (unconstrained optimization solution) with respect to the data. The computational cost is dominated by the solution of a linear system whose matrix, the Hessian of the cost function, is available only in operator form. The right-hand side is the gradient of a scalar cost function that quantifies the forecast error of the numerical model. The use of adjoint models to obtain the necessary first- and second-order derivatives is discussed. We study various strategies to accelerate the computation, including matrix-free iterative solvers, preconditioners, and an in-house multigrid solver. Experiments are conducted on both a small shallow-water equations model and a large-scale numerical weather prediction model, in order to illustrate the capabilities of the new methodology.
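The computational kernel described here is a single linear solve against an operator-only Hessian. A minimal matrix-free sketch using SciPy follows; the matrix and sizes are illustrative stand-ins, and in the paper the Hessian-vector product would come from tangent-linear and second-order adjoint model runs rather than an explicit matrix.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
n = 200
A = rng.normal(size=(n, n))
H_mat = A.T @ A + n * np.eye(n)   # SPD stand-in for the 4D-Var Hessian

def hessvec(v):
    # In the real system this product is computed by tangent-linear and
    # second-order adjoint model runs; the matrix itself is never formed.
    return H_mat @ v

H_op = LinearOperator((n, n), matvec=hessvec, dtype=np.float64)
g = rng.normal(size=n)            # gradient of the forecast-error functional

z, info = cg(H_op, g)             # matrix-free sensitivity solve  H z = g
assert info == 0
print("residual norm:", np.linalg.norm(hessvec(z) - g))
```

The solution z is the sensitivity field from which per-observation impacts are read off; preconditioning and multigrid, as studied in the paper, accelerate exactly this solve.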
Procedia Computer Science, 2013
Data assimilation is an important dynamic data-driven application (DDDAS), in which measurements of the real system are used to constrain simulation results. This paper describes a methodology for dynamically configuring sensor networks in data assimilation systems based on numerical models of time-evolving differential equations. The proposed methodology uses the dominant model singular vectors, which reveal the directions of maximal error growth. New sensors are dynamically placed so as to minimize an estimation error energy norm. A shallow water test problem is used to illustrate our approach.
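A sketch of the singular-vector computation and a simple placement rule, under toy assumptions: a random matrix stands in for the tangent-linear model and its transpose for the adjoint, and the absolute magnitude of the leading right singular vector serves as the placement score (the paper's energy-norm criterion is more elaborate).

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, svds

rng = np.random.default_rng(2)
n = 100
M_mat = rng.normal(size=(n, n)) / np.sqrt(n)   # stand-in tangent-linear model

M_op = LinearOperator((n, n),
                      matvec=lambda v: M_mat @ v,     # tangent-linear run
                      rmatvec=lambda v: M_mat.T @ v,  # adjoint run
                      dtype=np.float64)

u, s, vt = svds(M_op, k=3)        # dominant singular triplets of M
lead = vt[np.argmax(s)]           # right singular vector: initial-time pattern

# Greedy placement: grid points where the growing perturbation is largest.
new_sensor_sites = np.argsort(np.abs(lead))[-5:]
print("candidate sensor locations:", sorted(new_sensor_sites.tolist()))
```

Only matrix-vector products with the model and its adjoint are required, which is what makes the approach feasible when the propagator is available solely as code.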
arXiv (Cornell University), Jul 18, 2013
We present an efficient computational framework to quantify the impact of individual observations in four-dimensional variational data assimilation. The proposed methodology uses first- and second-order adjoint sensitivity analysis, together with matrix-free algorithms, to obtain low-rank approximations of the observation impact matrix. We illustrate the use of this methodology in applications such as data pruning and the identification of faulty sensors for a two-dimensional shallow water test system.
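One standard way to build such a low-rank approximation while touching the operator only through products is a randomized projection method; a minimal sketch follows (the paper's exact algorithm may differ, and the explicit matrix below exists only so the toy is self-checking).

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
B = rng.normal(size=(n, 10))
impact_mat = B @ B.T              # hidden low-rank symmetric "impact" matrix

def apply_impact(V):
    # Stand-in for the operator: in the real setting each column product
    # requires adjoint model runs, and the full matrix is never formed.
    return impact_mat @ V

k, p = 10, 5                              # target rank and oversampling
Omega = rng.normal(size=(n, k + p))       # random probing matrix
Q, _ = np.linalg.qr(apply_impact(Omega))  # orthonormal approximate range basis
C = Q.T @ apply_impact(Q)                 # small projected matrix
w, U = np.linalg.eigh(C)                  # its eigendecomposition
approx = (Q @ U) @ np.diag(w) @ (Q @ U).T
print("relative error:",
      np.linalg.norm(approx - impact_mat) / np.linalg.norm(impact_mat))
```

With k + p operator applications one obtains a rank-(k + p) factorization whose leading eigenpairs summarize the impact of the observations.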
Social Science Research Network, Apr 28, 2015
Most economic activities are affected by environmental factors. Business intelligence needs to take into account both historical and forecast data yielded by the environmental sciences. The potential benefits are considerable, but realizing them requires a multidisciplinary approach: high-quality information can only be produced through the joint expertise of heterogeneous teams of scientists and engineers, while scrutinizing its business-specific insights is the responsibility of economists and managers. This paper introduces readers to a three-stage framework for efficiently using environmental data in research and operations. The framework integrates a cutting-edge computational technique at each stage, and the compatibility of these techniques guarantees a seamless data flow. The goal of the paper is to promote multidisciplinarity, interdisciplinarity, and transdisciplinarity by familiarizing researchers and professionals from areas other than computer science with a set of less-known yet powerful instruments that can readily serve data-driven applications and decision support systems.
Inverse problems are of utmost importance in many fields of science and engineering. In the variational approach, inverse problems are formulated as PDE-constrained optimization problems, where the optimal estimate of the uncertain parameters is the minimizer of a certain cost functional subject to the constraints posed by the model equations. The numerical solution of such optimization problems requires the computation of derivatives of the model output with respect to model parameters. The first-order derivatives of a cost functional (defined on the model output) with respect to a large number of model parameters can be calculated efficiently through first-order adjoint sensitivity analysis. Second-order adjoint models give second-derivative information in the form of matrix-vector products between the Hessian of the cost functional and user-defined vectors. Traditionally, the construction of second-order derivatives for large-scale models has been considered too costly. Consequent...
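The central object in this abstract is the Hessian-vector product. A second-order adjoint model delivers it exactly at the cost of a few model runs; the sketch below shows the same quantity via a common cheap surrogate, a directional finite difference of first-order gradients, on a toy cost. This is an illustration of what the product computes, not the paper's second-order adjoint itself.

```python
import numpy as np

def grad_J(x):
    # Gradient of the toy cost J(x) = 0.25 * (x . x)^2; in 4D-Var this
    # gradient comes from one forward and one first-order adjoint model run.
    return np.dot(x, x) * x

def hessvec_fd(x, v, eps=1e-6):
    """Approximate the Hessian-vector product H(x) v by a directional
    finite difference of the first-order gradient."""
    return (grad_J(x + eps * v) - grad_J(x)) / eps

x = np.array([1.0, 2.0, 3.0])
v = np.array([1.0, 0.0, -1.0])

# Analytic Hessian of the toy cost: H = 2 x x^T + (x . x) I
H = 2.0 * np.outer(x, x) + np.dot(x, x) * np.eye(3)
print(hessvec_fd(x, v))   # finite-difference surrogate
print(H @ v)              # exact product, for comparison
```

The finite-difference surrogate costs one extra gradient per product but inherits its truncation error; the second-order adjoint advocated in the work gives the product to machine precision, which matters inside Krylov solvers.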
A deep scientific understanding of complex physical systems, such as the atmosphere, can be achieved neither by direct measurements nor by numerical simulations alone. Data assimilation is a rigorous procedure for fusing information from a priori knowledge of the system state, the physical laws governing the evolution of the system, and real measurements, all with associated error statistics. Data assimilation produces best (a posteriori) estimates of model states and parameter values, and results in considerably improved computer simulations. The acquisition and use of observations in data assimilation raises several important scientific questions related to optimal sensor network design, quantification of data impact, pruning of redundant data, and identification of the most beneficial additional observations. These questions originate in operational data assimilation practice and have recently started to attract considerable interest. This dissertation advances the state of knowledge in four-dimensional variational (4D-Var) data assimilation by developing, implementing, and validating a novel computational framework for estimating observation impact and for optimizing sensor networks. The framework builds on the powerful methodologies of second-order adjoint modeling and the 4D-Var sensitivity equations. Efficient computational approaches for quantifying observation impact include matrix-free linear algebra algorithms and low-rank approximations of the sensitivities to observations. The sensor network configuration problem is formulated as a meta-optimization problem: best values for parameters such as sensor location are obtained by optimizing a performance criterion, subject to the constraint posed by the 4D-Var optimization. Tractable computational solutions to this "optimization-constrained" optimization problem are provided. The results of this work can be directly applied to the deployment of intelligent sensors and adaptive observations, as well as to reducing the operating costs of measuring networks while preserving their ability to capture the essential features of the system under consideration.
2015 7th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), 2015
Dynamic data-driven applications aim to reconcile different sources of information about the systems under scrutiny. Such problems arise ubiquitously in the geosciences, in applications such as numerical weather prediction, climate modeling, and green energy harvesting. One of the main challenges in solving data-driven applications comes from the associated large computational cost. This article presents an adaptive computational framework for fusing numerical model predictions with real observations, in order to generate discrete initial conditions that are optimal in a certain sense. The proposed framework incorporates four-dimensional variational data assimilation, observation impact via sensitivity analysis, and adaptive measurement strategies.
2015 7th International Conference on Electronics, Computers and Artificial Intelligence (ECAI), 2015
Nonlinear numerical control is rightfully considered one of the most difficult engineering problems to tackle, in terms of both practical implementation and time-to-solution. It requires time-stepping numerical models for simulating the trajectory of the system, adjoint models for sensitivity analysis, and matrix-free iterative solvers to produce the solution field of the inverse problem. The field of high-performance computing (HPC) provides computational tools and practices that enable the deployment of numerical applications on computational clusters, supercomputers, and cloud computing facilities. This article presents a set of practical methods for accelerating and parallelizing the computation of sensitivity analysis in a large-scale 4D-Var data assimilation setting.
Proceedings of the First International Workshop on High Performance Computing, Networking and Analytics for the Power Grid, 2011
We present an approach to estimate adjoint sensitivities of economic metrics of relevance in the power grid with respect to physical weather variables, using numerical weather prediction models. We demonstrate that this capability can significantly enhance planning and operations. We illustrate the method with a large-scale computational study in which we compute sensitivities of the regional generation cost in the state of Illinois with respect to wind speed and temperature fields inside and outside the state.
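In essence, the computed quantity is one adjoint-evaluated chain rule. Schematically, with illustrative notation not taken from the paper:

```latex
% C: regional generation cost,  x: power-grid state,  m: weather fields.
% Forward dependence: m -> x(m) -> C(x(m)).  The desired sensitivity is
\[
  \frac{dC}{dm} \;=\; \left(\frac{\partial x}{\partial m}\right)^{T}
                      \frac{\partial C}{\partial x},
\]
% evaluated adjoint-wise: a single backward (adjoint) NWP-model run
% propagates the economic gradient dC/dx onto all weather variables at
% once, instead of one forward perturbation run per weather variable.
```

The adjoint direction is what makes a field-sized sensitivity (every grid point of wind and temperature) affordable: the cost is a few model runs, independent of the number of input variables.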
Proceedings of the 2010 Spring Simulation Multiconference, 2010
Inverse problems are of utmost importance in many fields of science and engineering. In the variational approach, inverse problems are formulated as constrained optimization problems, where the optimal estimate of the uncertain parameters is the minimizer of a certain cost function subject to the model constraints. The numerical solution of such optimization problems requires the derivatives of a chosen cost...
2012 SC Companion: High Performance Computing, Networking Storage and Analysis, 2012
Efficient use of exa-scale parallel architectures will require applications to display a tremendous degree of concurrency in order to effectively employ hundreds of thousands to millions of cores. Large-scale simulations of partial differential equations typically rely on a spatial domain decomposition approach, where the number of concurrent tasks is limited by the size of the spatial simulation domain. Time parallelism offers a promising approach to increase the degree of concurrency. Parareal is a popular, non-intrusive, iterative parallel-in-time algorithm that uses both low- and high-accuracy numerical solvers. While the high-accuracy solutions are computed in parallel, the low-accuracy ones are serial, which considerably hinders Parareal's scalability, and therefore its potential usefulness in exa-scale environments. This paper proposes a nonlinear optimization approach to exploiting time parallelism. As in the traditional Parareal approach, the time interval is partitioned into subdomains, and local time integrations are carried out in parallel. The objective cost function quantifies the mismatch of local solutions between adjacent time subintervals. The optimization problem is solved iteratively using gradient-based methods. The necessary gradients and Hessian-vector products involve only ideally parallel computations and are therefore highly scalable. Thus the proposed approach has the potential to make time parallelism an essential ingredient for exa-scale applications. The feasibility of the proposed algorithm is studied in the context of WRF (Weather Research and Forecasting), a large-scale numerical weather prediction model. The derivative information required for optimization is obtained with the help of adjoint models. Implementation details and benefits of the new approach are discussed.
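A minimal sketch of this optimization formulation on a toy scalar ODE with an exact local propagator (all sizes illustrative): the unknowns are the subinterval boundary states, the cost is the adjacent-subinterval mismatch, and every local integration inside the cost is independent and hence ideally parallel. Here a generic quasi-Newton solver stands in for the adjoint-supplied gradients of the paper.

```python
import numpy as np
from scipy.optimize import minimize

P, dt = 8, 0.5            # number of time subintervals, subinterval length
x0 = 1.0                  # known initial condition of x' = -x

def propagate(u):
    # Local high-accuracy integration over one subinterval (exact here).
    return u * np.exp(-dt)

def mismatch_cost(u):
    # u[i] = candidate state at the start of subinterval i+1.
    starts = np.concatenate(([x0], u))
    ends = propagate(starts)        # all local runs: ideally parallel
    return float(np.sum((ends[:-1] - u) ** 2))

u0 = np.full(P - 1, x0)             # crude first guess (constant in time)
res = minimize(mismatch_cost, u0, method="L-BFGS-B")

exact = x0 * np.exp(-dt * np.arange(1, P))
print("max error at subinterval boundaries:", np.max(np.abs(res.x - exact)))
```

At the minimum the mismatch vanishes and the boundary states reproduce the serial trajectory; the degree of concurrency is set by the number of time subintervals rather than by the spatial domain alone.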