Bernard Haasdonk - Academia.edu
Papers by Bernard Haasdonk
arXiv (Cornell University), May 15, 2021
Kernel-based methods yield approximation models that are flexible, efficient and powerful. In particular, they utilize fixed feature maps of the data, often associated with strong analytical results that prove their accuracy. On the other hand, the recent success of machine learning methods has been driven by deep neural networks (NNs). They achieve significant accuracy on very high-dimensional data, in that they are also able to learn efficient data representations or data-based feature maps. In this paper, we leverage a recent deep kernel representer theorem to connect the two approaches and understand their interplay. In particular, we show that the use of special types of kernels yields models reminiscent of neural networks that are founded in the same theoretical framework as classical kernel methods, while enjoying many computational properties of deep neural networks. Notably, the introduced Structured Deep Kernel Networks (SDKNs) can be viewed as neural networks with optimizable activation functions obeying a representer theorem. Analytic properties show their universal approximation capabilities in different asymptotic regimes of unbounded numbers of centers, width and depth. Especially in the case of unbounded depth, the construction is asymptotically better than corresponding constructions for ReLU neural networks, which is made possible by the flexibility of kernel approximation. Keywords: Kernel methods • Neural networks • Deep learning • Representer theorem • Universal approximation
Constructive Approximation
Data-dependent greedy algorithms in kernel spaces are known to provide fast-converging interpolants, while being extremely easy to implement and efficient to run. Despite this experimental evidence, no detailed theory has yet been presented. This situation is unsatisfactory, especially when compared to the case of the data-independent P-greedy algorithm, for which optimal convergence rates are available, despite its performance usually being inferior to that of target-data-dependent algorithms. In this work, we fill this gap by first defining a new scale of greedy algorithms for interpolation that comprises all the existing ones in a unified analysis, where the degree of dependency of the selection criterion on the functional data is quantified by a real parameter. We then prove new convergence rates in which this degree is taken into account, and we show that, possibly up to a logarithmic factor, target-data-dependent selection strategies provide faster convergence. In particular,...
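A minimal sketch of a target-data-dependent selection rule of the kind analyzed here is f-greedy: at each step, add the sample with the largest current interpolation residual and refit. The direct refitting, the Gaussian kernel, and all names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Sketch of f-greedy selection: each step adds the sample with the largest
# interpolation residual |f(x) - s_n(x)| and refits the interpolant.
# A practical code would update the interpolant incrementally instead.

def gauss(X, Y, eps=4.0):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-eps^2 * ||x - y||^2).
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-(eps ** 2) * d2)

def f_greedy(X, y, n, eps=4.0):
    idx = [int(np.argmax(np.abs(y)))]       # start at the largest |f| value
    for _ in range(n - 1):
        Xs, ys = X[idx], y[idx]
        coeffs = np.linalg.solve(gauss(Xs, Xs, eps), ys)
        resid = np.abs(y - gauss(X, Xs, eps) @ coeffs)
        resid[idx] = 0.0                    # selected samples are matched exactly
        idx.append(int(np.argmax(resid)))
    return idx

# The selection adapts to the target: for f(x) = sin(2*pi*x), ten greedily
# chosen samples interpolate the whole candidate grid to high accuracy.
X = np.linspace(0.0, 1.0, 200)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
idx = f_greedy(X, y, 10)
coeffs = np.linalg.solve(gauss(X[idx], X[idx]), y[idx])
max_err = np.max(np.abs(y - gauss(X, X[idx]) @ coeffs))
```

Because the residual vanishes at already-selected samples, the rule automatically places new centers where the target is still poorly resolved.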
arXiv (Cornell University), Jul 24, 2019
This chapter deals with kernel methods as a special class of techniques for surrogate modeling. Kernel methods have proven to be efficient in machine learning, pattern recognition and signal analysis due to their flexibility, excellent experimental performance and elegant functional-analytic background. These data-based techniques provide so-called kernel expansions, i.e., linear combinations of kernel functions, which are generated from given input-output point samples that may be arbitrarily scattered. In particular, these techniques are meshless, i.e. they do not require or depend on a grid, and hence are less prone to the curse of dimensionality, even for high-dimensional problems. In contrast to projection-based model reduction, we do not necessarily assume a high-dimensional model, but rather a general function that models input-output behavior within some simulation context. This could be a micro-model in a multiscale simulation, a submodel in a coupled system, an initialization function for solvers, a coefficient function in partial differential equations (PDEs), etc. First, kernel surrogates can be useful if the input-output function is expensive to evaluate, e.g. if it is the result of a finite element simulation; here, acceleration can be obtained by sparse kernel expansions. Second, if a function is available only via measurements or a few function evaluation samples, kernel approximation techniques can provide function surrogates that allow global evaluation. We present some important kernel approximation techniques, namely kernel interpolation, greedy kernel approximation and support vector regression. Pseudo-code is provided for ease of reproducibility. In order to illustrate the main features, commonalities and differences, we compare these techniques on a real-world application. The experiments clearly indicate the enormous acceleration potential.
arXiv (Cornell University), Jul 28, 2022
We consider the meshless solution of PDEs via symmetric kernel collocation using greedy kernel methods. In this way we avoid the need for mesh generation, which can be challenging for non-standard domains or manifolds. We introduce and discuss different kinds of greedy selection criteria, such as the PDE-P-greedy and the PDE-f-greedy, for collocation point selection. Subsequently, we analyze the convergence rates of these algorithms and provide bounds on the approximation error in terms of the number of greedily selected points. In particular, we prove that target-data-dependent algorithms, i.e. those using knowledge of the right-hand-side functions of the PDE, exhibit faster convergence rates. The provided analysis is applicable to PDEs both on domains and on manifolds. This fact and the advantages of target-data-dependent algorithms are highlighted by numerical examples.
MATHMOD 2022 Discussion Contribution Volume
arXiv (Cornell University), Mar 18, 2022
A fluid-structure interaction model in a port-Hamiltonian representation is derived for a classical guitar. We combine the laws of continuum mechanics for solids and fluids within a unified port-Hamiltonian (pH) modeling approach by adapting the discretized equations at the second-order level in order to obtain a damped multi-physics model. The high dimensionality of the resulting system is reduced by model order reduction. The article focuses on pH-systems in different state transformations, a variety of basis generation techniques, as well as structure-preserving model order reduction approaches that are independent of the projection basis. As the main contribution, a thorough comparison of these method combinations is conducted. In contrast to typical frequency-based simulations in acoustics, transient time simulations of the system are presented. The approach is embedded into a straightforward workflow of sophisticated commercial software modeling and flexible in-house software for multi-physics coupling and model order reduction.
arXiv (Cornell University), Mar 23, 2022
Error estimates for kernel interpolation in Reproducing Kernel Hilbert Spaces (RKHS) usually assume quite restrictive properties on the shape of the domain, especially in the case of infinitely smooth kernels like the popular Gaussian kernel. In this paper we leverage an analysis of greedy kernel algorithms to prove that it is possible to obtain convergence results (in the number of interpolation points) for kernel interpolation on arbitrary domains Ω ⊂ R^d, thus allowing for non-Lipschitz domains including, e.g., cusps and irregular boundaries. In particular, we show that, when passing to a smaller domain Ω̃ ⊂ Ω ⊂ R^d, the convergence rate does not deteriorate, i.e. the convergence rates are stable with respect to restriction to a subset. The impact of this result is illustrated on examples of kernels of finite as well as infinite smoothness, like the Gaussian kernel. A comparison to approximation in Sobolev spaces is drawn, where the shape of the domain Ω does have an impact on the approximation properties. Numerical experiments illustrate and confirm the theoretical results.
Lecture Notes in Computational Science and Engineering, 2020
Greedy kernel approximation algorithms are successful techniques for sparse and accurate data-based modelling and function approximation. Based on a recent idea for the stabilization [11] of such algorithms in the scalar-output case, we here consider the vectorial extension built on VKOGA [12]. We introduce the so-called γ-restricted VKOGA, comment on its analytical properties and present a numerical evaluation on data from a clinically relevant application, the modelling of the human spine. The experiments show that the new stabilized algorithms result in improved accuracy and stability over the non-stabilized algorithms.
Model Reduction of Parametrized Systems, 2017
We investigate feedback control for infinite-horizon optimal control problems for partial differential equations. The method is based on the coupling between Hamilton-Jacobi-Bellman (HJB) equations and model reduction techniques. It is well known that HJB equations suffer from the so-called curse of dimensionality and, therefore, a reduction of the dimension of the system is mandatory. In this report we focus on the infinite-horizon optimal control problem with quadratic cost functionals. We compare several model reduction methods, such as Proper Orthogonal Decomposition, Balanced Truncation and a new approach based on the algebraic Riccati equation. Finally, we present numerical examples and discuss several features of the different methods, analyzing the advantages and disadvantages of each reduction method.
Model reduction of evolution problems on parametrized geometries
Reduced-Order Modeling (ROM) for Simulation and Optimization, 2018
Modern simulation scenarios frequently require multi-query or real-time responses of simulation models for statistical analysis, optimization, or process control. However, the underlying simulation models may be very time-consuming, rendering the simulation task difficult or infeasible. This motivates the need for rapidly computable surrogate models. We address the case of surrogate modeling of functions from vectorial input to vectorial output spaces. These appear, for instance, in the simulation of coupled models or in the approximation of general input-output maps. We review some recent methods and theoretical results in the field of greedy kernel approximation schemes. In particular, we recall the vectorial kernel orthogonal greedy algorithm (VKOGA) for approximating vector-valued functions. We collect some recent convergence statements that provide a sound foundation for these algorithms, in particular quasi-optimal convergence rates in the case of kernels inducing Sobolev spaces. We provide some initial experiments that can be obtained with nonsymmetric greedy kernel approximation schemes. The results indicate better stability and overall more accurate models in situations where the input data locations are not equally distributed.
Lecture Notes in Computational Science and Engineering, 2020
Computational Kinematics, 2017
The kinematics and dynamics of cable-driven parallel robots are affected by the cables used as force- and motion-transmitting elements. The flexural rigidity of these cables is of major interest for better understanding the dynamics of these systems and for improving their accuracy. The approach for modeling spatial cable dynamics presented in this paper is based on the modified rigid-finite-element method using rigid bodies and spring-damper elements. With this, a simulation of a planar cable-driven parallel robot with 3 degrees of freedom is constructed as a multi-body dynamics model. Under consideration of holonomic constraints and Baumgarte stabilization, a framework for the simulation of cable-driven parallel robots, including the dynamics of the cables, is developed and presented.
Dolomites Research Notes on Approximation, 2016
Kernel-based methods provide flexible and accurate algorithms for the reconstruction of functions from meshless samples. A major question in the use of such methods is the influence of the sample locations on the behavior of the approximation, and feasible optimal strategies are not known for general problems. Nevertheless, efficient greedy point-selection strategies are known. This paper gives a proof of the convergence rate of the data-independent P-greedy algorithm, based on the application of the convergence theory for greedy algorithms in reduced basis methods. The resulting rate of convergence is shown to be near-optimal in the case of kernels generating Sobolev spaces. As a consequence, this convergence rate proves that, for kernels of Sobolev spaces, the points selected by the algorithm are asymptotically uniformly distributed, as conjectured in the paper where the algorithm was introduced.
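The data-independent P-greedy rule analyzed here selects, at each step, the candidate maximizing the power function P_n(x)^2 = k(x,x) - k(x,X_n) K_n^{-1} k(X_n,x). The following is a minimal, non-incremental sketch under illustrative assumptions (Gaussian kernel, naive refactorization each step); practical implementations update a Newton basis incrementally:

```python
import numpy as np

# Sketch of P-greedy point selection: maximize the squared power function
#     P_n(x)^2 = k(x, x) - k(x, X_n) K_n^{-1} k(X_n, x)
# over a finite candidate set. Selection depends only on the point
# geometry, not on any target data.

def gauss(X, Y, eps=2.0):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-eps^2 * ||x - y||^2).
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-(eps ** 2) * d2)

def p_greedy(cands, n, eps=2.0):
    idx = [0]  # P_0(x)^2 = k(x, x) is constant for the Gaussian: start anywhere
    for _ in range(n - 1):
        Xs = cands[idx]
        K = gauss(Xs, Xs, eps)
        Kx = gauss(cands, Xs, eps)                  # cross-kernel matrix
        sol = np.linalg.solve(K, Kx.T)
        p2 = 1.0 - np.einsum('ij,ji->i', Kx, sol)   # squared power function
        p2[idx] = -np.inf                           # never re-select a point
        idx.append(int(np.argmax(p2)))
    return idx

# On a uniform candidate grid the selected points spread out over the
# interval, matching the asymptotically uniform distribution proven here.
cands = np.linspace(0.0, 1.0, 101)[:, None]
selected = p_greedy(cands, 5)
```

Starting from x = 0, the rule next picks the far endpoint, then the midpoint, successively halving the fill distance, which is exactly the space-filling behavior the convergence proof exploits.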
arXiv: Dynamical Systems, 2020
For certain dynamical systems it is possible to significantly simplify the study of stability by means of center manifold theory. This theory allows one to isolate the complicated asymptotic behavior of the system close to a non-hyperbolic equilibrium point, and to obtain meaningful predictions of its behavior by analyzing a reduced-dimensional problem. Since the manifold is usually not known, approximation methods are of great interest for obtaining qualitative estimates. In this work, we use a data-based greedy kernel method to construct a suitable approximation of the manifold close to the equilibrium. The data are collected by repeated numerical simulation of the full system by means of a high-accuracy solver, which generates sets of discrete trajectories that are then used to construct a surrogate model of the manifold. The method is tested on different examples which show promising performance and good accuracy.
Standard kernel methods for machine learning usually struggle when dealing with large datasets. We review the recently introduced Structured Deep Kernel Network (SDKN) approach, which is capable of dealing with high-dimensional and huge datasets and enjoys the approximation properties typical of standard machine learning methods. We extend the SDKN to combine it with standard machine learning modules and compare it with neural networks on the scientific challenge of data-driven prediction of closure terms of turbulent flows. We show experimentally that SDKNs are capable of dealing with large datasets and achieve near-perfect accuracy on the given application.
IUTAM Symposium on Model Order Reduction of Coupled Systems, Stuttgart, Germany, May 22–25, 2018, 2019
We consider the equation of motion of an elastic multibody system in absolute coordinate formulation (ACF). The resulting nonlinear second-order DAE of index two has a unique solution and is reduced using the strong POD-greedy method. The reduced model is certified by deriving a posteriori error estimators, which are independent of the model order reduction (MOR) method used to obtain the projection basis. The first error estimation technique, which we establish in this paper, is a first-order linear integro-differential equation. It relies on the gradient of a function and can be integrated along with the reduced simulation (in situ). The second error estimation technique is hierarchical and requires a more enriched basis in order to estimate the error in the solution due to a coarser basis. To verify and illustrate the efficacy of the estimators, reproductive and predictive numerical experiments are performed on a coupled elastic multibody system consisting of a double elastic pe...