Adaptive meshfree solution of linear partial differential equations with PDE-greedy kernel methods
Related papers
On the versatility of meshless kernel methods
2003
Under very weak conditions any well-posed linear problem of Applied Analysis can be solved by certain meshless kernel methods to any prescribed accuracy. 1 Linear Problems. The fairly general statement made in the abstract needs some specification. We assume a problem to be posed that is solved by a function u in some Hilbert space U with inner product (·, ·)_U. Note that this is satisfied for all problems that can be formulated in Sobolev spaces, for instance, but we also allow problems with strong solutions in Hilbert subspaces of differentiable or Hölder continuous functions. The elements of U are viewed as functions, and the elements λ ∈ U* are continuous linear functionals that we use to describe data λ(u) of u, e.g. evaluations u ↦ δ_x(u) := u(x) or u ↦ (δ_x ∘ Δ)(u) = (Δu)(x). The problems should be formulated by requiring that the functionals of a (usually uncountable) set Λ, when applied to the solution u, attain certain prescribed values. This means that u solves λ(u) = f(λ) f...
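To make this setting concrete, here is a minimal sketch (my own illustration, not taken from the paper) of how a standard Dirichlet problem fits the functional formulation; the domain Ω and the data g, h are generic placeholders:
\[
\text{find } u \in U \text{ with } \lambda(u) = f(\lambda) \text{ for all } \lambda \in \Lambda .
\]
For the Poisson problem −Δu = g in Ω with u = h on ∂Ω one may take
\[
\Lambda = \{\delta_x \circ (-\Delta) : x \in \Omega\} \cup \{\delta_x : x \in \partial\Omega\},
\qquad f(\delta_x \circ (-\Delta)) = g(x), \quad f(\delta_x) = h(x).
\]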
Convergence rate of the data-independent P-greedy algorithm in kernel-based approximation
Dolomites Research Notes on Approximation, 2016
Kernel-based methods provide flexible and accurate algorithms for the reconstruction of functions from meshless samples. A major question in the use of such methods is the influence of the sample locations on the behavior of the approximation, and feasible optimal strategies are not known for general problems. Nevertheless, efficient and greedy point-selection strategies are known. This paper gives a proof of the convergence rate of the data-independent P-greedy algorithm, based on the application of the convergence theory for greedy algorithms in reduced basis methods. The resulting rate of convergence is shown to be near-optimal in the case of kernels generating Sobolev spaces. As a consequence, this convergence rate proves that, for kernels of Sobolev spaces, the points selected by the algorithm are asymptotically uniformly distributed, as conjectured in the paper where the algorithm was introduced.
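For readers unfamiliar with the selection rule, the following is a minimal sketch of the P-greedy algorithm (my own illustration, not code from the paper): at each step the next point is chosen where the power function of the points selected so far is largest. The Gaussian kernel, the shape parameter, and the candidate grid are illustrative assumptions.

import numpy as np

def kernel(X, Y, eps=1.0):
    # Gaussian kernel matrix K[i, j] = exp(-eps^2 * |X_i - Y_j|^2)  (illustrative choice)
    d2 = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)
    return np.exp(-eps ** 2 * d2)

def p_greedy(candidates, n_points, eps=1.0):
    # Pick n_points from 'candidates', each time maximizing the power function
    # P_n^2(x) = K(x, x) - k_n(x)^T K_n^{-1} k_n(x) of the points chosen so far.
    selected = [0]   # P_0 is constant for the Gaussian, so the first point is arbitrary
    for _ in range(n_points - 1):
        Xs = candidates[selected]
        K_ss = kernel(Xs, Xs, eps)
        K_cs = kernel(candidates, Xs, eps)
        p2 = 1.0 - np.einsum('ij,ij->i', K_cs @ np.linalg.inv(K_ss), K_cs)  # K(x, x) = 1
        selected.append(int(np.argmax(p2)))
    return candidates[selected]

# Usage: select 20 points from a uniform candidate grid on [0, 1]^2.
grid = np.stack(np.meshgrid(np.linspace(0, 1, 30), np.linspace(0, 1, 30)), -1).reshape(-1, 2)
print(p_greedy(grid, 20))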
Kernel techniques: From machine learning to meshless methods
Acta Numerica, 2006
Kernels are valuable tools in various fields of Numerical Analysis, including approximation, interpolation, meshless methods for solving partial differential equations, neural networks, and Machine Learning. This contribution explains why and how kernels are applied in these disciplines. It uncovers the links between them, as far as they are related to kernel techniques. It addresses non-expert readers and focuses on practical guidelines for using kernels in applications.
A Greedy Method for Solving Classes of PDE Problems
arXiv: Numerical Analysis, 2019
Motivated by the successful use of greedy algorithms for Reduced Basis Methods, a greedy method is proposed that selects N input data in an asymptotically optimal way to solve well-posed operator equations using these N data. The operator equations are defined as infinitely many equations given via a compact set of functionals in the dual of an underlying Hilbert space, and then the greedy algorithm, defined directly in the dual Hilbert space, selects N functionals step by step. When N functionals are selected, the operator equation is numerically solved by projection onto the span of the Riesz representers of the functionals. Orthonormalizing these yields useful Reduced Basis functions. By recent results on greedy methods in Hilbert spaces, the convergence rate is asymptotically given by Kolmogoroff N-widths and therefore optimal in that sense. However, these N-widths seem to be unknown in PDE applications. Numerical experiments show that for solving elliptic second-order Dirichlet...
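The projection step can be written out explicitly; the following is a brief sketch in standard symmetric-collocation notation (generic, not quoted from the paper). With selected functionals λ_1, ..., λ_N, kernel K of the underlying Hilbert space, and Riesz representers v_j = λ_j^y K(·, y), the numerical solution is
\[
u_N = \sum_{j=1}^{N} c_j v_j, \qquad \sum_{j=1}^{N} \lambda_i^{x}\lambda_j^{y} K(x,y)\, c_j = f(\lambda_i), \quad i = 1,\dots,N,
\]
and orthonormalizing the v_j, e.g. by factorizing the Gramian A_{ij} = λ_i^x λ_j^y K(x,y), yields the reduced basis functions mentioned above.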
Recovery of functions from weak data using unsymmetric meshless kernel-based methods
Applied Numerical Mathematics, 2008
Recent engineering applications successfully introduced unsymmetric meshless local Petrov-Galerkin (MLPG) schemes. As a step towards their mathematical analysis, this paper investigates nonstationary unsymmetric Petrov-Galerkin-type meshless kernel-based methods for the recovery of L2 functions from finitely many weak data. The results cover solvability conditions and error bounds in negative Sobolev norms with optimal rates. These rates are mainly determined by the approximation properties of the trial space, while choosing sufficiently many test functions ensures stability. Numerical examples are provided, supporting the theoretical results and leading to new questions for future research.
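As background, the weak-data recovery problem described here can be sketched as follows (generic notation, my own summary rather than the paper's): the trial space is spanned by kernel translates K(·, y_1), ..., K(·, y_n), the data are weak functionals μ_i(u) = ∫_Ω u ψ_i for test functions ψ_1, ..., ψ_m with m ≥ n, and one solves the generally unsymmetric, rectangular system
\[
\sum_{j=1}^{n} c_j\, \mu_i\big(K(\cdot, y_j)\big) = \mu_i(u), \qquad i = 1,\dots,m,
\]
in a least-squares sense; taking sufficiently many test functionals is what provides the stability referred to in the abstract.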
Filters, reproducing kernel, and adaptive meshfree method
Computational Mechanics
Reproducing kernel, with its intrinsic feature of moving averaging, can be utilized as a low-pass filter with scale decomposition capability. The discrete convolution of two nth order reproducing kernels with arbitrary support size in each kernel results in a filtered reproducing kernel function that has the same reproducing order. This property is utilized to separate the numerical solution into an unfiltered lower order portion and a filtered higher order portion. As such, the corresponding high-pass filter of this reproducing kernel filter can be used to identify the locations of high gradient, and consequently serves as an operator for error indication in meshfree analysis. In conjunction with the naturally conforming property of the reproducing kernel approximation, a meshfree adaptivity method is also proposed.
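Schematically (in generic notation, not the paper's), the filtering idea can be summarized as follows: if Ψ_i are the reproducing kernel shape functions and Ψ̄_i the filtered shape functions obtained from the convolved kernel, the numerical solution splits into a filtered part and a high-frequency remainder,
\[
u^h(x) = \sum_i \Psi_i(x)\, u_i, \qquad \bar u^h(x) = \sum_i \bar\Psi_i(x)\, u_i, \qquad \eta(x) = \big|u^h(x) - \bar u^h(x)\big|,
\]
and the indicator η, which becomes large near steep gradients, drives the adaptive refinement.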
A computational tool for comparing all linear PDE solvers -- Optimal methods are meshless
2013
The paper starts out with a computational technique that makes it possible to compare all linear methods for PDE solving that use the same input data. This is done by writing them as linear recovery formulas for solution values as linear combinations of the input data. The norm of these reproduction formulas on a fixed Sobolev space then serves as a quality criterion that allows a fair comparison of all linear methods with the same inputs, including finite-element, finite-difference and meshless local Petrov-Galerkin techniques. A number of illustrative examples will be provided. As a byproduct, it turns out that a unique error-optimal method exists. It necessarily outperforms any other competing technique using the same data, e.g. those just mentioned, and it is necessarily meshless if solutions are written "entirely in terms of nodes" (Belytschko et al. 1996). On closer inspection, it turns out that it coincides with symmetric meshless collocation carried out with the kernel of the Hilbert space used for error evaluation, e.g. with the kernel of the Sobolev space used. This technique has been around since at least 1998, but its optimality properties have gone unnoticed so far.
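The comparison criterion itself admits a compact description (a sketch in generic notation; the worst-case error is measured in the native Hilbert space of a kernel K, e.g. a Sobolev kernel). A linear method using data λ_1(u), ..., λ_N(u) recovers a solution value by u(x) ≈ Σ_j a_j(x) λ_j(u), and the quality measure is the norm of the error functional
\[
\epsilon_x = \delta_x - \sum_{j=1}^{N} a_j(x)\,\lambda_j, \qquad
\|\epsilon_x\|^2 = K(x,x) - 2\sum_{j} a_j(x)\,\lambda_j^{y}K(x,y) + \sum_{j,k} a_j(x)a_k(x)\,\lambda_j^{x}\lambda_k^{y}K(x,y),
\]
which is computable from the kernel alone; minimizing it over the weights a_j(x) gives the error-optimal method referred to above.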
Direct discretizations with applications to meshless methods for PDEs
2013
A central problem of numerical analysis is the approximate evaluation of integrals or derivatives of functions. In more generality, this is the approximate evaluation of a linear functional defined on a space of functions. Users often just have values of a function u at scattered points x_1, ..., x_N in the domain Ω of u, and then the value λ(u) of a linear functional λ must be approximated via direct approximation formulae λ(u) ≈ Σ_{j=1}^{N} a_j u(x_j), i.e. we approximate λ by point evaluation functionals δ_{x_j}: u ↦ u(x_j). Such direct discretizations include classical cases like Newton-Cotes integration formulas or divided differences as approximations of derivatives. They are central for many methods solving partial differential equations, and their error analysis has a long-standing history going back to Peano and his kernel theorem. They also have a strong connection to Approximation Theory. Here, we apply certain optimizations to certain classes of such discretizations, and we evaluat...
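As a concrete (and purely illustrative) instance of such a direct discretization, the small sketch below computes kernel-optimal weights a_j for approximating Δu(z) from scattered values u(x_j); the Gaussian kernel, its shape parameter, and the node layout are arbitrary choices, not taken from the paper.

import numpy as np

def gauss(r2, eps):
    # Gaussian kernel written as a function of the squared distance r2 (illustrative choice)
    return np.exp(-eps ** 2 * r2)

def gauss_laplacian(r2, eps, dim):
    # Laplacian of the Gaussian kernel with respect to one argument, also in terms of r2
    return (4.0 * eps ** 4 * r2 - 2.0 * dim * eps ** 2) * np.exp(-eps ** 2 * r2)

def laplacian_weights(nodes, z, eps):
    # Kernel-based direct discretization:  Delta u(z) ~ sum_j a_j u(x_j),
    # with weights solving K a = b, where K_ij = K(x_i, x_j) and b_i = Delta_y K(x_i, y)|_{y=z}.
    r2_nodes = np.sum((nodes[:, None, :] - nodes[None, :, :]) ** 2, axis=-1)
    r2_z = np.sum((nodes - z) ** 2, axis=-1)
    return np.linalg.solve(gauss(r2_nodes, eps), gauss_laplacian(r2_z, eps, nodes.shape[1]))

# Usage: 5 x 5 grid of nodes on [0, 1]^2, evaluation point z in the center.
xx, yy = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
nodes = np.column_stack([xx.ravel(), yy.ravel()])
z = np.array([0.5, 0.5])
a = laplacian_weights(nodes, z, eps=2.0)
u_vals = np.sin(np.pi * nodes[:, 0]) * np.sin(np.pi * nodes[:, 1])
print(a @ u_vals)   # compare with the exact value Delta u(z) = -2*pi^2 ≈ -19.74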
An Adaptive–Hybrid Meshfree Approximation Method
It is now commonly agreed that the global radial basis functions method is an attractive approach for approximating smooth functions. This superiority does not come for free; one must find ways to circumvent the associated problem of ill-conditioning and the high computational cost of solving dense matrix systems. We previously proposed different variants of adaptive methods for selecting proper trial subspaces so that the instability caused by inappropriate shape parameters is minimized. In contrast, compactly supported radial basis functions impose weaker smoothness requirements. By settling for an algebraic order of convergence only, the compactly supported radial basis functions method, provided the support radii are properly chosen, can approximate functions with less smoothness. The reality is that end-users must know the functions to be approximated a priori in order to decide which method to use; this is not practical if one is solving a time-evolving partial differential equation: the solution could be smooth at the beginning, but shocks may form later in time. In this paper, we propose a hybrid algorithm that makes use of both global and compactly supported radial basis functions, together with other previously developed techniques, for meshfree approximation with minimal fine-tuning. The first contribution is an adaptive node refinement scheme. Secondly, we apply the global radial basis functions (with adaptive subspace selection) on the adaptively generated data sites, and lastly the compactly supported radial basis functions (with adaptive support selection), so that the combination can be used as a black-box algorithm for robust approximation of a wider class of functions and for solving partial differential equations.
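The adaptive node refinement component can be illustrated with a very small 1D sketch (my own simplification, not the authors' algorithm): interpolate on the current nodes with a global Gaussian RBF, estimate the error at the midpoints between neighboring nodes, and insert new nodes where the estimate exceeds a tolerance. The target function, kernel, shape parameter, and tolerance are placeholder choices, and the shape-parameter handling is deliberately simplistic.

import numpy as np

def rbf_interp(xs, fs, eps):
    # Global Gaussian RBF interpolant on the nodes xs with values fs (illustrative kernel choice)
    K = np.exp(-eps ** 2 * (xs[:, None] - xs[None, :]) ** 2)
    c = np.linalg.solve(K, fs)
    return lambda t: np.exp(-eps ** 2 * (t[:, None] - xs[None, :]) ** 2) @ c

def adaptive_nodes(f, a, b, tol=1e-2, eps=10.0, n0=11, max_iter=8):
    # Start from a coarse uniform grid and insert midpoints wherever the interpolant
    # disagrees with the target function by more than tol.
    xs = np.linspace(a, b, n0)
    for _ in range(max_iter):
        s = rbf_interp(xs, f(xs), eps)
        mids = 0.5 * (xs[:-1] + xs[1:])
        bad = np.abs(s(mids) - f(mids)) > tol
        if not bad.any():
            break
        xs = np.sort(np.concatenate([xs, mids[bad]]))
    return xs

# Usage: a function that is flat on the left and steep near x = 0.7.
f = lambda x: np.tanh(20.0 * (x - 0.7))
print(adaptive_nodes(f, 0.0, 1.0))   # refined nodes cluster around the steep region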
A gradient reproducing kernel collocation method for boundary value problems
2013
Earlier work in the development of direct strong-form collocation methods, such as the reproducing kernel collocation method (RKCM), addressed the domain integration issue of Galerkin-type meshfree methods, such as the reproducing kernel particle method, but at increased computational complexity, because higher-order derivatives of the approximation functions must be taken and a large number of collocation points is needed for optimal convergence. In this work, we address the computational complexity of RKCM while achieving optimal convergence by introducing a gradient reproducing kernel approximation. The proposed gradient RKCM reduces the order of differentiation to first order for solving second-order PDEs with strong-form collocation. We also show that, unlike the typical strong-form collocation method, where a significantly larger number of collocation points than source points is needed for optimal convergence, the same number of collocation points and source points can be used in gradient RKCM. We further show that the same order of convergence is achieved in the primary unknown and its first-order derivative, owing to the imposition of gradient reproducing conditions. Numerical examples are given to verify the analytical predictions.
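The gradient approximation underlying this method can be sketched as follows (schematic notation; the exact construction and its analysis are in the paper). Besides the standard reproducing kernel approximation of u, separate shape functions are built that approximate the first derivatives directly, so that second-order PDEs can be collocated with only first-order differentiation:
\[
u^h(x) = \sum_I \Psi_I(x)\, u_I, \qquad \frac{\partial u^h}{\partial x_k}(x) \approx \sum_I \Psi_I^{[k]}(x)\, u_I,
\]
where the Ψ_I^{[k]} are constructed to satisfy gradient reproducing conditions, i.e. they reproduce the first derivatives of all monomials up to the chosen order.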