A trust region method for implicit orthogonal distance regression

Trust region algorithms for the nonlinear least distance problem

Numerical Algorithms, 1995

The nonlinear least distance problem is a special case of equality constrained optimization. Let a curve or surface be given in implicit form via the equation f(x) = 0, x ∈ R^d, and let z ∈ R^d be a fixed data point. We discuss two algorithms for solving the following problem: find a point x* such that f(x*) = 0 and ||z - x*||^2 is minimal among all such x. The algorithms presented use the trust region approach in which, at each iteration, an approximation to the objective function or merit function is minimized in a given neighborhood (the trust region) of the current iterate. Among other things, this allows one to prove global convergence of the algorithm.
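
For a concrete sense of the constrained problem above, the sketch below solves min ||z - x||^2 subject to f(x) = 0 using SciPy's generic trust-region constrained solver. This is not the paper's specific algorithm, just the same problem handed to an off-the-shelf trust-region method; the unit-circle f and the data point z are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize, NonlinearConstraint

# Illustrative implicit curve: the unit circle f(x) = x0^2 + x1^2 - 1 = 0.
f = lambda x: x[0]**2 + x[1]**2 - 1.0
grad_f = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])  # Jacobian, shape (1, 2)

z = np.array([2.0, 1.0])               # fixed data point (assumption)
obj = lambda x: np.sum((z - x)**2)     # squared distance ||z - x||^2
jac = lambda x: -2.0 * (z - x)

constraint = NonlinearConstraint(f, 0.0, 0.0, jac=grad_f)
res = minimize(obj, x0=z, jac=jac,     # an infeasible start is fine here
               method='trust-constr', constraints=[constraint])
print(res.x, np.sqrt(res.fun))         # foot point x* and distance ||z - x*||
```

For z = (2, 1) the foot point is z/||z|| and the orthogonal distance is sqrt(5) - 1 ≈ 1.236, which the solver reproduces.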

Efficient trust region method for nonlinear least squares

Kybernetika, 1996

Combined trust region methods for nonlinear least squares

Kybernetika, 1996

Fitting curves and surfaces with constrained implicit polynomials

IEEE Transactions on Pattern Analysis and Machine Intelligence, 1999

A problem which often arises while fitting implicit polynomials to 2D and 3D data sets is the following: although the data set is simple, the fit exhibits undesired phenomena, such as loops, holes, extraneous components, etc. Such problems have previously been addressed by optimizing heuristic cost functions, which penalize some of these topological problems in the fit.

l1 and l∞ ODR Fitting of Geometric Elements

msl.uni-bonn.de

We consider the fitting of geometric elements, such as lines, planes, circles, cones, and cylinders, in such a way that the sum of distances or the maximal distance from the element to the data points is minimized. We refer to this kind of distance based fitting as orthogonal distance regression or ODR. We present a separation of variables algorithm for l1 and l∞ ODR fitting of geometric elements. The algorithm is iterative and allows the element to be given in either implicit form f(x, β) = 0 or in parametric form x = g(t, β), where β is the vector of shape parameters, x is a 2- or 3-vector, and t is a vector of location parameters. The algorithm may even be applied in cases, such as with ellipses, in which a closed form expression for the distance is either not available or is difficult to compute. For l1 and l∞ fitting, the norm of the gradient is not available as a stopping criterion, as it is not continuous. We present a stopping criterion that handles both the l1 and the l∞ case, and is based on a suitable characterization of the stationary points.
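
For elements whose point-to-element distance has a closed form, the l∞ objective described above can be written down directly. The sketch below fits a circle by minimizing the maximal orthogonal distance; it uses a derivative-free simplex search rather than the paper's separation-of-variables algorithm (the objective is nonsmooth, so the gradient is unavailable, as noted above), and the synthetic data are an assumption.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic noisy points on a circle of radius 2 centred at (1, -0.5).
rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 40)
pts = np.c_[np.cos(t), np.sin(t)] * 2.0 + [1.0, -0.5] + rng.normal(0, 0.05, (40, 2))

def linf_residual(params):
    cx, cy, r = params
    d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)   # distance of each point to the centre
    return np.max(np.abs(d - r))                   # worst-case orthogonal distance

c0 = pts.mean(axis=0)                              # centroid as initial centre
r0 = np.mean(np.hypot(*(pts - c0).T))              # mean distance as initial radius
res = minimize(linf_residual, x0=[c0[0], c0[1], r0], method='Nelder-Mead')
print(res.x, res.fun)                              # fitted (cx, cy, r) and max distance
```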

The Levenberg-Marquardt method for nonlinear least squares curve-fitting problems

The Levenberg-Marquardt method is a standard technique for solving nonlinear least squares problems. Least squares problems arise when fitting a parameterized function to a set of measured data points by minimizing the sum of the squares of the errors between the data points and the function; the problem is nonlinear when the function is not linear in the parameters. Nonlinear least squares methods iteratively improve the parameter values to reduce this sum of squared errors. The Levenberg-Marquardt curve-fitting method is a combination of two minimization methods: the gradient descent method and the Gauss-Newton method. In the gradient descent method, the sum of the squared errors is reduced by updating the parameters in the direction of the greatest reduction of the least squares objective. In the Gauss-Newton method, the sum of the squared errors is reduced by assuming the least squares function is locally quadratic and finding the minimum of that quadratic. The Levenberg-Marquardt method acts more like a gradient descent method when the parameters are far from their optimal values, and more like the Gauss-Newton method when the parameters are close to their optimal values. This document describes these methods and illustrates the use of software to solve nonlinear least squares curve-fitting problems.
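
The blend described above can be written as a damped normal-equations update: a damping parameter λ interpolates between gradient descent (large λ) and Gauss-Newton (small λ). Below is a minimal sketch for an assumed exponential model y = a·exp(b·x); the update uses Marquardt's diagonal scaling with a simple accept/reject rule, and the model, data, and names are all illustrative.

```python
import numpy as np

def model(p, x):
    return p[0] * np.exp(p[1] * x)

def jacobian(p, x):
    # Columns: partial derivatives of the model w.r.t. a and b.
    return np.column_stack([np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)])

def levenberg_marquardt(x, y, p, lam=1e-3, iters=50):
    for _ in range(iters):
        r = y - model(p, x)                         # residuals
        J = jacobian(p, x)
        A = J.T @ J
        g = J.T @ r
        # Damped normal equations: large lam -> gradient descent step,
        # small lam -> Gauss-Newton step (the blend described above).
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), g)
        if np.sum((y - model(p + step, x))**2) < np.sum(r**2):
            p, lam = p + step, lam * 0.5            # accept; trust the model more
        else:
            lam *= 2.0                              # reject; damp harder
    return p

x = np.linspace(0, 1, 50)
y = 2.5 * np.exp(1.3 * x) + np.random.default_rng(1).normal(0, 0.05, 50)
print(levenberg_marquardt(x, y, p=np.array([1.0, 1.0])))  # approx [2.5, 1.3]
```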

Estimation of planar curves, surfaces, and nonplanar space curves defined by implicit equations with applications to edge and range image segmentation

IEEE Transactions on Pattern Analysis and Machine Intelligence, 1991

This paper addresses the problem of parametric representation and estimation of complex planar curves in 2-D, surfaces in 3-D and nonplanar space curves in 3-D. Curves and surfaces can be defined either parametrically or implicitly, and we use the latter representation. A planar curve is the set of zeros of a smooth function of two variables X-Y, a surface is the set of zeros of a smooth function of three variables X-Y-Z, and a space curve is the intersection of two surfaces, which are the set of zeros of two linearly independent smooth functions of three variables X-Y-Z. For example, the surface of a complex object in 3-D can be represented as a subset of a single implicit surface, with similar results for planar and space curves. We show how this unified representation can be used for object recognition, object position estimation, and segmentation of objects into meaningful subobjects, that is, the detection of "interest regions" that are more complex than high curvature regions and, hence, more useful as features for object recognition. Fitting implicit curves and surfaces to data would ideally be based on minimizing the mean square distance from the data points to the curve or surface. Since the distance from a point to a curve or surface cannot be computed exactly by direct methods, the approximate distance, which is a first-order approximation of the real distance, is introduced, generalizing and unifying previous results. We fit implicit curves and surfaces to data by minimizing the approximate mean square distance, which is a nonlinear least squares problem. We show that in certain cases this problem reduces to the generalized eigenvector fit, which is the minimization of the sum of squares of the values of the functions that define the curves or surfaces under a quadratic constraint function of the data. This fit is computationally cheap, readily parallelizable, and, hence, easily computed in real time. In general, the generalized eigenvector fit provides a very good initial estimate for the iterative minimization of the approximate mean square distance. Although we are primarily interested in the 2-D and 3-D cases, the methods developed herein are dimension independent. We show that in the case of algebraic curves and surfaces, i.e., those defined by sets of zeros of polynomials, the minimizers of the approximate mean square distance and the generalized eigenvector fit are invariant with respect to similarity transformations. Thus, the generalized eigenvector fit is independent of the choice of coordinate system, which is a very desirable property for object recognition, position estimation, and the stereo matching problem. Finally, as applications of the previous techniques, we illustrate the concept of "interest regions".
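
The first-order "approximate distance" mentioned above is |f(z)| / ||∇f(z)||, i.e., the distance from z to the zero set of the linearization of f at z. A small check on an assumed circle of radius 2, where the exact distance is known:

```python
import numpy as np

# Approximate distance from z to the zero set of f: |f(z)| / ||grad f(z)||.
# Illustrative f: a circle of radius 2, where the true distance is known.
f = lambda z: z[0]**2 + z[1]**2 - 4.0
grad_f = lambda z: np.array([2.0 * z[0], 2.0 * z[1]])

z = np.array([3.0, 0.0])
approx = abs(f(z)) / np.linalg.norm(grad_f(z))   # 5/6 ≈ 0.833
exact = abs(np.hypot(*z) - 2.0)                  # 1.0
print(approx, exact)   # first-order only, so the two differ
```

Minimizing the mean of the squared approximate distances over all data points is then the nonlinear least squares problem the paper solves.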

Stable fitting of 2D curves and 3D surfaces by implicit polynomials

IEEE Transactions on Pattern Analysis and Machine Intelligence, 2004

This work deals with fitting 2D and 3D implicit polynomials (IPs) to 2D curves and 3D surfaces, respectively. The zero-set of the polynomial is determined by the IP coefficients and describes the data. The polynomial fitting algorithms proposed in this paper aim at reducing the sensitivity of the polynomial to coefficient errors. Errors in coefficient values may result from numerical calculations when solving the fitting problem, or from coefficient quantization. It is demonstrated that reducing this sensitivity also improves the fitting tightness and stability of the two proposed algorithms on noisy data, as compared to existing algorithms such as the well-known 3L and gradient-one algorithms. The development of the proposed algorithms is based on an analysis of the sensitivity of the zero-set to small coefficient changes, and on minimizing a bound on the maximal error for one algorithm and minimizing the error variance for the second. Simulation results show that the proposed algorithms provide a significant reduction in fitting errors, particularly when fitting noisy data of complex shapes with high order polynomials, as compared to the performance obtained by the abovementioned existing algorithms.
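
The zero-set sensitivity that drives both algorithms can be estimated to first order: perturbing the coefficients by δc moves a zero-set point x by roughly |δc · m(x)| / ||∇f(x)||, where m(x) is the vector of monomials. The sketch below evaluates this for an assumed conic basis; it illustrates the sensitivity analysis only, not the paper's two fitting algorithms.

```python
import numpy as np

# Illustrative IP: f(x, y) = c . (1, x, y, x^2, xy, y^2), a conic (assumption).
def monomials(x, y):
    return np.array([1.0, x, y, x * x, x * y, y * y])

def grad_norm(c, x, y):
    # Gradient of f w.r.t. (x, y) for the conic basis above.
    fx = c[1] + 2 * c[3] * x + c[4] * y
    fy = c[2] + c[4] * x + 2 * c[5] * y
    return np.hypot(fx, fy)

c = np.array([-1.0, 0.0, 0.0, 1.0, 0.0, 1.0])   # unit circle: x^2 + y^2 - 1
dc = 1e-3 * np.ones(6)                           # small coefficient error
x, y = 1.0, 0.0                                  # a point on the zero set
shift = abs(dc @ monomials(x, y)) / grad_norm(c, x, y)
print(shift)   # first-order estimate of how far the zero set moves at (1, 0)
```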