On Preconditioners and Splitting in the Interval Gauss–Seidel Method
Related papers
Preconditioners for the Interval Gauss-Seidel Method
1990
Interval Newton methods in conjunction with generalized bisection can form the basis of algorithms that find all real roots within a specified box X ⊂ R^n of a system of nonlinear equations F(X) = 0 with mathematical certainty, even in finite-precision arithmetic. In such methods, the system F(X) = 0 is transformed into a linear interval system 0 = F(M) + F′(X)(X − M); if interval arithmetic is then used to bound the solutions of this system, the resulting box X̃ contains all roots of the nonlinear system. We may use the interval Gauss–Seidel method to find these solution bounds. In order to increase the overall efficiency of the interval Newton / generalized bisection algorithm, the linear interval system is multiplied by a preconditioner matrix Y before the interval Gauss–Seidel method is applied. Here, we review results we have obtained over the past few years concerning the computation of such preconditioners. We emphasize their importance and connecting relationships, and we cite references for the underlying elementary theory and other details.
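The interval Gauss–Seidel sweep these abstracts describe can be sketched concretely. The version below operates on an already-preconditioned system (i.e., the coefficient matrix is Y·F′(X) and the right-hand side is the corresponding preconditioned vector), represents intervals as (lo, hi) pairs, and assumes no diagonal interval contains zero. All function names are illustrative, not from the papers:

```python
# A minimal sketch of one interval Gauss-Seidel sweep on a (preconditioned)
# interval linear system A x = b. Intervals are (lo, hi) tuples; the sweep
# assumes 0 does not lie in any diagonal interval A[i][i].

def isub(a, b):                       # interval subtraction
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):                       # interval multiplication
    p = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(p), max(p))

def idiv(a, b):                       # interval division, 0 not in b
    assert b[0] > 0 or b[1] < 0
    return imul(a, (1.0 / b[1], 1.0 / b[0]))

def iintersect(a, b):                 # None signals an empty intersection
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def gauss_seidel_sweep(A, b, x):
    """One sweep of interval Gauss-Seidel on A x = b, refining the box x."""
    n = len(x)
    for i in range(n):
        s = b[i]
        for j in range(n):
            if j != i:                # subtract off-diagonal contributions
                s = isub(s, imul(A[i][j], x[j]))
        xi = iintersect(x[i], idiv(s, A[i][i]))
        if xi is None:
            return None               # the box contains no solution
        x[i] = xi                     # updated component used immediately
    return x

A = [[(2.0, 3.0), (0.0, 1.0)],
     [(0.0, 1.0), (2.0, 3.0)]]
b = [(0.0, 2.0), (0.0, 2.0)]
box = gauss_seidel_sweep(A, b, [(-10.0, 10.0), (-10.0, 10.0)])
# one sweep tightens the box to [(-5.0, 6.0), (-3.0, 3.5)]
```

The key property the abstracts rely on is visible in the loop: each update uses only one row at a time (the componentwise character of the method), and the intersection step either tightens the box or proves it contains no solution.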
A Review of Preconditioners for the Interval Gauss-Seidel Method
1991
Interval Newton methods in conjunction with generalized bisection can form the basis of algorithms that find all real roots within a specified box X ⊂ R^n of a system of nonlinear equations F(X) = 0 with mathematical certainty, even in finite-precision arithmetic. In such methods, the system F(X) = 0 is transformed into a linear interval system 0 = F(M) + F′(X)(X − M).
International Journal of Physical Sciences, 2011
We discuss the Hansen-Sengupta operator in the context of circular interval arithmetic for the algebraic inclusion of zeros of interval nonlinear systems of equations. We demonstrate the effect of repeatedly applying, at each iteration cycle, preconditioners given by inverses of the midpoint interval matrices to the well-known Trapezoidal Newton method, taking the work of Shokri (2008) as our major tool of investigation. We show that the Trapezoidal interval Newton method with the inverse midpoint interval matrix as preconditioner is not an H-continuous map and that the Baire category argument fails to hold in the sense of Aguelov et al. (2007). In particular, our numerical example produced not only overestimated results but also results that are not finitely bounded, which we compare with results computed previously by Uwamusi.
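The inverse-midpoint preconditioner referred to above is easy to state concretely: take the midpoint of each coefficient interval, invert the resulting point matrix, and multiply the interval system through by that inverse. A minimal 2×2 sketch, with intervals as (lo, hi) pairs and all names illustrative:

```python
# Sketch of the inverse-midpoint preconditioner Y = m(A)^(-1) for a 2x2
# interval matrix A, and its application to form the preconditioned
# interval matrix Y*A. Intervals are (lo, hi) tuples.

def mid(a):                           # midpoint of an interval
    return 0.5 * (a[0] + a[1])

def inv2(M):                          # inverse of a 2x2 point matrix
    det = M[0][0]*M[1][1] - M[0][1]*M[1][0]
    return [[ M[1][1]/det, -M[0][1]/det],
            [-M[1][0]/det,  M[0][0]/det]]

def pmul(c, a):                       # point scalar times interval
    lo, hi = c*a[0], c*a[1]
    return (min(lo, hi), max(lo, hi))

def iadd(a, b):                       # interval addition
    return (a[0] + b[0], a[1] + b[1])

def precondition(A):
    """Return Y = m(A)^(-1) and the preconditioned interval matrix Y*A."""
    Y = inv2([[mid(A[i][j]) for j in range(2)] for i in range(2)])
    YA = [[(0.0, 0.0)] * 2 for _ in range(2)]
    for i in range(2):
        for j in range(2):
            s = (0.0, 0.0)
            for k in range(2):        # row of Y times column of A
                s = iadd(s, pmul(Y[i][k], A[k][j]))
            YA[i][j] = s
    return Y, YA
```

After preconditioning, the diagonal intervals of Y·A are centered near 1, which is what makes the subsequent interval Gauss-Seidel (or Hansen-Sengupta) step effective when the interval widths are modest; the abstract's point is that for wide intervals this choice can fail badly.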
Interval linear constraint solving using the preconditioned interval gauss-seidel method
1995
We propose the use of the preconditioned interval Gauss-Seidel method as the backbone of an efficient linear equality solver in a CLP (Interval) language. The method, as originally designed, works only on linear systems with square coefficient matrices. Even imposing such a restriction, a naive incorporation of the traditional preconditioning algorithm in a CLP language incurs a high worst-case time complexity of O(n^4), where n is the number of variables in the linear system.
A general iterative sparse linear solver and its parallelization for interval Newton methods
Reliable Computing, 1995
Interval Newton/Generalized Bisection methods reliably find all numerical solutions within a given domain. Both computational complexity analysis and numerical experiments have shown that solving the corresponding interval linear system generated by interval Newton's methods can be computationally expensive (especially when the nonlinear system is large).
Computing, 2008
Finding bounding sets to solutions to systems of algebraic equations with uncertainties in the coefficients, as well as rapidly but rigorously locating all solutions to nonlinear systems or global optimization problems, involves bounding the solution sets to systems of equations with wide interval coefficients. In many cases, singular systems are admitted within the intervals of uncertainty of the coefficients, leading to unbounded solution sets with more than one disconnected component. This, combined with the fact that computing exact bounds on the solution set is NP-hard, limits the range of techniques available for bounding the solution sets for such systems. However, its componentwise nature and other properties make the interval Gauss-Seidel method suited to computing meaningful bounds in a predictable amount of computing time. For this reason, we focus on the interval Gauss-Seidel method. In particular, we study and compare various preconditioning techniques we have developed over the years but not fully investigated. Based on a detailed study of the preconditioners on some simple, specially designed small systems, we propose two heuristic algorithms, then study the behavior of the preconditioners on some larger, randomly generated systems, as well as a small selection of systems from the Matrix Market collection.
On the Selection of a Transversal to Solve Nonlinear Systems with Interval Arithmetic
Lecture Notes in Computer Science, 2006
This paper investigates the impact of the selection of a transversal on the speed of convergence of interval methods based on the nonlinear Gauss-Seidel scheme to solve nonlinear systems of equations. It is shown that, in marked contrast with the linear case, such a selection does not speed up the computation in the general case; directions for research on more flexible methods to select projections are then discussed.
Applied Mathematics and Computation, 1983
We introduce an interval Newton method for bounding solutions of systems of nonlinear equations. It entails three subalgorithms.
Approximation on disjoint intervals and its applicability to matrix preconditioning
Complex Variables and Elliptic Equations, 2007
A polynomial preconditioner to an invertible matrix is constructed from the (near) best uniform polynomial approximation to the function 1/x on the eigenvalues of the matrix. The preconditioner is developed using symmetric matrices whose eigenvalues are located in the union of two (disjoint) intervals. The full spectrum of the matrix need not be known in order for the strategy to be applicable. All that is essential is an estimate on the bounds of the larger of the two clusters of eigenvalues. The algorithm, based on the strategy, is shown to be numerically stable with respect to the size of the matrix. In fact it yields, at low cost, an approximation to the inverse of the matrix to within a specified tolerance.
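The practical appeal of such a polynomial preconditioner is that p(A) ≈ A^(-1) can be applied using only matrix-vector products, via Horner's rule. The sketch below uses a truncated geometric series c·Σ_k (I − cA)^k with c = 2/(λ_min + λ_max) as a crude stand-in for the near-best uniform approximation to 1/x that the paper constructs; the spectrum bounds and all names are illustrative assumptions:

```python
# Sketch: applying a polynomial preconditioner y = p(A) v with only
# matrix-vector products. Here p(x) = c * sum_{k=0}^{degree} (1 - c*x)^k,
# c = 2 / (lam_min + lam_max) -- a simple geometric-series approximation
# to 1/x on [lam_min, lam_max], not the near-best approximation on two
# disjoint intervals from the paper.

def matvec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v)))
            for i in range(len(v))]

def apply_poly_precond(A, v, lam_min, lam_max, degree):
    """Compute y = p(A) v by Horner's rule: y <- v + (I - cA) y."""
    c = 2.0 / (lam_min + lam_max)
    y = [0.0] * len(v)
    for _ in range(degree + 1):
        Ay = matvec(A, y)
        y = [v[i] + y[i] - c * Ay[i] for i in range(len(v))]
    return [c * yi for yi in y]

# For A = 2I the series is exact after one term: p(A) v = 0.5 v = A^(-1) v.
A = [[2.0, 0.0], [0.0, 2.0]]
y = apply_poly_precond(A, [2.0, 4.0], lam_min=2.0, lam_max=4.0, degree=3)
```

The degree needed for a given tolerance grows with the spread of the eigenvalue clusters, which is why the paper's construction on the union of two disjoint intervals, rather than one interval spanning both clusters, pays off.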