Marcos Raydan | FCT / Universidade Nova de Lisboa
Papers by Marcos Raydan
Optimization Methods and Software
Journal of Computational and Applied Mathematics
arXiv: Optimization and Control, 2020
Direct MultiSearch (DMS) is a robust and efficient derivative-free optimization algorithm, able to generate approximations to the complete Pareto front of a given multiobjective optimization (MOO) problem. When first (or higher) order derivatives of the different components of the objective function are available, typical approaches for MOO problems are based on generating a single sequence of iterates that converges to a point whose image lies on the Pareto front (one at a time). The purpose of this work is to assess the potential enrichment of adding first-order information, when derivatives are available, to the DMS framework. For that, we describe and analyze several different combined techniques that maintain the search/poll paradigm of DMS, while adding gradient information to the poll step in a suitable way. To properly evaluate the newly proposed schemes, we provide numerical results for a set of benchmark MOO problems, in the form of performance profiles, where c…
Optimization Methods and Software
Derivatives are an important tool for single-objective optimization. In fact, it is commonly accepted that derivative-based methods perform better than derivative-free optimization approaches. In this work, we will show that the same does not always apply to multiobjective derivative-based optimization, when the goal is to compute an approximation to the complete Pareto front of a given problem. The competitiveness of Direct MultiSearch (DMS), a robust and efficient derivative-free optimization algorithm, will be established for derivative-based multiobjective optimization problems, by comparison with MOSQP, a state-of-the-art derivative-based multiobjective optimization solver. We will then assess the potential enrichment of adding first-order information to the DMS framework. Derivatives will be used to prune the positive spanning sets considered at the poll step of the algorithm. The role of ascent directions that conform to the geometry of the nearby feasible region will then be highlighted.
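To make the pruning idea concrete, the following is a minimal sketch of one plausible rule, assuming the gradients of all objective components are available at the poll center: a poll direction is discarded when it is an ascent direction for every objective, since for small steps it cannot produce a point that improves any component. The exact pruning rule, and the handling of feasibility-conforming ascent directions, used in the paper may differ.

    import numpy as np

    def prune_poll_directions(J, directions, eps=0.0):
        """Keep a poll direction d only if it is a descent direction for at least
        one objective, i.e. grad f_j(x)^T d < -eps for some j.  J is the m x n
        Jacobian at the poll center (row j holds grad f_j(x)^T); directions is a
        list of n-vectors from the positive spanning set."""
        return [d for d in directions if np.any(J @ d < -eps)]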
We present a derivative-free separable quadratic modeling and cubic regularization technique for solving smooth unconstrained minimization problems. The derivative-free approach is mainly concerned with building a quadratic model that can be generated by numerical interpolation or by a minimum Frobenius norm approach when the number of points available is not sufficient to build a complete quadratic model. This model plays a key role in generating an approximate gradient vector and Hessian matrix of the objective function at every iteration. We add a specialized cubic regularization strategy to minimize the quadratic model at each iteration, which makes use of separability. We discuss convergence results, including worst-case complexity, of the proposed schemes to first-order stationary points. Some preliminary numerical results are presented to illustrate the robustness of the specialized separable cubic algorithm. AMS Subject Classification: 90C30, 65K05, 90C56, 65D05.
Computational & Applied Mathematics, 2021
In this paper, we establish a relationship between preconditioning strategies for solving the linear systems behind a surface fitting problem using the Powell–Sabin finite element, and choosing an appropriate basis of the spline vector space to which the fitting surface belongs. We study the problem of determining whether good (or effective) preconditioners lead to good bases and vice versa. A preconditioner is considered to be good if it either reduces the condition number of the preconditioned matrix or clusters its eigenvalues, and a basis is considered to be good if it has local support and constitutes a partition of unity. We present some illustrative numerical results which indicate that the bases associated with well-known good preconditioners do not, in general, have the expected good properties. Similarly, the preconditioners obtained from well-known good bases do not exhibit the expected good numerical properties. Nevertheless, taking advantage of the established relationship, we…
Computers & Mathematics with Applications, 2016
We present inverse-free recursive multiresolution algorithms for data approximation problems based on the minimization of energy functionals. During the multiresolution process, a linear system needs to be solved at each resolution level, which can be done with direct or iterative methods. Numerical results are reported, using the sparse Cholesky factorization, for two applications: one concerning the localization of regions in which the energy of a given surface is mostly concentrated, and another regarding noise reduction of a given dataset. In addition, for large-scale data approximation problems that require a very fine resolution, we discuss the use of the Preconditioned Conjugate Gradient (PCG) iterative method coupled with a specialized monolithic preconditioner, for which one preconditioner is built for the highest resolution level and then the corresponding blocks of that preconditioner are used as preconditioners for the subsequent lower levels.
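For reference, the PCG building block used at each resolution level can be summarized as below; this is the textbook preconditioned conjugate gradient iteration, with the preconditioner application left abstract (in the approach above it would be a solve with the block of the monolithic preconditioner associated with the current level).

    import numpy as np

    def pcg(A, b, apply_prec, x0=None, tol=1e-8, max_iter=1000):
        """Textbook preconditioned conjugate gradient for A x = b with A symmetric
        positive definite.  apply_prec(r) should return M^{-1} r, e.g. a solve with
        the preconditioner block for the current resolution level."""
        x = np.zeros_like(b) if x0 is None else x0.copy()
        r = b - A @ x
        z = apply_prec(r)
        p = z.copy()
        rz = r @ z
        for _ in range(max_iter):
            Ap = A @ p
            alpha = rz / (p @ Ap)
            x = x + alpha * p
            r = r - alpha * Ap
            if np.linalg.norm(r) < tol * np.linalg.norm(b):
                break
            z = apply_prec(r)
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x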
Computational Optimization and Applications, 2019
We present a new algorithm for solving large-scale unconstrained optimization problems that uses cubic models, matrix-free subspace minimization, and secant-type parameters for defining the cubic terms. We also propose and analyze a specialized trust-region strategy to minimize the cubic model on a properly chosen low-dimensional subspace, which is built at each iteration using the Lanczos process. For the convergence analysis we present, as a general framework, a model trust-region subspace algorithm with variable metric, and we establish asymptotic as well as complexity convergence results. Preliminary numerical results, on some test functions and also on the well-known disk packing problem, are presented to illustrate the performance of the proposed scheme when solving large-scale problems.
Computational Optimization and Applications, 2020
The delayed weighted gradient method, recently introduced in [13], is a low-cost gradient-type method that exhibits a surprising, and perhaps unexpected, fast convergence behavior that competes favorably with the well-known conjugate gradient method for the minimization of convex quadratic functions. In this work, we establish several orthogonality properties that add understanding to the practical behavior of the method, including its finite termination. We show that if the n × n real Hessian matrix of the quadratic function has only p < n distinct eigenvalues, then the method terminates in p iterations. We also establish an optimality condition, concerning the gradient norm, that motivates the use of this novel scheme when low precision is required for the minimization of non-quadratic functions.
Journal of Mathematical Chemistry, 2017
The computation of the subspace spanned by the eigenvectors associated with the N lowest eigenvalues of a large symmetric matrix (or, equivalently, the projection matrix onto that subspace) is a difficult numerical linear algebra problem when the dimensions involved are very large. These problems appear when one employs the self-consistent-field fixed-point algorithm or its variations for electronic structure calculations, which requires repeated solutions of the problem for different data, in an iterative context. The naive use of consolidated packages such as ARPACK does not lead to practical solutions in large-scale cases. In this paper we combine and enhance well-known iterative purification schemes with a specialized use of ARPACK (or any other eigen-package) to address these large-scale challenging problems.
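The iterative purification schemes referred to above include, as the best-known example, the classical McWeeny iteration, which drives the eigenvalues of an approximate spectral projector toward 0 and 1. A minimal sketch, assuming a symmetric approximate projector P is already available (the enhanced schemes of the paper combine this kind of iteration with ARPACK in a more elaborate way):

    import numpy as np

    def mcweeny_purification(P, tol=1e-10, max_iter=100):
        """Refine a symmetric approximate spectral projector P (eigenvalues near
        0 or 1) with the McWeeny iteration P <- 3 P^2 - 2 P^3."""
        for _ in range(max_iter):
            P2 = P @ P
            P_next = 3.0 * P2 - 2.0 * (P2 @ P)
            if np.linalg.norm(P_next - P, ord='fro') < tol:
                return P_next
            P = P_next
        return P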
EURO Journal on Computational Optimization, 2017
To solve nonsmooth unconstrained minimization problems, we combine the spectral choice of step length with two well-established subdifferential-type schemes: the gradient sampling method and the simplex gradient method. We focus on the interesting case in which the objective function is continuously differentiable almost everywhere, and it is often not differentiable at minimizers. In the case of the gradient sampling method, we also present a simple differentiability test that allows us to use the exact gradient direction as frequently as possible, and to build a stochastic subdifferential direction only if the test fails. The proposed spectral gradient sampling method is combined with a monotone line search globalization strategy. On the other hand, the simplex gradient method is a direct search method that only requires func…
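The "spectral choice of step length" mentioned above usually refers to the Barzilai–Borwein step; a minimal sketch, with simple safeguards, is given below (the precise safeguarding used in the paper may differ):

    import numpy as np

    def spectral_step(x_prev, x_curr, g_prev, g_curr, lam_min=1e-10, lam_max=1e10):
        """Barzilai-Borwein (spectral) step length lambda = s^T s / s^T y, where
        s = x_curr - x_prev and y = g_curr - g_prev, safeguarded to [lam_min, lam_max]."""
        s = x_curr - x_prev
        y = g_curr - g_prev
        sty = s @ y
        if sty <= 0.0:                 # curvature safeguard
            return lam_max
        return min(max((s @ s) / sty, lam_min), lam_max)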
Applied Mathematics and Computation, 2013
Although programming is a difficult and creative activity, useful strategies and heuristics exist for solving programming problems. We analyse some of the most fundamental and productive among them; their knowledge and conscious application should help programmers in constructing programs, both by stimulating their thinking and by helping them to recognise classical situations. The precise framework for the analysis is provided by the specification language Z. For editorial reasons the description in some sections of this paper has had to be curtailed.
SIAM Journal on Numerical Analysis, 1998
A generalization of the steepest descent and other methods for solving a large-scale symmetric positive definite system Ax = b is presented. Given a positive integer m, the new iteration is given by x_{k+1} = x_k − λ(x_{ν(k)})(Ax_k − b), where λ(x_{ν(k)}) is the steepest descent step at a previous iteration ν(k) ∈ {k, k−1, …, max{0, k−m}}. The global convergence to the solution of the problem is established under a more general framework, and numerical experiments are performed that suggest that some strategies for the choice of ν(k) give rise to efficient methods for obtaining approximate solutions of the system.
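A minimal sketch of this iteration follows, using the Cauchy (steepest descent) step λ(x) = g(x)^T g(x) / (g(x)^T A g(x)) evaluated at a delayed iterate; the cyclic choice of ν(k) below is only illustrative, since the paper studies several strategies:

    import numpy as np

    def sd_with_retards(A, b, x0, m=4, tol=1e-8, max_iter=10000):
        """Iterate x_{k+1} = x_k - lam_k (A x_k - b), where lam_k is the Cauchy step
        computed at a delayed residual r_{nu(k)} with nu(k) in {k, ..., max(0, k-m)}."""
        x = x0.copy()
        residuals = []                              # residuals of the last m+1 iterates
        for k in range(max_iter):
            r = A @ x - b                           # gradient of 0.5 x^T A x - b^T x
            if np.linalg.norm(r) < tol:
                break
            residuals.append(r)
            if len(residuals) > m + 1:
                residuals.pop(0)
            rd = residuals[-(1 + k % len(residuals))]   # delayed residual (cyclic choice)
            lam = (rd @ rd) / (rd @ (A @ rd))
            x = x - lam * r
        return x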
Bulletin of Computational Applied Mathematics, 2013
We address the issue of approximating the pseudoinverse of the coefficient matrix for dynamically building preconditioning strategies for the numerical solution of large dense linear least-squares problems. The new preconditioning strategies are embedded into simple and well-known iterative schemes that avoid the use of the usually ill-conditioned normal equations. We analyze a scheme to approximate the pseudoinverse, based on the Schulz iterative method, and also different iterative schemes, based on extensions of Richardson's method and the conjugate gradient method, that are suitable for preconditioning strategies. We present preliminary numerical results to illustrate the advantages of the proposed schemes.
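The Schulz-type scheme mentioned above is, in its classical form, the iteration X_{k+1} = 2 X_k − X_k A X_k, which converges to the Moore–Penrose pseudoinverse when started from X_0 = α A^T with α small enough. A minimal sketch (the dynamic preconditioning strategies of the paper build on, but are not identical to, this basic iteration):

    import numpy as np

    def schulz_pseudoinverse(A, tol=1e-12, max_iter=200):
        """Approximate the pseudoinverse of a rectangular A with the Schulz iteration
        X_{k+1} = 2 X_k - X_k A X_k, started at X_0 = A^T / ||A||_2^2."""
        X = A.T / np.linalg.norm(A, 2) ** 2
        for _ in range(max_iter):
            X_next = 2.0 * X - X @ (A @ X)
            if np.linalg.norm(X_next - X, ord='fro') <= tol * np.linalg.norm(X, ord='fro'):
                return X_next
            X = X_next
        return X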
We focus on inverse preconditioners based on minimizing F(X) = 1 − cos(XA, I), where XA is the preconditioned matrix and A is symmetric and positive definite. We present and analyze gradient-type methods to minimize F(X) on a suitable compact set. For that, we use the geometrical properties of the non-polyhedral cone of symmetric and positive definite matrices, and also the special properties of F(X) on the feasible set. Preliminary and encouraging numerical results are also presented, in which dense and sparse approximations are included.
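Here the cosine is naturally read in the Frobenius inner product, cos(B, C) = <B, C>_F / (||B||_F ||C||_F), so that F(X) measures how far the preconditioned matrix XA is from being aligned with the identity. A small sketch under that assumption:

    import numpy as np

    def objective_F(X, A):
        """F(X) = 1 - cos(XA, I) with the Frobenius inner product; for C = I this is
        1 - trace(XA) / (||XA||_F * sqrt(n))."""
        n = A.shape[0]
        B = X @ A
        return 1.0 - np.trace(B) / (np.linalg.norm(B, ord='fro') * np.sqrt(n))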
We extend the geometrical inverse approximation approach for solving linear least-squares problems. For that, we focus on the minimization of 1 − cos(X(A^T A), I), where A is a given rectangular coefficient matrix and X is the approximate inverse. In particular, we adapt the recently published simplified gradient-type iterative scheme MinCos to the least-squares scenario. In addition, we combine the generated convergent sequence of matrices with well-known acceleration strategies based on recently developed matrix extrapolation methods, and also with some deterministic and heuristic acceleration schemes which are based on affecting, in a convenient way, the steplength at each iteration. A set of numerical experiments, including large-scale problems, is presented to illustrate the performance of the different acceleration strategies.
We discuss different variants of Newton's method for computing the pth root of a given matrix. A suitable implementation is presented for solving the Sylvester equation that appears at every Newton iteration, via Kronecker products. This approach is quadratically convergent and stable, but too expensive in computational cost. In contrast, we propose and analyze some specialized versions that exploit the commutation of the iterates with the given matrix. These versions are relatively inexpensive but have either stability problems or stagnation problems when good precision is required. Hybrid versions are presented to take advantage of the best features of both approaches. Preliminary and encouraging numerical results are presented for p = 3 and p = 5.
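One of the commuting-iterates variants alluded to above is, in its simplest form, the classical Newton iteration X_{k+1} = ((p−1) X_k + X_k^{1−p} A)/p with X_0 = I; a minimal sketch follows (the stabilized and hybrid versions of the paper differ in detail, and this basic form is known to be numerically unstable in general, as the abstract notes):

    import numpy as np

    def newton_pth_root(A, p, tol=1e-10, max_iter=100):
        """Commuting Newton iteration for the principal p-th root of A:
        X_{k+1} = ((p-1) X_k + X_k^{1-p} A) / p, started at X_0 = I."""
        n = A.shape[0]
        X = np.eye(n)
        for _ in range(max_iter):
            Xinv_pow = np.linalg.matrix_power(np.linalg.inv(X), p - 1)   # X^{1-p}
            X_next = ((p - 1) * X + Xinv_pow @ A) / p
            if np.linalg.norm(X_next - X, ord='fro') <= tol * np.linalg.norm(X, ord='fro'):
                return X_next
            X = X_next
        return X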
We introduce a family of weighted conjugate-gradient-type methods, for strictly convex quadratic functions, whose parameters are determined by a minimization model based on a convex combination of the objective function and its gradient norm. This family includes the classical linear conjugate gradient method and the recently published delayed weighted gradient method as the extreme cases of the convex combination. The inner cases produce a merit function that offers a compromise between function-value reduction and stationarity, which is convenient for real applications. We show that each one of the infinitely many members of the family exhibits q-linear convergence to the unique solution. Moreover, each one of them enjoys finite termination and an optimality property related to the combined merit function. In particular, we prove that if the n×n Hessian of the quadratic function has p < n different eigenvalues, then each member of the family obtains the unique global minimizer i…
RAIRO - Operations Research
Solving nonlinear programming problems usually involves difficulties in obtaining a starting point that produces convergence to a local feasible solution for which the objective function value is sufficiently good. A novel approach is proposed, combining metaheuristic techniques with modern deterministic optimization schemes, with the aim of solving a sequence of penalized related problems to generate convenient starting points. The metaheuristic ideas are used to choose the penalty parameters associated with the constraints, and for each set of penalty parameters a deterministic scheme is used to evaluate a properly chosen metaheuristic merit function. Based on this starting-point approach, we describe two different strategies for solving the nonlinear programming problem. We illustrate the properties of the combined schemes on three nonlinear programming benchmark-test problems, and also on the well-known and hard-to-solve disk-packing problem, which possesses a huge amount of local-non…
Análisis Numérico: teoría y práctica, Aug 2017
Numerical analysis is the scientific discipline concerned with proposing and analyzing algorithms or methods for solving problems of continuous mathematics, especially those problems that cannot be solved with analytical or closed-form formulas. In this context, by problems of continuous mathematics we mean problems in which the variables are real or complex and can therefore take an uncountably infinite number of possible values. Numerical analysis is an old discipline that, until the mid-twentieth century, developed with the manual instruments of each era and did not depend on the presence of modern computers; today, however, its field of action has grown significantly, since digital computers can be used to solve problems with a very large number of variables. When dealing with real and complex numbers on a computer, the impossibility of representing them exactly adds a new aspect to numerical analysis: its algorithms, and their analysis, must take into account the difficulty of having to approximate the numbers they use. In this book we study a wide variety of numerical algorithms for solving different problems of continuous mathematics, emphasizing those that arise frequently in applications. In particular, we consider the solution of linear systems by direct or iterative methods, the solution of nonlinear equations in one or many variables, the problem of interpolating a given set of data, data fitting by linear and nonlinear least squares, the estimation of eigenvalues and eigenvectors, and the solution of differential equations, among other topics.