Eleftheria Malihoutsaki | University of Patras
Papers by Eleftheria Malihoutsaki
In this paper, we deal with the problem of solving systems of nonlinear equations. Most of the existing methods for this problem require precise function and derivative values, so it is useful to develop methods that work well when this information is unavailable or computationally expensive. To contribute to this area, we introduce a new strategy based on replacing the function values in Newton's method with approximated quantities, making Newton's method directly free of function evaluations and thus ideal for problems with imprecise function values. From another point of view, it may be regarded as an application of Newton's method to a new approximated system, equivalent to the original one. This approach is based on properly selected points, named pivot points, which are low-cost since they are extracted via a componentwise sign-function-based technique. The quadratic convergence of the new algorithm is proven and the results are promising.
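For context, the classical Newton iteration that the proposed method builds on is sketched below. The pivot-point approximation of the function values is specified in the paper itself, not in this summary, so the sketch uses exact function evaluations and an illustrative test system.

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-10, max_iter=50):
    """Classical Newton iteration x_{k+1} = x_k - J(x_k)^{-1} F(x_k).

    The paper's method replaces the exact values F(x_k) with
    approximations built from pivot points; this baseline uses
    exact function values for illustration only.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        # Solve J(x_k) s = -F(x_k) rather than forming the inverse.
        s = np.linalg.solve(J(x), -Fx)
        x = x + s
    return x

# Illustrative 2x2 test system (not from the paper).
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
print(newton_system(F, J, x0=[1.0, 0.0]))   # ~ [0.7071, 0.7071]
```

As described in the abstract, the contribution of the paper is to replace the F(x) evaluations in this loop with low-cost approximated quantities, leaving the rest of the iteration unchanged.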
In unconstrained optimization, iterative methods are used to locate optimal points. By considering each iteration of the optimization process as a time step, the sequence of iterates may be viewed as a time series. In finance, there are many techniques for forecasting future values from historical data. In this paper, a novel hybrid method is introduced that properly combines the above techniques. This hybrid approach may improve the performance of an optimization technique, especially in cases of slow convergence. Preliminary results are discussed.
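As a minimal illustration of the idea, with an assumed objective and step size that are not taken from the paper, the iterates of a simple descent method can be recorded so that each coordinate's history forms a time series; a sketch of the forecasting step itself follows the fuller description below.

```python
import numpy as np

# Iterates of a simple gradient-descent run on an illustrative quadratic
# f(x) = x0**2 + 10*x1**2 (not from the paper); each coordinate's history
# then forms a time series that a forecasting model could extrapolate.
grad = lambda x: np.array([2.0 * x[0], 20.0 * x[1]])

x = np.array([5.0, 5.0])
history = [x.copy()]
for _ in range(30):
    x = x - 0.04 * grad(x)          # fixed step size, purely illustrative
    history.append(x.copy())

series = np.array(history)          # shape (31, 2): one series per coordinate
print(series[:, 0])                 # slowly converging first coordinate
```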
Nonlinear problems are of interest to engineers, physicists and mathematicians because most physical systems are inherently nonlinear in nature. A large class of methods for the numerical solution of a system of nonlinear equations is based on iterative procedures. A feature of these repetitive processes is that they do not exploit information from the path traced out by some or all of the previous points that finally leads to the solution of the system. Moreover, this sequence of points, generated by an iterative process, depends crucially on the nature of the nonlinear equations involved and on the iterative method used.
A time series is a sequence of data points, typically measured at successive times spaced at uniform intervals. Time series forecasting is the use of a model to forecast future events based on known past events, that is, to predict data points before they are measured. Models for time series forecasting include the autoregressive (AR) models, the integrated (I) models, and the moving average (MA) models. These three classes depend linearly on previous data points. Combinations of these techniques produce the autoregressive moving average (ARMA) and autoregressive integrated moving average (ARIMA) models.
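For reference, the ARMA(p, q) model mentioned above combines both linear dependencies, with white-noise terms $\varepsilon_t$:

```latex
x_t = c + \sum_{i=1}^{p} \varphi_i \, x_{t-i} + \varepsilon_t + \sum_{j=1}^{q} \theta_j \, \varepsilon_{t-j}
```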
Inspired by the idea of time series forecasting, in this paper we treat the iterates produced by any iterative process over its last m steps as the known past events for the forecasting model. The proposed approach produces the next iterate of the iterative process, for every coordinate, in a hybrid way, combining the iterative process with the time series forecasting model. Since time series forecasting is used as an intermediate step within the iterative process, the complexity and computational cost of the forecasting model must be taken into account; thus, the simple ARMA models seem to be a good choice. Moreover, to avoid recalculating the ARMA coefficients at every iteration, they are recalculated only when necessary.
Preliminary numerical examples on well-known test problems are promising.
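A minimal sketch of such a hybrid step is given below, assuming a simple least-squares AR(p) fit per coordinate and an equal-weight blend of the iterative step with the forecast; the paper's actual ARMA model, blending rule and coefficient-update criterion are not detailed in this summary.

```python
import numpy as np

def ar_forecast(history, p=3):
    """One-step AR(p) forecast via least squares on a 1-D history.

    Illustrative stand-in for the ARMA forecasting step; the paper's
    model choice and coefficient-update rule are not given in this summary.
    """
    h = np.asarray(history, dtype=float)
    if len(h) <= p:
        return float(h[-1])
    # Design matrix rows: [x_{t-1}, ..., x_{t-p}, 1]  ->  target x_t
    rows = [np.r_[h[t - p:t][::-1], 1.0] for t in range(p, len(h))]
    coef, *_ = np.linalg.lstsq(np.array(rows), h[p:], rcond=None)
    # Forecast from the most recent p values.
    return float(np.r_[h[-p:][::-1], 1.0] @ coef)

def hybrid_step(iter_step, history, weight=0.5):
    """Blend the iterative process's next point with a per-coordinate
    forecast; the 50/50 weight is a placeholder, not the paper's rule."""
    hist = np.array(history, dtype=float)          # shape (k, n)
    x_iter = np.asarray(iter_step(hist[-1]), dtype=float)
    x_pred = np.array([ar_forecast(hist[:, j]) for j in range(hist.shape[1])])
    return weight * x_iter + (1.0 - weight) * x_pred
```

In this sketch the coefficients are refitted at every call; following the abstract, one would instead refit them only when some criterion indicates that it is necessary.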
The problem of solving systems of nonlinear equations is of great significance in the fields of computational science and engineering. Newton's method is the best-known method for solving such problems, with quadratic convergence ensured whenever a sufficiently good initial approximation and a nonsingular Jacobian matrix are available. These requirements may restrict its application, and thus several interesting modifications of Newton's method have been proposed in this active area of research.
In this work, we propose two modifications of Newton's method, mainly to cope with the above difficulties. The first proposed method may be considered as belonging to the class of quasi-Newton methods, being based on updating the Jacobian matrix through newly defined points. The second method is an improvement of the first one for handling problems with imprecise function values. For both methods, it is proved that they retain Newton's important advantage of quadratic convergence. Under some assumptions, an enlargement of the convergence region may also be achieved.
The first preliminary numerical results show the superiority of the proposed methods over Newton's method, regarding the number of iterations, the total computational cost and the dependence on the initial points. Their efficiency is also tested successfully in cases of a singular Jacobian matrix.
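The abstract does not spell out the Jacobian update through newly defined points; purely as a reference point for the quasi-Newton class it mentions, a sketch of the classical Broyden rank-one update is given below (this is the standard method, not the authors' new one).

```python
import numpy as np

def broyden(F, x0, B0, tol=1e-10, max_iter=100):
    """Broyden's (good) method: a classical quasi-Newton scheme that
    updates an approximate Jacobian B_k by a rank-one correction,
    shown here only as a reference point for the abstract above."""
    x, B = np.asarray(x0, dtype=float), np.asarray(B0, dtype=float)
    Fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)        # quasi-Newton step
        x_new = x + s
        Fx_new = F(x_new)
        y = Fx_new - Fx
        # Rank-one update: B_{k+1} = B_k + (y - B_k s) s^T / (s^T s)
        B = B + np.outer(y - B @ s, s) / (s @ s)
        x, Fx = x_new, Fx_new
    return x
```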
Journal of Computational and Applied Mathematics, 2009
Several methods have been proposed to solve systems of nonlinear equations. Among them, Newton's method holds a prominent position. Recently, we proposed a Newton-type method for managing problems with inaccurate function values or high computational cost. Due to the existence of many such problems in real-life applications and the promising results of the above method, we pursue this goal further and introduce a new, improved version of it. To this end, we alter the above method so as, on the one hand, to accelerate it and reduce its computational cost, by requiring even less information per iteration, and, on the other hand, to retain its important advantages. These are its quadratic convergence, its good behavior in singular and ill-conditioned cases of the Jacobian matrix and, of course, its suitability for problems with imprecise function values. The efficiency of the new method is demonstrated by numerical applications.