S. Lucidi | Università degli Studi "La Sapienza" di Roma

Papers by S. Lucidi

Research paper thumbnail of A multi-objective DIRECT algorithm for ship hull optimization

Computational Optimization and Applications, 2017

The paper is concerned with black-box nonlinear constrained multi-objective optimization problems. Our interest is the definition of a multi-objective deterministic partition-based algorithm. The main target of the proposed algorithm is the solution of a real ship hull optimization problem. To this purpose, and in pursuit of an efficient method, we develop a hybrid algorithm by coupling a multi-objective DIRECT-type algorithm with an efficient derivative-free local algorithm. The results obtained on a set of "hard" nonlinear constrained multi-objective test problems show the viability of the proposed approach. Results on a hull-form optimization of a high-speed catamaran (sailing in head waves in the North Pacific Ocean) are also presented. In order to consider a real ocean environment, stochastic sea state and speed are taken into account. The problem is formulated as a multi-objective optimization aimed at (i) the reduction of the expected value of the mean total resistance in irregular head waves, at variable speed, and (ii) the increase of the ship operability with respect to a set of motion-related constraints. We show that the hybrid method performs well on this industrial problem as well.

Research paper thumbnail of A Linesearch-Based Derivative-Free Approach for Nonsmooth Constrained Optimization

SIAM Journal on Optimization, 2014

In this paper, we propose new linesearch-based methods for nonsmooth constrained optimization problems when first-order information on the problem functions is not available. In the first part, we describe a general framework for bound-constrained problems and analyze its convergence toward stationary points, using the Clarke-Jahn directional derivative. In the second part, we consider inequality constrained optimization problems where both objective function and constraints can possibly be nonsmooth. In this case, we first split the constraints into two subsets: difficult general nonlinear constraints and simple bound constraints on the variables. Then, we use an exact penalty function to tackle the difficult constraints and we prove that the original problem can be reformulated as the bound-constrained minimization of the proposed exact penalty function. Finally, we use the framework developed for the bound-constrained case to solve the penalized problem. Moreover, we prove that, under standard assumptions on the search directions, every accumulation point of the generated sequence of iterates is a stationary point of the original constrained problem. In the last part of the paper, we report extended numerical results on both bound-constrained and nonlinearly constrained problems, showing that our approach is promising when compared to some state-of-the-art codes from the literature.
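
A minimal sketch of the two ingredients the abstract describes, an exact penalty reformulation and a derivative-free linesearch over the coordinate directions within the bounds. All names (`exact_penalty`, `coordinate_linesearch_box`, the tolerances) are illustrative assumptions of this sketch, not the paper's code, and the stepsize expansion used by the actual method is omitted.

```python
import numpy as np

def exact_penalty(f, g_list, eps):
    """Nonsmooth exact penalty: objective plus (1/eps) times the total constraint violation."""
    def P(x):
        return f(x) + (1.0 / eps) * sum(max(0.0, gi(x)) for gi in g_list)
    return P

def coordinate_linesearch_box(P, x0, lb, ub, alpha0=1.0, tol=1e-6, max_iter=1000, gamma=1e-6):
    """Derivative-free search along +/- coordinate directions, projected onto the box [lb, ub]."""
    x = np.clip(np.asarray(x0, dtype=float), lb, ub)
    alpha = np.full(x.size, alpha0)
    for _ in range(max_iter):
        improved = False
        Px = P(x)
        for i in range(x.size):
            for sign in (1.0, -1.0):
                trial = x.copy()
                trial[i] = np.clip(x[i] + sign * alpha[i], lb[i], ub[i])
                Pt = P(trial)
                if Pt <= Px - gamma * alpha[i] ** 2:   # sufficient-decrease test
                    x, Px, improved = trial, Pt, True
                    break
        if not improved:
            alpha *= 0.5                               # no direction gave decrease: shrink steps
            if alpha.max() < tol:
                break
    return x
```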

Research paper thumbnail of A Derivative-Free Approach to Constrained Multiobjective Nonsmooth Optimization

SIAM Journal on Optimization, 2016

In this work, we consider multiobjective optimization problems with both bound constraints on the variables and general nonlinear constraints, where objective and constraint function values can only be obtained by querying a black box. We define a linesearch-based solution method, and we show that it converges to a set of Pareto stationary points. To this aim, we carry out a theoretical analysis of the problem assuming only Lipschitz continuity of the functions; more specifically, we give new optimality conditions that explicitly take into account the bound constraints, and prove that the original problem is equivalent to a bound constrained problem obtained by penalizing the nonlinear constraints with an exact merit function. Finally, we present the results of numerical experiments on bound constrained and nonlinearly constrained problems, showing that our approach is promising when compared to a state-of-the-art method from the literature.
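
For readers less familiar with the multiobjective setting, the method maintains a list of mutually nondominated points rather than a single iterate. A minimal illustration of the dominance test and the resulting filter follows; the function names are illustrative and not taken from the paper.

```python
import numpy as np

def dominates(fa, fb):
    """fa dominates fb if it is no worse in every objective and strictly better in at least one."""
    fa, fb = np.asarray(fa, float), np.asarray(fb, float)
    return bool(np.all(fa <= fb) and np.any(fa < fb))

def pareto_filter(points, objectives):
    """Keep only the mutually nondominated points (the current estimate of the Pareto front)."""
    values = [objectives(p) for p in points]
    keep = []
    for i, fi in enumerate(values):
        if not any(dominates(fj, fi) for j, fj in enumerate(values) if j != i):
            keep.append(points[i])
    return keep
```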

Research paper thumbnail of A DIRECT-type approach for derivative-free constrained global optimization

Computational Optimization and Applications, 2016

In the field of global optimization, many efforts have been devoted to globally solving bound constrained optimization problems without using derivatives. In this paper we consider global optimization problems where both bound and general nonlinear constraints are present. To solve such problems, we propose the combined use of a DIRECT-type algorithm with a derivative-free local minimization of a nonsmooth exact penalty function. In particular, we define a new DIRECT-type strategy to explore the search space that explicitly takes into account the twofold nature of these optimization problems, i.e., the global optimization of both the objective function and a feasibility measure. We report an extensive numerical experimentation on hard test problems to show the viability of the approach.
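
A minimal sketch of the penalty idea mentioned above: the nonlinear constraints are folded into a nonsmooth exact penalty, which is then handed to a DIRECT solver over the bound constraints. Here SciPy's generic `direct` routine (available in SciPy 1.9+) is used as a stand-in for the paper's DIRECT-type strategy, which additionally treats objective and feasibility measure separately; `f`, `g`, and `eps` are illustrative placeholders.

```python
import numpy as np
from scipy.optimize import direct, Bounds   # generic DIRECT solver, stand-in for the paper's variant

def f(x):                                   # illustrative objective (not from the paper)
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

def g(x):                                   # illustrative constraint, required to satisfy g(x) <= 0
    return x[0] ** 2 + x[1] ** 2 - 1.0

eps = 1e-3                                  # penalty parameter

def penalized(x):
    """Nonsmooth exact penalty of the constrained problem."""
    return f(x) + (1.0 / eps) * max(0.0, g(x))

res = direct(penalized, Bounds([-2.0, -2.0], [2.0, 2.0]), maxfun=2000)
print(res.x, res.fun)
```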

Research paper thumbnail of Smooth Transformation of the Generalized Minimax Problem

Journal of Optimization Theory and Applications, 1997

We consider the generalized minimax problem, that is, the problem of minimizing a function φ(x) = F(g_1(x), ..., g_m(x)), where F is a smooth function and each g_i is the maximum of a finite number of smooth functions. We prove that, under suitable assumptions, it is possible to construct a continuously differentiable exact barrier function, whose minimizers yield the minimizers of the function φ. In this way, the nonsmooth original problem can be solved by usual minimization techniques for unconstrained differentiable functions.
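
For reference, the problem structure described in the abstract can be written out as follows. The epigraph reformulation in the second display is a standard one (assuming F nondecreasing in each argument, as is typical in this setting) and is shown only to illustrate why a smooth reformulation is possible; it is not the paper's exact barrier construction.

```latex
% Generalized minimax problem (indices reconstructed from the abstract)
\min_{x \in \mathbb{R}^n} \; \phi(x) = F\bigl(g_1(x), \ldots, g_m(x)\bigr),
\qquad g_i(x) = \max_{1 \le j \le p_i} g_{ij}(x), \quad i = 1, \ldots, m.

% Standard smooth (epigraph) reformulation, assuming F nondecreasing in each argument:
\min_{x,\, z} \; F(z_1, \ldots, z_m)
\quad \text{s.t.} \quad g_{ij}(x) \le z_i, \;\; j = 1, \ldots, p_i, \;\; i = 1, \ldots, m.
```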

Research paper thumbnail of Solution of the trust region problem via a smooth unconstrained reformulation

Topics in Semidefinite and Interior-Point Methods, 1998


Research paper thumbnail of A nonmonotone GRASP

Mathematical Programming Computation, 2016

A Greedy Randomized Adaptive Search Procedure (GRASP) is an iterative multistart metaheuristic for difficult combinatorial optimization problems. Each GRASP iteration consists of two phases: a construction phase, in which a feasible solution is produced, and a local search phase, in which a local optimum in the neighborhood of the constructed solution is sought. Repeated applications of the construction procedure yield different starting solutions for the local search, and the best overall solution is kept as the result. The GRASP local search applies iterative improvement until a locally optimal solution is found. During this phase, starting from the current solution, an improving neighbor solution is accepted and becomes the new current solution. In this paper, we propose a variant of the GRASP framework that uses a new "nonmonotone" strategy to explore the neighborhood of the current solution.
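
A minimal sketch of the classical (monotone) GRASP loop summarized above. The callables `construct`, `local_search`, and `cost` are problem-specific assumptions of this illustration; a nonmonotone variant, as proposed in the paper, would let the local search phase accept controlled deteriorations instead of only improving moves.

```python
import random

def grasp(construct, local_search, cost, iterations=100, seed=0):
    """Generic GRASP loop: repeat (greedy-randomized construction, local search), keep the best.

    construct(rng)   -> builds a feasible solution with some randomization
    local_search(x)  -> improves x to a local optimum
    cost(x)          -> objective value to be minimized
    """
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(iterations):
        x = construct(rng)          # construction phase
        x = local_search(x)         # local search phase
        c = cost(x)
        if c < best_cost:           # keep the best solution found over all restarts
            best, best_cost = x, c
    return best, best_cost
```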

Research paper thumbnail of Application of derivative-free multi-objective algorithms to reliability-based robust design optimization of a high-speed catamaran in real ocean environment

Engineering Optimization 2014, 2014

A reliability-based robust design optimization (RBRDO) for ship hulls is presented. A real ocean environment is considered, including stochastic sea state and speed. The optimization problem has two objectives: (a) the reduction of the expected value of the total resistance in waves and (b) the increase of the ship operability (reliability). Analysis tools include a URANS solver, uncertainty quantification methods, and metamodels, developed and validated in earlier research. The design space is defined by an orthogonal four-dimensional representation of shape modifications, based on the Karhunen-Loève expansion of free-form deformations of the original hull. The objective of the present paper is the assessment of deterministic derivative-free multi-objective optimization algorithms for the solution of the RBRDO problem, with a focus on multi-objective extensions of the deterministic particle swarm optimization (DPSO) algorithm. Three evaluation metrics assess the proximity of the solutions to a reference Pareto front and their spread.

Research paper thumbnail of Derivative-free global design optimization in ship hydrodynamics by local hybridization

A derivative-free global design optimization of the DTMB 5415 model is presented, using local hybridizations of two global algorithms, DIRECT (DIviding RECTangles) and PSO (Particle Swarm Optimization). The optimization aims at the reduction of the calm-water resistance at Fr = 0.25, using six design variables modifying hull and sonar dome. Simulations are conducted using potential flow with a friction model. Hybrid algorithms show a faster convergence towards the global minimum than the original global methods and are a viable option for design optimization, especially when computationally expensive objective functions are involved. A resistance reduction of 16% was achieved.

Research paper thumbnail of Globally convergent exact penalty algorithms for constrained optimization

Lecture Notes in Control and Information Sciences, 1986

In this paper we define two classes of algorithms for the solution of constrained problems. The first class is based on a continuously differentiable exact penalty function, with the additional inclusion of a barrier term. The second class is based on a similar modification performed on a continuously differentiable exact augmented Lagrangian function. In connection with these functions, an automatic

Research paper thumbnail of A superlinearly convergent primal-dual algorithm model for constrained optimization problems with bounded variables

Optimization Methods and Software, 2000

Research paper thumbnail of A New Version of the Price's Algorithm for Global Optimization

Journal of Global Optimization, 1997

We present an algorithm for finding a global minimum of a multimodal, multivariate function whose evaluation is very expensive, affected by noise, and whose derivatives are not available. The proposed algorithm is a new version of the well-known Price's algorithm, and its distinguishing feature is that it tries to employ as much as possible the information about the objective function obtained at previous iterates.
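
A minimal sketch of a basic controlled random search in the spirit of Price's algorithm, for orientation. The paper's version additionally reuses information gathered at previous iterates when building trial points; this sketch only shows the plain population-plus-reflection step, and all names and parameter values are illustrative.

```python
import numpy as np

def price_crs(f, lb, ub, pop_size=50, max_iter=2000, seed=0):
    """Basic controlled random search (Price-type): keep a set of sample points,
    build a trial point by reflecting one point through the centroid of n others,
    and replace the current worst point whenever the trial improves on it."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    n = lb.size
    S = rng.uniform(lb, ub, size=(pop_size, n))        # working set of sample points
    fS = np.array([f(x) for x in S])
    for _ in range(max_iter):
        idx = rng.choice(pop_size, size=n + 1, replace=False)
        centroid = S[idx[:-1]].mean(axis=0)
        trial = 2.0 * centroid - S[idx[-1]]            # reflect one point through the centroid
        if np.all(trial >= lb) and np.all(trial <= ub):
            f_trial = f(trial)
            worst = int(np.argmax(fS))
            if f_trial < fS[worst]:                    # accept only if it beats the worst point
                S[worst], fS[worst] = trial, f_trial
    best = int(np.argmin(fS))
    return S[best], fS[best]
```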

Research paper thumbnail of A truncated Newton method for constrained optimization

Applied Optimization, 2000

... A truncated Newton method for constrained optimization. Gianni Di Pillo (dipillo@dis.uniroma1.it) ... Given two vectors v, w ∈ IR^p, the operation max{v, w} is intended component-wise, namely max{v, w} denotes the vector with components max{v_i, w_i}. ...

Research paper thumbnail of A Truncated Newton Algorithm for Large Scale Box Constrained Optimization

SIAM Journal on Optimization, 2002

A new method for the solution of minimization problems with simple bounds is presented. Global convergence of a general scheme requiring the approximate solution of a single linear system at each iteration is proved, and a superlinear convergence rate is established without requiring the strict complementarity assumption. The algorithm proposed is based on a simple, smooth unconstrained reformulation of the bound constrained problem and may produce a sequence of points that are not feasible. Numerical results are reported.

Research paper thumbnail of Solving the Trust-Region Subproblem using the Lanczos Method

SIAM Journal on Optimization, 1999


Research paper thumbnail of A Derivative-Free Algorithm for Inequality Constrained Nonlinear Programming via Smoothing of an ℓ∞ Penalty Function

SIAM Journal on Optimization, 2009


Research paper thumbnail of Exploiting negative curvature directions in linesearch methods for unconstrained optimization

Optimization Methods and Software, 2000

In this paper we consider the definition of new efficient linesearch algorithms for solving large-scale unconstrained optimization problems which exploit the local nonconvexity of the objective function. Existing algorithms of this class compute, at each iteration, two search directions: a Newton-type direction, which ensures global and fast convergence, and a negative curvature direction, which enables the iterates to escape from the region of local nonconvexity. A new point is then generated by performing a movement along a curve obtained by combining these two directions. However, the respective scaling of the directions is typically ignored. We propose a new algorithm which aims to avoid the scaling problem by selecting the more promising of the two directions and then performing a step along this direction. The selection is based on a test on the rate of decrease of the quadratic model of the objective function. We prove global convergence to second-order critical points for the new algorithm, and report some preliminary numerical results.
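
A schematic version of the selection test described above: the two candidate directions are compared through the decrease predicted by the quadratic model of the objective, and the more promising one is taken. The actual test in the paper compares suitably scaled steps along the two directions; the function below is only an illustration of the idea, with illustrative names.

```python
import numpy as np

def select_direction(grad, hess, d_newton, d_negcurv):
    """Pick the direction whose quadratic model m(d) = grad'd + 0.5 d'Hess d
    predicts the larger reduction of the objective."""
    def predicted_reduction(d):
        return -(grad @ d + 0.5 * d @ hess @ d)   # positive when the model decreases along d
    if predicted_reduction(d_newton) >= predicted_reduction(d_negcurv):
        return d_newton        # Newton-type direction: global and fast convergence
    return d_negcurv           # negative curvature direction: escape local nonconvexity
```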

Research paper thumbnail of New global optimization methods for ship design problems

Optimization and Engineering, 2009

The aim of this paper is to solve optimal design problems for industrial applications when the objective function value requires the evaluation of expensive simulation codes and its first derivatives are not available. In order to achieve this goal, we propose two new algorithms that draw inspiration from two existing approaches: a filled-function-based algorithm and a Particle Swarm Optimization method. In order to test the efficiency of the two proposed algorithms, we perform a numerical comparison both with the methods we drew inspiration from and with some standard Global Optimization algorithms that are currently adopted in industrial design optimization. Finally, a realistic ship design problem, namely the reduction of the amplitude of the heave motion of a ship advancing in head seas (a problem connected to both safety and comfort), is solved using the new codes and other global and local derivative-free algorithms.

Research paper thumbnail of An exact penalty-Lagrangian approach for large-scale nonlinear programming

Optimization, 2011

Nonlinear programming problems with equality constraints and bound constraints on the variables are considered. The presence of bound constraints in the definition of the problem is exploited as much as possible. To this aim, an efficient search direction is defined which yields a locally and superlinearly convergent algorithm and which can be computed efficiently by using a truncated scheme suitable for large-scale problems. Then, an exact merit function is considered whose analytical expression again exploits the particular structure of the problem, using an exact augmented Lagrangian approach for the equality constraints and an exact penalty approach for the bound constraints. It is proved that the search direction and the merit function have strong connections, which can be the basis for defining a globally convergent algorithm with a superlinear convergence rate for the solution of the constrained problem.

Research paper thumbnail of Convergence to Second Order Stationary Points in Inequality Constrained Optimization

Mathematics of Operations Research, 1998

We propose a new algorithm for the nonlinear inequality constrained minimization problem, and prove that it generates a sequence converging to points satisfying the KKT second-order necessary conditions for optimality. The algorithm is a line search algorithm using directions of negative curvature, and it can be viewed as a nontrivial extension of corresponding known techniques from unconstrained to constrained problems. The main tools employed in the definition and in the analysis of the algorithm are a differentiable exact penalty function and results from the theory of LC^1 functions.
