Asynchronous Parallel (1+1)-CMA-ES for Constrained Global Optimisation

A modified Covariance Matrix Adaptation Evolution Strategy with adaptive penalty function and restart for constrained optimization

Expert Systems with Applications, 2014

In recent decades, a number of novel metaheuristics and hybrid algorithms have been proposed to solve a great variety of optimization problems. Among these, constrained optimization problems are of particular interest in applications from many different domains. The presence of multiple constraints can make optimization problems particularly hard to solve, requiring specific techniques to handle fitness landscapes that generally show complex properties. In this paper, we introduce a modified Covariance Matrix Adaptation Evolution Strategy (CMA-ES) specifically designed for solving constrained optimization problems. The proposed method makes use of the restart mechanism typical of most modern variants of CMA-ES, and handles constraints by means of an adaptive penalty function. This novel CMA-ES scheme presents competitive results on a broad set of benchmark functions and engineering problems, outperforming most state-of-the-art algorithms in terms of both efficiency and constraint handling.
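The adaptive-penalty idea in the abstract above can be sketched as a minimal Python fragment. The violation measure, the adaptation rule, and all names below are illustrative assumptions, not the paper's actual scheme:

```python
def constraint_violation(x, constraints):
    """Sum of positive parts of inequality constraints g_i(x) <= 0."""
    return sum(max(0.0, g(x)) for g in constraints)

def penalized_fitness(x, f, constraints, penalty_coeff):
    """Objective value plus a weighted constraint-violation term."""
    return f(x) + penalty_coeff * constraint_violation(x, constraints)

def adapt_penalty(penalty_coeff, feasible_fraction, target=0.5, rate=1.5):
    """Hypothetical adaptation rule: raise the penalty when too few
    sampled candidates are feasible, lower it otherwise."""
    if feasible_fraction < target:
        return penalty_coeff * rate
    return penalty_coeff / rate

# toy usage: minimize x^2 subject to x >= 1 (written as 1 - x <= 0)
f = lambda x: x[0] ** 2
g = [lambda x: 1.0 - x[0]]
print(penalized_fitness([0.0], f, g, 1.0))  # infeasible point: 0 + 1*1 = 1.0
```

Inside a CMA-ES loop, `penalized_fitness` would rank the sampled candidates, and `adapt_penalty` would be called once per generation from the feasible fraction of the population.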

Reducing the Time Complexity of the Derandomized Evolution Strategy with Covariance Matrix Adaptation (CMA-ES)

Evolutionary Computation, 2003

This paper presents a novel evolutionary optimization strategy based on the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). This new approach is intended to reduce the number of generations required for convergence to the optimum. Reducing the number of generations, i.e., the time complexity of the algorithm, is important if a large population size is desired: (1) to reduce the effect of noise; (2) to improve global search properties; and (3) to implement the strategy on (highly) parallel machines.

Surrogate Constraint Functions for CMA Evolution Strategies

Many practical optimization problems are constrained black boxes. Covariance Matrix Adaptation Evolution Strategies (CMA-ES) are among the most successful black-box optimization methods. Up to now, no sophisticated constraint handling method for Covariance Matrix Adaptation optimizers has been proposed. In our novel approach, we learn a meta-model of the constraint function and use this surrogate model to adapt the covariance matrix during the search in the vicinity of the constraint boundary. The meta-model can be used for various purposes, e.g., rotation of the mutation ellipsoid, checking the feasibility of candidate solutions, or repairing infeasible mutations by projecting them onto the constraint surrogate. Experimental results show the potential of the proposed approach.
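The repair-by-projection use of the surrogate can be illustrated with a linear stand-in for the learned constraint model. The paper's meta-model is more general; the assumed half-space form below is only for illustration:

```python
def project_onto_halfspace(x, w, b):
    """Project x onto the half-space {y : w.y + b <= 0}. The linear
    inequality stands in for a learned constraint surrogate."""
    v = sum(wi * xi for wi, xi in zip(w, x)) + b
    if v <= 0:
        return list(x)            # already feasible under the surrogate
    ww = sum(wi * wi for wi in w)
    return [xi - (v / ww) * wi for wi, xi in zip(w, x)]

w, b = [1.0, 0.0], -1.0           # surrogate says: x0 <= 1
repaired = project_onto_halfspace([3.0, 2.0], w, b)   # -> [1.0, 2.0]
```

With a learned nonlinear surrogate, the same idea applies with an iterative projection step in place of the closed-form one.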

Asynchronous Master-Slave Parallelization of Differential Evolution for Multi-Objective Optimization

Evolutionary Computation, 2013

In this paper, we present AMS-DEMO, an asynchronous master-slave implementation of DEMO, an evolutionary algorithm for multiobjective optimization. AMS-DEMO was designed to solve time-demanding problems efficiently on both homogeneous and heterogeneous parallel computer architectures. The algorithm serves as a test case for the asynchronous master-slave parallelization of multiobjective optimization, which has not yet been thoroughly investigated. Selection lag is identified as the key property of the parallelization method, explaining how its behavior depends on the type of computer architecture and the number of processors. It is derived both analytically and from empirical results. AMS-DEMO is tested on a benchmark problem and a time-demanding industrial optimization problem, on homogeneous and heterogeneous parallel setups, providing performance results for the algorithm and insight into the parallelization method. A comparison between AMS-DEMO and generational master-slave DEMO demonstrates how the asynchronous parallelization method enhances the algorithm and what benefits it brings over the synchronous method.
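A minimal sketch of an asynchronous master-slave loop, assuming Python's `concurrent.futures`; the candidate generator here is a naive perturbation of the incumbent, standing in for DEMO's variation and selection:

```python
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait
import random, time

def evaluate(x):
    """Stand-in for a time-demanding objective (sphere function)."""
    time.sleep(random.uniform(0.0, 0.005))  # uneven evaluation times
    return sum(v * v for v in x)

def async_master_slave(n_workers=4, budget=40, seed=0):
    """Each finished evaluation is processed immediately and a new
    candidate dispatched, so no worker idles at a generation barrier.
    A finished result may stem from an older incumbent: that delay is
    the selection lag discussed in the abstract."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        jobs = {}
        for _ in range(n_workers):
            cand = [rng.uniform(-5, 5) for _ in range(2)]
            jobs[pool.submit(evaluate, cand)] = cand
        evaluated = 0
        while evaluated < budget:
            done, _ = wait(jobs, return_when=FIRST_COMPLETED)
            for fut in done:
                x = jobs.pop(fut)
                fx = fut.result()
                evaluated += 1
                if fx < best_f:
                    best_x, best_f = x, fx
                child = [v + rng.gauss(0, 0.5) for v in best_x]
                jobs[pool.submit(evaluate, child)] = child
    return best_x, best_f

x_best, f_best = async_master_slave()
```

A generational (synchronous) master would instead call `wait` with `ALL_COMPLETED` once per generation, leaving fast workers idle whenever evaluation times are uneven.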

An adaptive penalty based covariance matrix adaptation–evolution strategy

Computers & Operations Research, 2013

Although most unconstrained optimization problems of moderate to high dimensionality can be handled readily with Evolutionary Computation (EC) techniques, constrained optimization problems (COPs) with inequality and equality constraints are much harder to deal with. Although only equality constraints can be used to eliminate variables, both types of constraints implicitly enforce relations between problem variables. Most conventional constraint-handling methods in EC do not consider the correlations between problem variables imposed by the constraints. This paper relies on the idea that a genetic operator which captures these implicit correlations can improve the performance of evolutionary constrained optimization algorithms. With this in mind, we employ a (μ+λ)-Evolution Strategy with a simplified Covariance Matrix Adaptation-based mutation operator along with an adaptive weight adjustment scheme. The proposed algorithm is tested on two test sets. On the first benchmark, it significantly outperforms five conventional methods. The results on the second test set show that the algorithm is highly competitive with three state-of-the-art algorithms. The main drawback of the algorithm is its slightly lower speed of convergence on problems with high dimensionality and/or a large search domain.

Cooperative co-evolution with sensitivity analysis-based budget assignment strategy for large-scale global optimization

Applied Intelligence, 2017

Cooperative co-evolution has proven to be a successful approach for solving large-scale global optimization (LSGO) problems. These algorithms decompose the LSGO problem into several smaller subcomponents using a decomposition method, and each subcomponent of the variables is optimized by a certain optimizer. They typically use a simple round-robin method to assign equal shares of computational time to the subcomponents. Because standard cooperative co-evolution algorithms allocate the computational budget equally, their performance deteriorates on LSGO problems whose subcomponents affect the objective function unequally. It is therefore useful to detect each subcomponent's effect on the objective function. Sensitivity analysis methods can be employed to identify the most significant variables of a model. In this paper, we propose a cooperative co-evolution algorithm with a sensitivity analysis-based budget assignment method (SACC), which allocates computational time among the subcomponents according to their respective effects on the objective function.
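A crude version of sensitivity-driven budget assignment can be sketched as follows. The perturbation-based score is a stand-in for the paper's sensitivity analysis, and all constants are illustrative:

```python
import random

def sensitivity_scores(f, x, groups, delta=0.5, trials=8, seed=0):
    """Per-subcomponent sensitivity: mean |change in f| when only that
    group's variables are perturbed (a crude stand-in for a proper
    sensitivity analysis method)."""
    rng = random.Random(seed)
    base = f(x)
    scores = []
    for group in groups:
        acc = 0.0
        for _ in range(trials):
            y = list(x)
            for i in group:
                y[i] += rng.uniform(-delta, delta)
            acc += abs(f(y) - base)
        scores.append(acc / trials)
    return scores

def assign_budget(scores, total):
    """Split a total evaluation budget proportionally to sensitivity,
    instead of equal round-robin shares."""
    s = sum(scores) or 1.0
    return [round(total * sc / s) for sc in scores]

# toy: the first subcomponent affects the objective ~100x more
f = lambda x: 100 * x[0] ** 2 + x[1] ** 2
scores = sensitivity_scores(f, [1.0, 1.0], [[0], [1]])
budget = assign_budget(scores, 1000)
```

The dominant subcomponent receives nearly the whole budget here; in a full cooperative co-evolution run, the scores would be refreshed periodically as the search moves.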

Properties and numerical testing of a parallel global optimization algorithm

Numerical Algorithms, 2012

In the framework of multistart and local search algorithms that find the global minimum of a real function f(x), x ∈ S ⊆ R^n, Gaviano et al. proposed a rule for deciding, as soon as a local minimum has been found, whether or not to perform a new local minimization. That rule was designed to minimize the average local computational cost eval1(·) required to move from the current local minimum to a new one. In this paper, the expression for the cost eval2(·) of the entire process of finding a global minimum is derived and investigated; it is shown that eval2(·) has eval1(·) among its components and can only be monotonically increasing or decreasing, i.e., it exhibits the same property as eval1(·). Moreover, a counterexample shows that the optimal values given by eval1(·) and eval2(·) might not agree. Further, computational experiments with a parallel algorithm that uses the above rule are carried out in a MATLAB environment.
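The multistart framework the paper works in can be sketched generically. The decide() hook below merely marks where a cost-based rule such as Gaviano et al.'s would plug in (the rule itself is not reproduced), and the toy local search and thresholds are illustrative:

```python
import random

def multistart(f, local_search, sample, decide, max_starts=50, seed=0):
    """Generic multistart skeleton: after each local minimum, a
    pluggable decide() rule chooses whether to launch another local
    minimization (this is where a cost-based rule would go)."""
    rng = random.Random(seed)
    best_x, best_f, history = None, float("inf"), []
    for _ in range(max_starts):
        x, fx = local_search(f, sample(rng))
        history.append(fx)
        if fx < best_f:
            best_x, best_f = x, fx
        if not decide(history):
            break
    return best_x, best_f

def local_search(f, x0, step=0.5, tol=1e-6):
    """Toy 1-D pattern search used as the local minimizer."""
    x, fx = list(x0), f(x0)
    while step > tol:
        moved = False
        for d in (step, -step):
            y = [x[0] + d]
            fy = f(y)
            if fy < fx:
                x, fx, moved = y, fy, True
        if not moved:
            step /= 2
    return x, fx

f = lambda x: (x[0] ** 2 - 1) ** 2          # global minima at x = +/-1
sample = lambda rng: [rng.uniform(-2, 2)]
decide = lambda hist: min(hist) > 1e-8 and len(hist) < 10
best_x, best_f = multistart(f, local_search, sample, decide)
```

A parallel variant, as in the paper's experiments, would run several such local minimizations concurrently and apply the decision rule as their results arrive.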

A CMA-ES-based 2-stage memetic framework for solving constrained optimization problems

2014 IEEE Symposium on Foundations of Computational Intelligence (FOCI), 2014

Constrained optimization problems play a crucial role in many application domains, ranging from engineering design to finance and logistics. Specific techniques are therefore needed to handle complex fitness landscapes characterized by multiple constraints. In recent decades, a number of novel metaheuristics have been applied to constrained optimization. Among these, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) has lately attracted the most attention from researchers, and recent variants of CMA-ES have shown promising results on several benchmarks and practical problems. In this paper, we attempt to improve the performance of an adaptive penalty CMA-ES recently proposed in the literature. We build a 2-stage memetic framework upon it, coupling the CMA-ES scheme with a local optimizer so that the best solution found by CMA-ES is used as the starting point for the local search. We test, separately, three classic local search algorithms (Simplex, BOBYQA, and L-BFGS-B), and we compare the baseline scheme (without local search) and its three memetic variants with some of the state-of-the-art methods for constrained optimization.
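The 2-stage structure is easy to sketch. Below, a random search stands in for CMA-ES and a coordinate pattern search stands in for Simplex/BOBYQA/L-BFGS-B, purely to keep the sketch dependency-free; only the hand-off of the stage-1 best point to stage 2 mirrors the framework described above:

```python
import random

def toy_global_search(f, dim, iters=200, seed=0):
    """Stage 1 stand-in (random search in place of CMA-ES)."""
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    best_f = f(best)
    for _ in range(iters):
        cand = [rng.uniform(-5, 5) for _ in range(dim)]
        fc = f(cand)
        if fc < best_f:
            best, best_f = cand, fc
    return best, best_f

def pattern_search(f, x0, step=1.0, tol=1e-6):
    """Stage 2 stand-in: coordinate pattern search as the local refiner."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step /= 2
    return x, fx

f = lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2
x_g, f_g = toy_global_search(f, 2)
x_l, f_l = pattern_search(f, x_g)   # stage 2 starts from the stage-1 best
```

The hand-off is the whole design: the global stage supplies a basin, the local stage polishes within it, and the refined fitness can never be worse than the stage-1 value.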

Large Scale Global Optimization using Differential Evolution with self-adaptation and cooperative co-evolution

2008

In this paper, an optimization algorithm is formulated and its performance on large-scale global optimization is assessed. The proposed algorithm, named DEwSAcc, is based on the Differential Evolution (DE) algorithm, a floating-point encoding evolutionary algorithm for global optimization over continuous spaces. The original DE is extended by log-normal self-adaptation of its control parameters and combined with cooperative co-evolution as a dimension decomposition mechanism. Experimental results are given for seven high-dimensional test functions proposed for the Special Session on Large Scale Global Optimization at the 2008 IEEE World Congress on Computational Intelligence.

I. INTRODUCTION

The objective of global optimization is to find the search parameter values x = {x1, x2, ..., xD} (D being the dimensionality of the problem tackled) such that the evaluation function value f(x) is optimal, i.e., x has a better evaluation function value than any other parameter setting. In this paper, we treat the optimization task as minimization. Most reported studies on differential evolution (DE) are performed on low-dimensional problems, e.g., smaller than 100 dimensions, which are relatively small for many real-world problems [1]. This paper gives a performance analysis of a new DE variant on large-scale (100-, 500-, and 1000-dimensional) test functions prepared for the Special Session on Large Scale Global Optimization at the 2008 IEEE World Congress on Computational Intelligence [2]. The test function set includes well-known functions from previous benchmark suites [3], [4]. We have already used a similar differential evolution self-adaptation mechanism (without the cooperative co-evolution mechanism) for multiobjective optimization in [5]. This work provides the following contributions: (1) an application of a self-adaptive mechanism from evolution strategies and a cooperative co-evolution mechanism for dimension
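The log-normal self-adaptation of DE's control parameters described above can be sketched as follows. This is a simplified reading (jDE-style control-parameter inheritance, illustrative constants), not the paper's exact DEwSAcc:

```python
import math, random

def de_generation(pop, fits, f, rng, tau=0.3):
    """One DE/rand/1/bin generation in which every individual carries
    its own F and CR; F is perturbed log-normally before use, and the
    perturbed values survive only if the trial vector wins."""
    n, dim = len(pop), len(pop[0]["x"])
    new_pop, new_fits = [], []
    for i in range(n):
        ind = pop[i]
        F = max(0.1, min(1.0, ind["F"] * math.exp(tau * rng.gauss(0, 1))))
        CR = rng.random() if rng.random() < 0.1 else ind["CR"]
        a, b, c = rng.sample([j for j in range(n) if j != i], 3)
        j_rand = rng.randrange(dim)
        trial = [pop[a]["x"][j] + F * (pop[b]["x"][j] - pop[c]["x"][j])
                 if (rng.random() < CR or j == j_rand) else ind["x"][j]
                 for j in range(dim)]
        f_trial = f(trial)
        if f_trial <= fits[i]:
            new_pop.append({"x": trial, "F": F, "CR": CR})
            new_fits.append(f_trial)
        else:
            new_pop.append(ind)
            new_fits.append(fits[i])
    return new_pop, new_fits

rng = random.Random(1)
sphere = lambda x: sum(v * v for v in x)
pop = [{"x": [rng.uniform(-5, 5) for _ in range(5)], "F": 0.5, "CR": 0.9}
       for _ in range(20)]
fits = [sphere(p["x"]) for p in pop]
initial_best = min(fits)
for _ in range(50):
    pop, fits = de_generation(pop, fits, sphere, rng)
```

Cooperative co-evolution would wrap this loop: the dimensions are split into groups and `de_generation` is applied to one group at a time while the others stay fixed.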

A Cost-Benefit Local Search Coordination in Multimeme Differential Evolution for Constrained Numerical Optimization Problems

Swarm and Evolutionary Computation, 2018

This paper introduces a memetic Differential Evolution approach with adaptive local search coordination to solve constrained numerical optimization problems. The proposed approach incorporates a set of different direct local search operators into standard Differential Evolution. The coordination mechanism is a probabilistic method based on a cost-benefit scheme, and it regulates the activation probability of each local search operator during the evolutionary cycle of the global search. The method also adopts the ε-constrained method as its constraint-handling technique. The proposed approach is tested on thirty-six well-known benchmark problems. Numerical results show that the proposed method adequately coordinates a set of local search operators within a memetic Differential Evolution scheme for constrained search spaces.
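A possible form of the cost-benefit coordination is sketched below: each operator's recorded fitness gain per evaluation spent is turned into an activation probability with a floor, so unproductive operators are throttled but not eliminated. The exact update rule is an assumption, not the paper's formula:

```python
def update_activation_probs(stats, p_min=0.05):
    """Map per-operator cost-benefit records to activation probabilities:
    benefit (fitness gain) per unit cost (function evaluations spent),
    normalised, with a floor so no operator's probability collapses."""
    ratios = [s["gain"] / max(s["cost"], 1e-12) for s in stats]
    total = sum(ratios)
    if total <= 0:
        return [1.0 / len(stats)] * len(stats)
    probs = [max(p_min, r / total) for r in ratios]
    z = sum(probs)
    return [p / z for p in probs]

# two hypothetical local search operators: one pays off, one does not
stats = [{"gain": 10.0, "cost": 100.0},   # 0.1 fitness gain per evaluation
         {"gain": 1.0,  "cost": 500.0}]   # 0.002 fitness gain per evaluation
probs = update_activation_probs(stats)
```

During the DE run, each activation of an operator would update its gain and cost records, and the probabilities would be recomputed before the next activation draw.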