Combining meta-EAs and racing for difficult EA parameter tuning tasks
Related papers
A hybrid approach to parameter tuning in genetic algorithms
2005 IEEE Congress on Evolutionary Computation, Vols. 1-3, Proceedings, 2005
Choosing the best parameter setting is a well-known, important, and challenging task in Evolutionary Algorithms (EAs). As one of the earliest parameter tuning techniques, the Meta-EA approach regards each parameter as a variable and the performance of the algorithm as the fitness value, and searches this landscape using various genetic operators. However, this method has some inherent issues. For example, some algorithm parameters are generally not searchable because it is difficult to define any sensible distance metric on them. In this paper, a novel approach is proposed by combining the Meta-EA approach with a method called Racing, which is based on statistical analysis of algorithm performance under different parameter settings. A series of experiments demonstrates the reliability and efficiency of this hybrid approach in tuning Genetic Algorithms (GAs) on two benchmark problems.
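To make the Meta-EA idea concrete, the outer loop can be sketched as below. This is an illustrative sketch, not the paper's implementation: `run_base_ea` is a stand-in for a full GA run (here a noisy quadratic with an assumed optimum), and the two tuned parameters (mutation rate and population size) are assumptions chosen for illustration.

```python
import random

def run_base_ea(params, seed):
    """Stand-in for a full GA run with the given parameters; returns a
    performance score. Here it is a noisy quadratic with an assumed optimum
    at mutation_rate = 0.01, population_size = 100."""
    random.seed(seed)
    mr, ps = params
    return -((mr - 0.01) ** 2 + ((ps - 100) / 100.0) ** 2) + random.gauss(0, 0.01)

def meta_ea(generations=20, meta_pop=10):
    """Meta-EA outer loop: each meta-individual is a parameter vector for
    the base EA, and its fitness is the base EA's measured performance."""
    pop = [(random.uniform(0.0, 0.2), random.randint(10, 300))
           for _ in range(meta_pop)]
    for g in range(generations):
        # truncation selection on base-EA performance
        scored = sorted(pop, key=lambda p: run_base_ea(p, g), reverse=True)
        parents = scored[: meta_pop // 2]
        # Gaussian mutation -- only possible because both parameters are numeric
        pop = parents + [
            (max(0.0, p[0] + random.gauss(0, 0.01)),
             max(2, int(p[1] + random.gauss(0, 20))))
            for p in parents
        ]
    return max(pop, key=lambda p: run_base_ea(p, generations))
```

Note that both tuned parameters here are numeric. That is exactly the limitation the paper raises: categorical parameters, such as the choice of a crossover operator, admit no obvious mutation operator or distance metric, which motivates the hybrid with Racing.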
Statistical Racing Techniques for Improved Empirical Evaluation of Evolutionary Algorithms
Lecture Notes in Computer Science, 2004
In empirical studies of Evolutionary Algorithms, it is usually desirable to evaluate and compare algorithms using as many different parameter settings and test problems as possible, in order to have a clear and detailed picture of their performance. Unfortunately, the total number of experiments required may be very large, which often makes such research work computationally prohibitive. In this paper, the application of a statistical method called racing is proposed as a general-purpose tool to reduce the computational requirements of large-scale experimental studies in evolutionary algorithms. Experimental results are presented that show that racing typically requires only a small fraction of the cost of an exhaustive experimental study.
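A minimal sketch of the racing idea follows. Everything here is illustrative: `evaluate` is a toy surrogate for a full EA run, the configuration names and quality values are made up, and the elimination rule (drop a configuration once its running mean is worse than the current best by more than a few standard errors) is a crude stand-in for the proper statistical tests used in racing methods.

```python
import random
import statistics

def evaluate(config, instance):
    """Stand-in for one full EA run of `config` on `instance`, returning a
    cost (lower is better). The true qualities are made up for illustration."""
    random.seed(hash((config, instance)) % (2 ** 32))
    true_cost = {"A": 1.0, "B": 1.2, "C": 2.0}[config]
    return true_cost + random.gauss(0, 0.1)

def race(configs, instances, min_runs=5, z=2.0):
    """Evaluate surviving configurations instance by instance, dropping any
    that is clearly worse, instead of running the full exhaustive grid."""
    results = {c: [] for c in configs}
    alive = set(configs)
    for i, inst in enumerate(instances):
        for c in alive:
            results[c].append(evaluate(c, inst))
        if i + 1 >= min_runs:
            means = {c: statistics.mean(results[c]) for c in alive}
            best = min(means.values())
            for c in list(alive):
                se = statistics.stdev(results[c]) / len(results[c]) ** 0.5
                # crude threshold standing in for a proper statistical test
                if means[c] - best > z * se and len(alive) > 1:
                    alive.remove(c)
    return alive, results
```

Clearly inferior configurations stop consuming runs early, which is where the claimed savings over an exhaustive study come from.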
An Adaptive Approach to Controlling Parameters of Evolutionary Algorithms
This chapter presents general background on Evolutionary Algorithms (EAs), their parameters, and methods for configuring those parameters. First, we give a brief introduction to single-objective and multi-objective optimisation, followed by a more in-depth description of how optimisation is performed in different classes of Evolutionary Algorithms (Genetic Algorithms, Evolution Strategies, Genetic Programming, Evolutionary Programming) and the strategy parameters used in each of these algorithm classes.
Parameter control in evolutionary algorithms
IEEE Transactions on Evolutionary Computation, 1999
The issue of setting the values of various parameters of an evolutionary algorithm is crucial for good performance. In this paper we discuss how to do this, beginning with the issue of whether these values are best set in advance or are best changed during evolution. We provide a classification of different approaches based on a number of complementary features, and pay special attention to setting parameters on-the-fly. This has the potential of adjusting the algorithm to the problem while solving the problem. This paper is intended to present a survey rather than a set of prescriptive details for implementing an EA for a particular type of problem. For this reason we have chosen to interleave a number of examples throughout the text. Thus we hope to both clarify the points we wish to raise as we present them, and also to give the reader a feel for some of the many possibilities available for controlling different parameters.
Costs and Benefits of Tuning Parameters of Evolutionary Algorithms
Parallel Problem Solving from Nature, PPSN X
We present an empirical study on the impact of different design choices on the performance of an evolutionary algorithm (EA). Four EA components are considered (parent selection, survivor selection, recombination, and mutation), and for each component we study the impact of choosing the right operator and of tuning its free parameter(s). We tune 120 different combinations of EA operators on 4 different classes of fitness landscapes and measure the cost of tuning. We find that components differ greatly in importance. Typically the choice of operator for parent selection has the greatest impact, and mutation needs the most tuning. For individual EAs, however, the impact of design choices for one component depends on the choices for the other components, as well as on the amount of resources available for tuning.
New Ways to Calibrate Evolutionary Algorithms
2008
The issue of setting the values of various parameters of an evolutionary algorithm (EA) is crucial for good performance. One way to do it is controlling EA parameters on-the-fly, which can be done in various ways and for various parameters. We briefly review these options in general and present the findings of a literature search and some statistics about the most popular options. Thereafter, we provide three case studies indicating a high potential for uncommon variants. In particular, we recommend to focus on parameters regulating selection and population size, rather than those concerning crossover and mutation. On the technical side, the case study on adjusting tournament size shows by example that global parameters can also be self-adapted, and that heuristic adaptation and pure self-adaptation can be successfully combined into a hybrid of the two.
Journal of Optimization, 2017
Metaheuristic algorithms are usually adapted to a large set of problems by applying a few modifications to their parameters for each specific case. However, this flexibility demands a huge effort to tune such parameters correctly. The tuning of metaheuristics therefore arises as one of the most important challenges in research on these algorithms. This paper presents a methodology combining statistical and artificial intelligence methods for the fine-tuning of metaheuristics. The key idea is a heuristic method, called the Heuristic Oriented Racing Algorithm (HORA), which explores a search space of parameters looking for candidate configurations close to a promising alternative. To confirm the validity of this approach, we present a case study on fine-tuning two distinct metaheuristics, Simulated Annealing (SA) and a Genetic Algorithm (GA), to solve the classical traveling salesman problem. The results are compared against the same metaheuristics tuned through a racing method. Broadly, the proposed approach proved effective in terms of the overall time of the tuning process. Our results reveal that metaheuristics tuned by HORA achieve, with much less computational effort, results similar to those obtained with the other fine-tuning approach.
1999
It is often the case in many problems in science and engineering that the analysis codes used are computationally very expensive. This can pose a serious impediment to the successful application of evolutionary optimization techniques. Metamodeling techniques present an enabling methodology for reducing the computational cost of such optimization problems. We present here a general framework for coupling metamodeling techniques with evolutionary algorithms to reduce the computational burden of solving this class of optimization problems. This framework aims to balance the concerns of optimization with those of design of experiments. Experiments on test problems and a practical engineering design problem serve to illustrate our arguments. The practical limitations of this approach are also outlined.
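The coupling described above can be illustrated with a deliberately small sketch: candidate solutions are pre-screened with a cheap surrogate model, and only the most promising one receives the expensive evaluation. The k-nearest-neighbour surrogate, the simple hill-climbing loop, and the quadratic `expensive_fitness` are all illustrative assumptions, not the framework's actual components.

```python
import random

def expensive_fitness(x):
    """Stand-in for a costly simulation; here just a quadratic."""
    return sum(v * v for v in x)

def knn_surrogate(archive, x, k=3):
    """Predict fitness as the mean of the k nearest already-evaluated points."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(pt, x)), f) for pt, f in archive
    )
    return sum(f for _, f in dists[:k]) / min(k, len(dists))

def surrogate_assisted_ea(dim=3, generations=30, seed=0):
    random.seed(seed)
    x = [random.uniform(-5.0, 5.0) for _ in range(dim)]
    archive = [(tuple(x), expensive_fitness(x))]
    for _ in range(generations):
        # generate many candidates, rank them with the cheap surrogate...
        candidates = [[v + random.gauss(0, 0.5) for v in x] for _ in range(20)]
        candidates.sort(key=lambda c: knn_surrogate(archive, c))
        # ...but spend the expensive evaluation only on the most promising one
        best = candidates[0]
        f = expensive_fitness(best)
        archive.append((tuple(best), f))
        if f < min(fv for _, fv in archive[:-1]):
            x = best
    return min(fv for _, fv in archive)
```

Per generation, 20 candidates are screened but only 1 expensive call is made; tuning that ratio is the design-of-experiments trade-off the abstract refers to.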
A method for parameter calibration and relevance estimation in evolutionary algorithms
Genetic and Evolutionary Computation Conference, GECCO'06
We present and evaluate a method for estimating the relevance and calibrating the values of parameters of an evolutionary algorithm. The method provides an information theoretic measure on how sensitive a parameter is to the choice of its value. This can be used to estimate the relevance of parameters, to choose between different possible sets of parameters, and to allocate resources to the calibration of relevant parameters. The method calibrates the evolutionary algorithm to reach a high performance, while retaining a maximum of robustness and generalizability. We demonstrate the method on an agent-based application from evolutionary economics and show how the method helps to design an evolutionary algorithm that allows the agents to achieve a high welfare with a minimum of algorithmic complexity.
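The information-theoretic measure can be illustrated with a toy computation: estimate the Shannon entropy of the distribution of parameter values that survived calibration. A narrow (low-entropy) distribution means performance is sensitive to the parameter's value, so the parameter is relevant; a near-uniform (high-entropy) distribution suggests it hardly matters. The sample values and the binning below are made up for illustration and are not the paper's data.

```python
import math

def shannon_entropy(samples, bins=10, lo=0.0, hi=1.0):
    """Entropy (in bits) of a histogram of calibrated parameter values."""
    counts = [0] * bins
    for s in samples:
        counts[min(bins - 1, int((s - lo) / (hi - lo) * bins))] += 1
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts if c)

# good values for a relevant parameter cluster tightly -> low entropy
relevant = shannon_entropy([0.10, 0.11, 0.12, 0.09, 0.10])
# good values for an irrelevant parameter are spread out -> high entropy
irrelevant = shannon_entropy([0.05, 0.35, 0.55, 0.75, 0.95])
```

Comparing such entropies across parameters gives a principled way to decide where to spend the calibration budget.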