A Novel Particle Swarm Optimization Algorithm for Non-Separable and Ill-Conditioned Problems
Related papers
The optimization of a system is the quest for its best performance. Thus, physical systems tend to reach minimum-energy states, while biological systems adapt their genetic code and behaviour to better cope with the environment. If an artificial system, or the model of a natural system, can be parameterized, its optimization consists of seeking the combination of feasible parameter values that results in its best performance. This thesis deals with the ‘particle swarm optimization’ method, which differs from traditional methods in that it poses no restrictions on the functions involved. The method was inspired by social behaviour observed in nature; its robustness stems from the fact that it does not rely on a deterministic, problem-specific implementation that may be inadequate for a different problem. An extensive study of the coefficients at the core of the method is carried out partly theoretically, partly heuristically, and partly by visualizing trajectories. The influence of their settings on the form and speed of convergence is analyzed, and guidelines on how to obtain the desired behaviour are provided. Different structures of the social network and additional heuristics are studied and tested on unconstrained benchmark problems. Finally, a robust pseudo-adaptive constraint-handling mechanism is proposed. The fully working algorithm is tested on a classical benchmark suite of constrained problems and successfully applied to well-known engineering problems. Results reported by other authors are also provided for reference.
The ‘particle swarm optimizer’ developed in this thesis is a global, single-solution, single-objective, gradient-free, population-based method. It handles continuous variables (and, exceptionally, discrete variables by rounding off) in constrained and unconstrained single-objective problems, regardless of whether the functions involved are linear, convex, unimodal, differentiable, smooth, continuous, or even explicit.
(2006) Particle Swarm Optimization: Development of a General-Purpose Optimizer
For problems where the quality of any solution can be quantified as a numerical value, optimization is the process of finding the permitted combination of variables that optimizes that value. Traditional methods present a very restrictive range of applications, mainly limited by the features of the function to be optimized and of the constraint functions. In contrast, evolutionary algorithms impose almost no restrictions on the features of these functions, although the most appropriate constraint-handling technique is still an open question. The particle swarm optimization (PSO) method is sometimes viewed as another evolutionary algorithm because of their many similarities, despite not being inspired by the same metaphor: both evolve a population of individuals, take previous experience into consideration, and use stochastic operators to introduce new responses. The advantages of evolutionary algorithms over traditional methods have been discussed in the literature for decades. While all such advantages hold when comparing the PSO paradigm to traditional methods, its main advantages over evolutionary algorithms are its noticeably lower computational cost and easier implementation. In fact, the plain version can be programmed in a few lines of code, involving no operator design and few parameters to be tuned. This paper deals with three important aspects of the method: the influence of parameter tuning on the behaviour of the system; the design of stopping criteria, so that the reliability of the solution found can be estimated and computational cost saved; and the development of appropriate techniques to handle constraints, given that the original method is designed for unconstrained optimization problems.
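The claim that the plain version fits in a few lines can be illustrated with a minimal sketch. The code below is not the paper's own implementation; it is a canonical global-best PSO with an inertia weight, and all names and parameter values (`w`, `c1`, `c2`, swarm size, iteration count) are illustrative choices.

```python
import random

def pso(f, dim, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimizing f over a box-shaped search space."""
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialize positions uniformly in the box and velocities at zero.
    x = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]               # each particle's best position so far
    pbest_val = [f(xi) for xi in x]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # Canonical velocity update: inertia + cognitive + social terms.
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))
                x[i][d] += v[i][d]
            val = f(x[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = x[i][:], val
    return gbest, gbest_val
```

For example, `pso(lambda p: sum(t * t for t in p), dim=3, bounds=(-5.0, 5.0))` minimizes the sphere function and returns a point close to the origin.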
Optimization is a multi-disciplinary field concerning mathematicians, physicists, economists, biologists and engineers, among others. Some of the problems they have to face are inherently optimization problems (e.g. travelling salesman problem, scheduling, structural optimal design), while some others can be solved as if they were, by minimizing a conveniently defined error function (e.g. systems of equations, training of artificial neural networks). Although all these problems involve optimizing some pre-defined criterion, they can be very different from one another. For instance, there are a finite number of solutions for discrete and combinatorial problems, whereas there are infinite solutions for continuous problems; the optimum’s location, or its value, may be either static or dynamic; the problem may be single-objective or multi-objective, etc. This thesis is concerned with continuous, static, single-objective optimization problems, subject to inequality constraints only. Nevertheless, some methods to handle other kinds of problems are briefly reviewed in SECTION I. The “particle swarm optimization” paradigm was inspired by previous simulations of the cooperative behaviour observed in social beings. It is a bottom-up, randomly weighted, population-based method whose ability to optimize emerges from local, individual-to-individual interactions. As opposed to traditional methods, it can deal with different kinds of problems with little or no adaptation, because it does not optimize by exploiting problem-specific features but by means of a parallel, cooperative exploration of the search space carried out by a population of individuals. The main goal of this thesis consists of developing an optimizer that can perform reasonably well on most problems.
Hence, the influence of the settings of the algorithm’s parameters on the behaviour of the system is studied, some general-purpose settings are sought, and some variations to the canonical version are proposed aiming to turn it into a more general-purpose optimizer. Since no termination condition is included in the canonical version, this thesis is also concerned with the design of some stopping criteria which allow the iterative search to be terminated if further significant improvement is unlikely, or if a certain number of time-steps are reached. In addition to that, some constraint-handling techniques are incorporated into the canonical algorithm in order to handle inequality constraints. Finally, the capabilities of the proposed general-purpose optimizers are illustrated by optimizing a few benchmark problems.
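One of the stopping criteria described above, terminating when further significant improvement is unlikely, can be sketched as a simple test on the history of best objective values. This is an illustrative assumption, not the thesis's own criterion: the window length and tolerance are hypothetical parameters.

```python
def no_recent_improvement(history, window=50, tol=1e-8):
    """Improvement-based stopping criterion (illustrative, not the thesis's own):
    stop when the best objective value found so far has improved by less than
    `tol` over the last `window` recorded iterations."""
    if len(history) <= window:
        return False          # too little history to judge stagnation
    return history[-window - 1] - history[-1] < tol
```

A cap on the number of time-steps, the other criterion mentioned, is simply the loop bound of the main iteration, so it needs no separate test.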
Overview of Particle Swarm Optimization (PSO) on its Applications and Methods
2013
Particle Swarm Optimization (PSO) is a well-known, robust, heuristic stochastic optimization technique in the field of Artificial Intelligence (AI). The technique is inspired by collective animal behaviours such as bird flocking. The PSO method is based on swarm intelligence, in which social communication plays a major role in problem solving. Hence, PSO is a useful and valuable technique for maximizing or minimizing a given objective value, and it has been applied in many different fields, such as engineering, physics, mathematics, and chemistry. In this paper, following a brief introduction to the PSO algorithm, the method is presented and its important factors and parameters are summarized. The main aim of this paper is to overview and discuss the available literature on the PSO algorithm year by year.
Particle Swarm Optimization Performance for Unconstrained Optimization Problems
2006
Particle swarm optimization (PSO) is mainly inspired by the social behaviour patterns of organisms that live and interact within large groups. The term PSO refers to a relatively new family of algorithms used to find optimal or near-optimal solutions to numerical and qualitative problems. It is an optimization paradigm that simulates the human ability to process knowledge. The capability of the PSO method to address unconstrained maximization and minimization problems is investigated through numerous experiments on different test problems, and the results obtained are reported. Two variants, PSO-IW (inertia weight) and PSO-IC (inertia weight and constriction factor), are used for the experiments, and conclusions are derived: the variants exhibit different performance on different test problems.
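The two variants named in this abstract rest on two well-known velocity-update rules. The sketch below shows the canonical inertia-weight update and the Clerc-Kennedy constriction-factor update for a single coordinate. It is an illustrative reading rather than the authors' code; note in particular that PSO-IC as described combines both ideas, while only the pure constriction form is shown here, and the coefficient values are common defaults, not values taken from the paper.

```python
import math
import random

rng = random.Random(1)

def velocity_iw(v, x, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """Inertia-weight update (PSO-IW): w damps only the previous velocity."""
    r1, r2 = rng.random(), rng.random()
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)

def velocity_constriction(v, x, pbest, gbest, c1=2.05, c2=2.05):
    """Constriction-factor update (Clerc-Kennedy): chi scales the whole sum."""
    phi = c1 + c2                      # must exceed 4 for the formula below
    chi = 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))
    r1, r2 = rng.random(), rng.random()
    return chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
```

With the usual phi = 4.1, the constriction coefficient chi comes out to roughly 0.7298, which is why constriction-based settings are often quoted alongside an equivalent inertia weight near 0.73.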
Particle swarm optimization: performance tuning and empirical analysis
2009
This chapter presents some of the recent modified variants of Particle Swarm Optimization (PSO). The main focus is on the design and implementation of modified PSO based on diversity, mutation, crossover, and efficient initialization using different distributions and low-discrepancy sequences. These algorithms are applied to various benchmark problems, including unimodal, multimodal, and noisy functions, as well as real-life applications in engineering fields. The effectiveness of the algorithms is discussed.
CHPSO: A new collaborative hybrid particle swarm optimization algorithm
2014 9th IEEE Conference on Industrial Electronics and Applications, 2014
Particle swarm optimization (PSO) is a powerful optimization tool that is widely used to solve a wide range of real-life optimization problems. Some of the widely used PSO variants, including CF-PSO and AW-PSO, cannot guarantee globally optimal solutions during periods of stagnation, when particle velocity variations decline considerably, leading to early convergence. To address this problem, this paper proposes an improved PSO algorithm, 'Collaborative Hybrid PSO (CHPSO)'. In the proposed algorithm, the initial swarm is divided into two sub-swarms, one following the constriction-factor approach and the other following the adaptive-weight approach. When the velocity of either sub-swarm falls below a threshold, an information-exchange mechanism is triggered and mutation is performed to improve the quality of the solutions. The proposed method is implemented in Matlab and evaluated using five well-studied benchmark test functions. Results obtained in this analysis show that the proposed method finds better solutions than constriction-factor PSO (CF-PSO) and adaptive-weight PSO (AW-PSO) working individually. A statistical significance test also shows the robustness of the proposed method compared with CF-PSO and AW-PSO.
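The stagnation trigger and exchange step that the abstract describes can be sketched as follows. This is a hypothetical reading of the mechanism, not code from the paper: the velocity threshold, the choice to swap each sub-swarm's best particle into the other's worst slot, and the Gaussian mutation width are all illustrative assumptions.

```python
import random
import statistics

def stagnating(velocities, threshold=1e-3):
    """Flag a sub-swarm as stagnating when its mean particle speed falls below
    a threshold (the abstract's trigger; the threshold value is illustrative)."""
    speeds = [sum(c * c for c in v) ** 0.5 for v in velocities]
    return statistics.mean(speeds) < threshold

def exchange_and_mutate(sub_a, sub_b, rng, sigma=0.1):
    """Hypothetical exchange step: copy each sub-swarm's best particle into the
    other sub-swarm's worst slot, adding Gaussian noise (mutation) to the copy."""
    best_a = min(sub_a, key=lambda p: p["fval"])
    best_b = min(sub_b, key=lambda p: p["fval"])
    for swarm, best in ((sub_a, best_b), (sub_b, best_a)):
        worst = max(range(len(swarm)), key=lambda i: swarm[i]["fval"])
        swarm[worst] = {"x": [xi + rng.gauss(0.0, sigma) for xi in best["x"]],
                        "fval": float("inf")}   # re-evaluated on the next iteration
    return sub_a, sub_b
```

In this sketch each particle is a small dict holding its position `x` and objective value `fval`; the two sub-swarms would otherwise run their own CF or AW velocity updates unchanged.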
2014
A new local search technique is proposed and used to improve the performance of particle swarm optimization algorithms by addressing the problem of premature convergence. In the proposed local search technique, a potential particle position in the solution search space is collectively constructed by a number of randomly selected particles in the swarm. The number of selections varies with the dimension of the optimization problem, and each selected particle donates the value at a randomly selected dimension of its personal best. After the potential particle position is constructed, a local search is performed around its neighbourhood and the result is compared with the current swarm global best position; it replaces the global best particle position if it is found to be better, otherwise no replacement is made. Using well-studied benchmark problems of low and high dimensions, numerical simulations were used to validate the performance of the improved algorithms. Comparisons were made with four PSO variants, two of which implement different local search techniques while the other two do not. Results show that the improved algorithms obtain better-quality solutions while demonstrating better convergence velocity and precision, stability, robustness, and global-local search ability than the competing variants.
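The collective construction step described above admits a simple sketch. This is a hedged reading of the abstract, not the authors' implementation: it assumes one donation per problem dimension, and that a donor contributes its personal-best value at the coordinate currently being filled rather than at an independently drawn donor coordinate.

```python
import random

def construct_candidate(pbest, dim, rng):
    """Collectively build a candidate position: for each coordinate, a randomly
    selected particle donates its personal-best value at that coordinate
    (one possible reading of the construction step)."""
    candidate = [0.0] * dim
    for d in range(dim):
        donor = rng.randrange(len(pbest))   # random particle from the swarm
        candidate[d] = pbest[donor][d]      # donate its pbest value at d
    return candidate

def maybe_replace_gbest(f, candidate, gbest):
    """Keep the candidate only if it beats the current global best."""
    if f(candidate) < f(gbest):
        return candidate, f(candidate)
    return gbest, f(gbest)
```

The local search around the candidate's neighbourhood is not specified in the abstract, so it is left out here; any hill-climbing or perturbation scheme could be slotted in between the two functions.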
A new optimizer using particle swarm theory
… and Human Science, 1995. MHS'95., …, 1995
The optimization of nonlinear functions using particle swarm methodology is described. Implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm. Benchmark testing of both paradigms is described, and applications, including neural network training and robot task learning, are proposed.