On some properties of the lbest topology in particle swarm optimization
Related papers
A Statistical Study of the Effects of Neighborhood Topologies in Particle Swarm Optimization
Studies in Computational Intelligence, 2011
The behavior of modern meta-heuristics is directed both by the variation operators and by the values selected for the parameters of the approach. Particle swarm optimization (PSO) is a meta-heuristic that has been found to be very successful in a wide variety of optimization tasks. In PSO, a swarm of particles flies through a hyper-dimensional search space, each particle attracted both by its personal best position and by the best position found so far within its neighborhood. In this paper, we perform a statistical study to analyze whether the neighborhood topology promotes convergence acceleration in four PSO-based algorithms: the basic PSO, the Bare-bones PSO (BBPSO), an extension of BBPSO, and the Bare-bones Differential Evolution. Our results indicate that the convergence rate of a PSO-based approach depends strongly on the topology used. We also found that the most widely used topology is not necessarily the best topology for every PSO-based algorithm.
Inter-particle communication and search-dynamics of lbest particle swarm optimizers: An analysis
2010
Particle Swarm Optimization (PSO) is arguably one of the most popular nature-inspired algorithms for real-parameter optimization at present. Existing theoretical research on PSO focuses on issues such as stability, convergence, and explosion of the swarm. However, all of it is based on the gbest (global best) communication topology, which is usually susceptible to false or premature convergence over multi-modal fitness landscapes.
A Brief Review on Particle Swarm Optimization: Limitations & Future Directions
Particle swarm optimization is a heuristic global optimization method put forward originally by Kennedy and Eberhart in 1995. Various efforts have been made towards solving unimodal and multimodal problems as well as two-dimensional to multidimensional problems, addressing the topology of communication, parameter adjustment, the initial distribution of particles, and efficient problem-solving capabilities. Here we present a detailed study of PSO and the limitations of present work; based on those limitations, we propose future directions.
I. INTRODUCTION Swarm Intelligence (SI) is an innovative distributed intelligent paradigm for solving optimization problems that originally took its inspiration from biological examples of swarming, flocking, and herding phenomena in vertebrates. Particle Swarm Optimization (PSO) incorporates swarming behaviors observed in flocks of birds, schools of fish, swarms of bees, and even human social behavior, from which the idea emerged. PSO is a population-based optimization tool that can be implemented and applied easily to solve various function optimization problems, or problems that can be transformed into function optimization problems. As an algorithm, the main strength of PSO is its fast convergence, which compares favorably with many global optimization algorithms such as Genetic Algorithms (GA) and Simulated Annealing (SA). While population-based heuristics are more costly because they depend directly on function values rather than derivative information, they are also susceptible to premature convergence, especially when there are many decision variables or dimensions to be optimized. While searching for food, birds are either scattered or fly together before they locate the place where the food can be found.
While the birds are searching for food from one place to another, there is always a bird that can smell the food very well; that is, the bird perceives the place where the food can be found and holds the best information about the food resource. Because the birds transmit this information, especially the good information, at all times while searching from one place to another, they are guided by it and will eventually flock to the place where the food can be found. In the particle swarm optimization algorithm, the solution swarm corresponds to the bird swarm, the birds' movement from one place to another corresponds to the evolution of the solution swarm, the good information corresponds to the current best solution, and the food resource corresponds to the best solution found over the whole course. The best solution is worked out in the particle swarm optimization algorithm through the cooperation of the individuals. Each individual is modeled as a particle without mass or volume, and a simple behavioral pattern is prescribed for each particle so that the complexity of the whole swarm emerges from these simple rules. In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. Each particle keeps track of its coordinates in the problem space associated with the best solution (fitness) it has achieved so far. This value is called pbest. Another best value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the neighborhood of the particle. This value is called lbest. When a particle takes the whole population as its topological neighbors, the best value is a global best and is called gbest. The particle swarm optimization concept consists of, at each time step, changing the velocity of (accelerating) each particle toward its pbest and lbest (in the lbest version).
Acceleration is weighted by a random term, with separate random numbers being generated for the acceleration towards the pbest and lbest locations. After finding the two best values, the particle updates its velocity and position with the standard update equations.
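The update step described above can be sketched in its canonical inertia-weight form; note that the coefficient names w, c1, c2 and the helper pso_step below are illustrative conventions from the wider PSO literature, not taken from this abstract:

```python
import random

def pso_step(x, v, pbest, lbest, w=0.7, c1=1.5, c2=1.5):
    """One velocity/position update for a single particle (lbest variant).

    x, v, pbest, lbest are equal-length lists of floats; w is the inertia
    weight, c1 and c2 the cognitive and social acceleration coefficients.
    """
    new_v = []
    for d in range(len(x)):
        # separate random numbers for the pbest and lbest attraction terms
        r1, r2 = random.random(), random.random()
        new_v.append(w * v[d]
                     + c1 * r1 * (pbest[d] - x[d])
                     + c2 * r2 * (lbest[d] - x[d]))
    new_x = [x[d] + new_v[d] for d in range(len(x))]
    return new_x, new_v
```

In the gbest variant, the same formula is used with the neighborhood best replaced by the best position found by any particle in the swarm.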
Pareto Improving Selection of the Global Best in Particle Swarm Optimization
2018 IEEE Congress on Evolutionary Computation (CEC), 2018
Particle Swarm Optimization is an effective stochastic optimization technique that simulates a swarm of particles that fly through a problem space. In the process of searching the problem space for a solution, the individual variables of a candidate solution will often take on inferior values characterized as “Two Steps Forward, One Step Back.” Several approaches to solving this problem have introduced varying notions of cooperation and competition. Instead we characterize the success of these multi-swarm techniques as reconciling conflicting information through a mechanism that makes successive candidates Pareto improvements. We use this analysis to construct a variation of PSO that applies this mechanism to gbest selection. Experiments show that this algorithm performs better than the standard gbest PSO algorithm.
Particle Swarm Convergence: Standardized Analysis and Topological Influence
Lecture Notes in Computer Science, 2014
This paper has two primary aims: firstly, to empirically verify the use of a specially designed objective function for particle swarm optimization (PSO) convergence analysis; secondly, to investigate the impact of PSO's social topology on the parameter region needed to ensure convergent particle behavior. At present there exists a large number of theoretical PSO studies; however, all stochastic PSO models contain the stagnation assumption, which implicitly removes the social topology from the model, making this empirical study necessary. It was found that using a specially designed objective function is both a simple and valid method for convergence analysis. It was also found that the derived region needed to ensure convergent particle behavior remains valid regardless of the selected social topology.
Fitness-distance-ratio based particle swarm optimization
Proceedings of the 2003 IEEE Swarm Intelligence Symposium. SIS'03 (Cat. No.03EX706)
This paper presents a modification of the particle swarm optimization algorithm (PSO) intended to combat the problem of premature convergence observed in many applications of PSO. The proposed new algorithm moves particles towards nearby particles of higher fitness, instead of attracting each particle towards just the best position discovered so far by any particle. This is accomplished by using the ratio of the relative fitness and the distance of other particles to determine the direction in which each component of the particle position needs to be changed. The resulting algorithm (FDR-PSO) is shown to perform significantly better than the original PSO algorithm and some of its variants on many different benchmark optimization problems. Empirical examination of the evolution of the particles demonstrates that the convergence of the algorithm does not occur at an early phase of particle evolution, unlike PSO. Avoiding premature convergence allows FDR-PSO to continue searching for global optima in difficult multimodal optimization problems.
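The fitness-distance-ratio selection described above can be sketched per dimension as follows. This is a minimal illustration assuming a minimization problem; the function name fdr_target and the eps guard against zero distance are assumptions for the sketch, not the paper's notation:

```python
def fdr_target(x_i, f_xi, pbests, f_pbests, d, eps=1e-12):
    """Pick, for dimension d, the personal best that maximizes the
    fitness-distance ratio (FDR) relative to particle x_i.

    For minimization, FDR_j = (f_xi - f_pbests[j]) / |pbests[j][d] - x_i[d]|,
    i.e. the fitness improvement offered by neighbor j divided by how far
    its d-th coordinate is from ours.  Returns that coordinate.
    """
    best_j, best_ratio = None, float("-inf")
    for j, p in enumerate(pbests):
        dist = abs(p[d] - x_i[d])
        ratio = (f_xi - f_pbests[j]) / (dist + eps)  # eps avoids division by zero
        if ratio > best_ratio:
            best_ratio, best_j = ratio, j
    return pbests[best_j][d]
```

Each velocity component is then pulled towards the coordinate returned here, in addition to the usual pbest term, so a nearby particle with a modest improvement can outrank a distant one with a larger improvement.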
A critical assessment of some variants of particle swarm optimization
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2008
Among the variants of the basic Particle Swarm Optimization (PSO) algorithm as first proposed in 1995, EPSO (Evolutionary PSO), proposed by Miranda and Fonseca, seems to produce significant improvements. We analyze the effects of two modifications introduced in that work (adaptive parameter setting and selection based on an evolution strategies-like approach) separately, reporting results obtained on a set of multimodal benchmark functions, which show that they may have opposite and complementary effects. In particular, using only parameter adaptation when optimizing 'harder' functions yields better results than when both modifications are applied. We also propose a justification for this, based on recent analyses in which particle swarm optimizers are studied as dynamical systems.
A New Taxonomy for Particle Swarm Optimization (PSO)
The Particle Swarm Optimization (PSO) algorithm, one of the more recent nature-inspired algorithms, was introduced in 1995 and has since been utilized as a powerful optimization tool in a wide range of applications. In this paper, a general picture of the research in PSO is presented, based on a comprehensive survey of about 1800 PSO-related papers published from 1995 to 2008. After a brief introduction to the PSO algorithm, a new taxonomy of PSO-based methods is presented. In addition, 95 major PSO-based methods are introduced and their parameters summarized in a comparative table. Finally, a timeline of PSO applications is portrayed, categorized into 8 main fields.
Biases in Particle Swarm Optimization
International Journal of Swarm …, 2010
It is known that the most common versions of particle swarm optimization (PSO) algorithms are rotationally variant. It has also been pointed out that PSO algorithms can concentrate particles along paths parallel to the coordinate axes. In this paper we explicitly connect these two observations, by showing that the rotational variance is related to the concentration along lines parallel to the coordinate axes. We then clarify the nature of this connection. Based on this explicit connection we create fitness functions that are easy or hard for PSO to solve, depending on the rotation of the function.