Stochastic convergence analysis and parameter selection of the standard particle swarm optimization algorithm
Related papers
A Theoretical and Empirical Analysis of Convergence Related Particle Swarm Optimization
2009
In this paper, an extensive theoretical and empirical analysis of the recently introduced Particle Swarm Optimization algorithm with Convergence-Related parameters (CR-PSO) is presented. The convergence of the classical PSO algorithm is addressed in detail. The conditions that should be imposed on the parameters of the algorithm in order for it to converge in mean square are derived, and their practical implications are discussed. Based on these implications, a novel parameterization scheme for the PSO is introduced. The resulting optimizer is tested on an extended set of benchmarks and the results are compared to the PSO with time-varying acceleration coefficients (TVAC-PSO) and the standard genetic algorithm (GA).
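For reference, the inertia-weight PSO iteration analyzed in this line of work, together with the stability conditions most commonly cited in the literature, can be sketched as follows. This is a hedged summary, not the paper's own derivation: the exact mean-square region derived above may be stated differently, and the order-2 bound shown here is the one usually attributed to Poli (2009).

```latex
% Standard inertia-weight PSO update for particle i, applied dimension-wise:
\begin{align}
v_i(t+1) &= w\,v_i(t) + c_1 r_1\,\big(p_i - x_i(t)\big) + c_2 r_2\,\big(g - x_i(t)\big),\\
x_i(t+1) &= x_i(t) + v_i(t+1), \qquad r_1, r_2 \sim \mathcal{U}(0,1).
\end{align}
% Commonly cited stability regions (the exact constants depend on the
% stochastic model used in the analysis):
\begin{align}
\text{order-1 (expectation):}\quad & |w| < 1, \qquad 0 < c_1 + c_2 < 4\,(1+w),\\
\text{order-2 (mean square):}\quad & c_1 + c_2 < \frac{24\,(1 - w^2)}{7 - 5w}.
\end{align}
```

With the common defaults $w \approx 0.73$, $c_1 = c_2 \approx 1.5$, both bounds are satisfied, which is consistent with the convergent behavior usually reported for these settings.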
An empirical analysis of convergence related particle swarm optimization
2009
In this paper, an extensive empirical analysis of the recently introduced Particle Swarm Optimization algorithm with Convergence-Related parameters (CR-PSO) is presented. The algorithm is tested on an extended set of benchmarks and the results are compared to the PSO with time-varying acceleration coefficients (TVAC-PSO) and the standard genetic algorithm (GA).
Journal of Global Optimization, 2010
In this paper we consider the evolutionary Particle Swarm Optimization (PSO) algorithm, for the minimization of a computationally costly nonlinear function, in global optimization frameworks. We study a reformulation of the standard iteration of PSO (Clerc and Kennedy in IEEE Trans Evol Comput 6(1), 2002; Kennedy and Eberhart in IEEE Service Center, Piscataway, IV: 1942–1948, 1995) into a linear dynamic system. We carry out our analysis on a generalized PSO iteration, which includes the standard one proposed in the literature. We analyze three issues for the resulting generalized PSO: first, for any particle we give both theoretical and numerical evidence on an efficient choice of the starting point. Then, we study the cases in which either deterministic or uniformly randomly distributed coefficients are considered in the scheme. Finally, some convergence analysis is also provided, along with some necessary conditions to avoid diverging trajectories. The results proved in the paper can be immediately applied to the standard PSO iteration.
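A minimal sketch of the kind of reformulation referred to above, for a single particle in one dimension under the stagnation assumption (fixed personal best $p$ and global best $g$): writing $\varphi_1 = c_1 r_1$, $\varphi_2 = c_2 r_2$ and $\varphi = \varphi_1 + \varphi_2$, the PSO iteration becomes a linear dynamic system. The generalized iteration studied in the paper may differ in its exact form.

```latex
\begin{equation}
\begin{pmatrix} x(t+1) \\ v(t+1) \end{pmatrix}
=
\underbrace{\begin{pmatrix} 1-\varphi & w \\ -\varphi & w \end{pmatrix}}_{A}
\begin{pmatrix} x(t) \\ v(t) \end{pmatrix}
+
\begin{pmatrix} \varphi_1 p + \varphi_2 g \\ \varphi_1 p + \varphi_2 g \end{pmatrix},
\qquad \varphi = \varphi_1 + \varphi_2 .
\end{equation}
```

Convergence questions then reduce to properties of the (random) matrix $A$, for instance keeping its spectral radius below one in a suitable stochastic sense.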
Overview of Particle Swarm Optimization (PSO) on its Applications and Methods
2013
Particle Swarm Optimization (PSO) is a well-known, robust heuristic stochastic optimization technique from the field of Artificial Intelligence (AI). The technique is inspired by collective animal behaviors such as bird flocking, and it is based on swarm intelligence, in which problem solving emerges from social communication. PSO is a useful and valuable technique for maximizing or minimizing a given objective value, and it has been applied in a wide range of fields such as engineering, physics, mathematics, and chemistry. In this paper, following a brief introduction to the PSO algorithm, the method is presented and its important factors and parameters are summarized. The main aim of this paper is to overview and discuss the available literature on the PSO algorithm year by year.
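As a concrete illustration of the method and parameters summarized in such overviews, a minimal global-best PSO loop might look as follows. This is a generic sketch, not the paper's implementation: the `pso` and `sphere` helpers are hypothetical, the inertia weight and acceleration coefficients are common textbook defaults, and the sphere function is only a stand-in objective.

```python
# Minimal global-best PSO sketch on the sphere function, illustrating the usual
# parameters (inertia weight w, acceleration coefficients c1 and c2).
import numpy as np

def sphere(x):
    return np.sum(x ** 2)

def pso(f, dim=10, n_particles=30, iters=200, w=0.7298, c1=1.49618, c2=1.49618,
        bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))       # positions
    v = np.zeros((n_particles, dim))                  # velocities
    pbest = x.copy()                                  # personal bests
    pbest_val = np.array([f(p) for p in pbest])
    g = pbest[np.argmin(pbest_val)].copy()            # global best

    for _ in range(iters):
        r1 = rng.random((n_particles, dim))           # component-wise randomness
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

if __name__ == "__main__":
    best, best_val = pso(sphere)
    print("best value found:", best_val)
```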
A Review on Convergence Analysis of Particle Swarm Optimization
International Journal of Swarm Intelligence Research
Particle swarm optimization (PSO) is one of the most popular nature-inspired metaheuristic algorithms and has been used in many different applications. Convergence analysis is among the key theoretical studies in PSO. This paper discusses major contributions to the convergence analysis of PSO, using a systematic classification for the review. Possible future works are also highlighted, such as investigating, from a theoretical perspective, the performance of PSO variants in dealing with COPs, together with general discussions of experimental results on the merits of the reviewed approaches.
Particle Swarm Convergence: Standardized Analysis and Topological Influence
Lecture Notes in Computer Science, 2014
This paper has two primary aims. Firstly, to empirically verify the use of a specially designed objective function for particle swarm optimization (PSO) convergence analysis. Secondly, to investigate the impact of PSO's social topology on the parameter region needed to ensure convergent particle behavior. At present there exists a large number of theoretical PSO studies; however, all stochastic PSO models contain the stagnation assumption, which implicitly removes the social topology from the model, making this empirical study necessary. It was found that using a specially designed objective function is both a simple and valid method for convergence analysis. It was also found that the derived region needed to ensure convergent particle behavior remains valid regardless of the selected social topology.
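A hedged sketch of the kind of empirical setup described above: mean particle step sizes are tracked under a star (gbest) and a ring (lbest) neighbourhood while the objective returns position-independent random values, so that any shrinking of the steps is driven by the parameters rather than by the landscape. The specially designed objective function used in the paper may be constructed differently, and the `mean_final_step` helper is purely illustrative.

```python
# Track how particle step sizes evolve under different social topologies, using
# a fitness that ignores position so the landscape does not steer the swarm.
import numpy as np

def mean_final_step(topology="star", dim=10, n=20, iters=500,
                    w=0.7298, c1=1.49618, c2=1.49618, seed=3):
    rng = np.random.default_rng(seed)
    f = lambda _: rng.random()                  # position-independent fitness
    x = rng.uniform(-5, 5, (n, dim))
    v = np.zeros((n, dim))
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    steps = []
    for _ in range(iters):
        if topology == "star":                  # everyone follows the global best
            nbest = pbest[np.argmin(pbest_val)][None, :].repeat(n, axis=0)
        else:                                   # ring: neighbours i-1, i, i+1
            idx = np.arange(n)
            trio = np.stack([pbest_val[idx - 1], pbest_val[idx],
                             pbest_val[(idx + 1) % n]])
            choice = np.argmin(trio, axis=0)    # 0 -> left, 1 -> self, 2 -> right
            nbr = np.select([choice == 0, choice == 1, choice == 2],
                            [idx - 1, idx, (idx + 1) % n])
            nbest = pbest[nbr]
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (nbest - x)
        steps.append(np.mean(np.linalg.norm(v, axis=1)))
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
    return steps[-1]

print("final mean step, star:", mean_final_step("star"))
print("final mean step, ring:", mean_final_step("ring"))
```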
The Importance of Component-Wise Stochasticity in Particle Swarm Optimization
Lecture Notes in Computer Science, 2018
This paper illustrates the importance of independent, component-wise stochastic scaling values, from both a theoretical and empirical perspective. It is shown that a swarm employing scalar stochasticity is unable to express every point in the search space if the problem dimensionality is sufficiently large in comparison to the swarm size. The theoretical result is emphasized by an empirical experiment, comparing the performance of a scalar swarm on benchmarks with reachable and unreachable optima. It is shown that a swarm using scalar stochasticity performs significantly worse when the optimum is not in the span of its initial positions. Lastly, it is demonstrated that a scalar swarm performs significantly worse than a swarm with component-wise stochasticity on a large range of benchmark functions, even when the problem dimensionality allows the scalar swarm to reach the optima.
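The span argument can be illustrated numerically: with a single scalar random coefficient per attractor, every velocity update is a linear combination of (pbest − x) and (gbest − x), so a small swarm started with zero velocities never leaves the affine hull of its initial positions in a high-dimensional space, whereas component-wise coefficients do escape it. A minimal sketch follows; the attractors are kept fixed for clarity and the parameter values are common defaults, not the paper's experimental settings.

```python
# Contrast scalar vs component-wise stochasticity: measure the rank of the set
# of displacements from the initial positions over a whole run.
import numpy as np

rng = np.random.default_rng(1)
dim, n_particles, steps = 50, 5, 200
w, c1, c2 = 0.7298, 1.49618, 1.49618

def run(component_wise):
    x = rng.uniform(-1, 1, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    x0 = x.copy()
    pbest = x.copy()                 # attractors held fixed for this illustration
    g = x[0].copy()
    visited = []
    for _ in range(steps):
        shape = (n_particles, dim) if component_wise else (n_particles, 1)
        r1, r2 = rng.random(shape), rng.random(shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        visited.append(x.copy())
    disp = np.vstack([xs - x0 for xs in visited])
    return np.linalg.matrix_rank(disp, tol=1e-8)

print("rank with scalar stochasticity:        ", run(component_wise=False))
print("rank with component-wise stochasticity:", run(component_wise=True))
```

With 5 particles in 50 dimensions, the scalar swarm's displacements stay in a subspace of dimension at most 4, while the component-wise swarm quickly spans (nearly) the full space.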
Particle swarm optimization using dimension selection methods
Applied Mathematics and Computation, 2013
Particle swarm optimization (PSO) has undergone many changes since its introduction in 1995. Being a stochastic algorithm, PSO and its randomness present a formidable challenge for theoretical analysis, and few of the existing PSO improvements have made an effort to eliminate the random coefficients in the PSO updating formula. This paper analyzes the importance of the randomness in PSO, and then gives a PSO variant without randomness to show that traditional PSO cannot work without randomness. Based on our analysis of the randomness, another way of using randomness is proposed in the PSO with random dimension selection (PSORDS) algorithm, which utilizes random dimension selection instead of stochastic coefficients. Finally, deterministic methods for dimension selection are proposed; the resulting PSO with distance-based dimension selection (PSODDS) algorithm is greatly superior to the traditional PSO, and the PSO with heuristic dimension selection (PSOHDS) algorithm is comparable to the traditional PSO algorithm. In addition, applying our dimension selection method to a newly proposed modified particle swarm optimization (MPSO) algorithm also yields improved results. The experimental results demonstrate that our analysis of the randomness is correct and that the use of deterministic dimension selection methods is very helpful.
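A hedged sketch of the general idea of dimension selection: instead of component-wise random coefficients, only a subset of dimensions is updated in each iteration, with deterministic coefficients, so randomness enters through the selection rather than the scaling. The exact PSORDS/PSODDS update rules and selection criteria in the paper may differ; the `dimension_selected_update` helper and its selection size are purely illustrative.

```python
# One dimension-selection style update step: deterministic coefficients,
# randomness only in which dimensions get updated.
import numpy as np

rng = np.random.default_rng(2)

def dimension_selected_update(x, v, pbest, gbest,
                              w=0.7298, c1=1.49618, c2=1.49618, k=None):
    """Apply the attraction terms only on k randomly selected dimensions."""
    dim = x.shape[-1]
    k = k if k is not None else max(1, dim // 2)   # assumed selection size
    mask = np.zeros(dim)
    mask[rng.choice(dim, size=k, replace=False)] = 1.0
    # No r1, r2 scaling; the only randomness is in 'mask'.
    v_new = w * v + mask * (c1 * (pbest - x) + c2 * (gbest - x))
    return x + v_new, v_new

x = rng.uniform(-1, 1, 10)
v = np.zeros(10)
pbest, gbest = x.copy(), rng.uniform(-1, 1, 10)
x, v = dimension_selected_update(x, v, pbest, gbest)
print(x)
```

A deterministic variant in the spirit of PSODDS would replace the random choice in `mask` with a rule based on, for example, per-dimension distances to the attractors.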
Estimation of parameters in particle swarm optimization
Journal of Mathematical and Computational Science, 2021
Nowadays, heuristic techniques are becoming some of the most useful tools for solving optimization problems. One of these techniques is the Particle Swarm Optimization (PSO) algorithm. From the numerical analysis perspective this is a successful method, but many issues are still to be considered regarding the convergence of the algorithm. In this paper we deal with the problem of evaluating the parameters of the algorithm that assure its convergence. In previous work we presented some restrictions on the parameters of the perturbed dynamical system that models the PSO algorithm; these restrictions are necessary to guarantee the stability of the system. In this paper we present further restrictions needed to ensure the stability of the system and to advance the research on the convergence of PSO.
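A minimal sketch of the kind of parameter restriction discussed here, under the usual stagnation model: the expected one-dimensional particle dynamics form a linear system z(t+1) = A z(t) + b with z = (x, v), and a given (w, c1, c2) passes the order-1 stability check when the spectral radius of A is below one. The perturbed dynamical system and the additional restrictions in the paper may be formulated differently; the `is_order1_stable` helper is hypothetical.

```python
# Check order-1 (expected-value) stability of a PSO parameter choice by the
# spectral radius of the expected update matrix under stagnation.
import numpy as np

def expected_update_matrix(w, c1, c2):
    phi = (c1 + c2) / 2.0          # E[c1*r1 + c2*r2] with r1, r2 ~ U(0,1)
    return np.array([[1.0 - phi, w],
                     [-phi,      w]])

def is_order1_stable(w, c1, c2):
    A = expected_update_matrix(w, c1, c2)
    return np.max(np.abs(np.linalg.eigvals(A))) < 1.0

print(is_order1_stable(0.7298, 1.49618, 1.49618))  # common defaults: stable
print(is_order1_stable(1.0, 2.0, 2.0))             # not order-1 stable (radius >= 1)
```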