Constricted Particle Swarm Optimization based Algorithm for Global Optimization
Related papers
2014
A new local search technique is proposed and used to improve the performance of particle swarm optimization algorithms by addressing the problem of premature convergence. In the proposed local search technique, a potential particle position in the solution search space is collectively constructed by a number of randomly selected particles in the swarm. The number of selections varies with the dimension of the optimization problem, and each selected particle donates the value at a randomly selected dimension of its personal best. After the potential particle position is constructed, a local search is performed around its neighbourhood and the result is compared with the current swarm global best position; the global best particle position is replaced only if the constructed position is found to be better, otherwise no replacement is made. Using well-studied benchmark problems of low and high dimension, numerical simulations were used to validate the performance of the improved algorithms. Comparisons were made with four different PSO variants, two of which implement different local search techniques while the other two do not. Results show that the improved algorithms obtain better-quality solutions while demonstrating better convergence velocity and precision, stability, robustness, and global-local search ability than the competing variants.
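The construction step described in this abstract can be sketched as follows. This is an illustrative reading, not the paper's actual code: for each coordinate of the candidate, a randomly chosen particle donates the value at a randomly chosen dimension of its personal best, and the global best is replaced only if the candidate evaluates better. The function names and the single-evaluation "local search" are simplifying assumptions.

```python
import random

def construct_candidate(pbests, dim, rng=random):
    """For each coordinate, a randomly chosen particle donates the value
    at a randomly chosen dimension of its personal best."""
    candidate = []
    for _ in range(dim):
        donor = rng.choice(pbests)   # randomly selected particle
        d = rng.randrange(dim)       # its randomly selected dimension
        candidate.append(donor[d])
    return candidate

def local_search_replace(gbest, gbest_val, pbests, f, rng=random):
    """Replace the swarm's global best only if the constructed candidate
    evaluates better; otherwise keep gbest unchanged."""
    cand = construct_candidate(pbests, len(gbest), rng)
    cand_val = f(cand)
    if cand_val < gbest_val:
        return cand, cand_val
    return gbest, gbest_val

# toy usage: sphere function, minimization
sphere = lambda x: sum(v * v for v in x)
pbests = [[0.5, -0.2], [0.1, 0.3], [-0.4, 0.0]]
gbest, gval = local_search_replace([1.0, 1.0], sphere([1.0, 1.0]), pbests, sphere)
```

Because the candidate is accepted only when it improves on the incumbent, the global best can never get worse, which is why the technique is safe to bolt onto any PSO variant.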
Improved particle swarm algorithms for global optimization
Applied Mathematics and Computation, 2008
The particle swarm optimization algorithm has recently gained much attention in the global optimization research community, and as a result a number of variants of the algorithm have been suggested. In this paper, we study the efficiency and robustness of several particle swarm optimization algorithms and identify the cause of their slow convergence. We then propose modifications to the position update rule of the particle swarm optimization algorithm in order to make convergence faster. These modifications result in two new versions of the particle swarm optimization algorithm. A numerical study is carried out using a set of 54 test problems, some of which are inspired by practical applications. Results show that the new algorithms are much more robust and efficient than some existing particle swarm optimization algorithms. A comparison of the new algorithms with the differential evolution algorithm is also made.
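Since the collection's title paper concerns constricted PSO, it is worth recalling the baseline that position-update modifications like these start from: the canonical Clerc-Kennedy constriction-factor update. The sketch below shows that standard rule, not the modified rules of this particular paper, whose details the abstract does not give.

```python
import math
import random

def constriction_coefficient(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction factor chi; requires phi = c1 + c2 > 4.
    For the usual c1 = c2 = 2.05, chi is approximately 0.7298."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

def update(v, x, pbest, gbest, c1=2.05, c2=2.05, rng=random):
    """One constricted velocity-and-position update for a single particle."""
    chi = constriction_coefficient(c1, c2)
    new_v, new_x = [], []
    for j in range(len(x)):
        vj = chi * (v[j]
                    + c1 * rng.random() * (pbest[j] - x[j])
                    + c2 * rng.random() * (gbest[j] - x[j]))
        new_v.append(vj)
        new_x.append(x[j] + vj)
    return new_v, new_x
```

The constriction factor damps the velocity multiplicatively, which is what gives the constricted variant its convergence guarantee without an explicit velocity clamp.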
Particle swarm optimization (PSO) is a metaheuristic optimization algorithm that has been used to solve complex optimization problems. Interior point methods (IPMs) are now believed to be the most robust numerical optimization algorithms for solving large-scale nonlinear optimization problems. To overcome the shortcomings of PSO, we propose the Primal-Dual Asynchronous Particle Swarm Optimization (pdAPSO) algorithm. The primal-dual component provides a better balance between exploration and exploitation, preventing the particles from converging prematurely or being easily trapped in local minima, and so producing better results. We compared the performance of pdAPSO with 9 state-of-the-art PSO algorithms using 13 benchmark functions. Our proposed algorithm has very high mean dependability, and pdAPSO has a better convergence speed than the other 9 algorithms. For instance, on the Rosenbrock function, mean FEs of 8938, 6786, 10,080, 9607, 11,680, 9287, 23,940, 6269 and 6198 are required by PSO-LDIW, CLPSO, pPSA, PSOrank, OLPSO-G, ELPSO, APSO-VI, DNSPSO and MSLPSO respectively to reach the global optimum, whereas pdAPSO uses only 2124, showing that it has the fastest convergence speed. In summary, pdPSO and pdAPSO use the lowest number of FEs to arrive at acceptable solutions for all 13 benchmark functions.
Guaranteed Convergence Particle Swarm Optimization using Personal Best
International Journal of Computer Applications, 2013
Particle Swarm Optimization (PSO) is a well-known technique for population-based global search, but it is prone to premature convergence before finding the true global minimiser. In this paper we introduce a technique that adds new parameters and a new velocity update formula using the personal best values discovered by the swarm particles, and that decreases the diameter of the search space, which prevents premature convergence before finding the true global minimiser. The resulting particle swarm optimization algorithm (PGCPSO) provides a mechanism that is more efficient at finding the true global minimiser, as demonstrated in tests across the benchmark suite.
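The abstract does not spell out the new update formula, but the mechanism it builds on is the classic guaranteed-convergence PSO (GCPSO) of van den Bergh and Engelbrecht: the best particle resamples inside a box around the best position, and the box's radius shrinks after repeated failures. A minimal sketch of that classic mechanism (not PGCPSO itself; function names and thresholds are illustrative):

```python
import random

def gcpso_best_update(x, v, gbest, w, rho, rng=random):
    """Velocity update for the global-best particle in classic GCPSO:
    instead of the standard rule, resample inside a box of radius rho
    around gbest so the best particle keeps searching rather than stalling."""
    return [-x[j] + gbest[j] + w * v[j] + rho * (1.0 - 2.0 * rng.random())
            for j in range(len(x))]

def adapt_rho(rho, successes, failures, sc=15, fc=5):
    """Expand the sampling radius after repeated successes and shrink it
    after repeated failures; shrinking the effective search diameter is
    what counters stagnation around the incumbent best."""
    if successes > sc:
        return 2.0 * rho
    if failures > fc:
        return 0.5 * rho
    return rho
```

Adding x[j] + v[j] with this velocity places the particle at gbest plus a random perturbation of size at most rho, which is the resampling behaviour described above.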
Globally Convergent Particle Swarm Optimization via Branch-and-Bound
Particle swarm optimization (PSO) is a recently developed optimization method that has attracted the interest of researchers in various areas. PSO has been shown to be effective in solving a variety of complex optimization problems. With properly chosen parameters, PSO can converge to local optima. However, conventional PSO does not have global convergence, and empirical evidence indicates that the PSO algorithm may fail to reach global optimal solutions for complex problems. We propose to combine the branch-and-bound framework with the particle swarm optimization algorithm. With this integrated approach, convergence to global optimal solutions is theoretically guaranteed. We have developed and implemented the BB-PSO algorithm, which combines the efficiency of PSO with the effectiveness of the branch-and-bound method. The BB-PSO method was tested on a set of standard benchmark optimization problems. Experimental results confirm that BB-PSO is effective in finding global optimal solutions to problems that may cause difficulties for the PSO algorithm.
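The way branch-and-bound supplies the global-convergence guarantee can be sketched generically: bound each sub-box from below (here via a Lipschitz estimate, an assumption on our part; BB-PSO's actual bounding scheme may differ), bound it from above by a heuristic search inside the box (random sampling below stands in for a PSO run), prune boxes that cannot improve the incumbent, and split the rest. Everything here is an illustrative skeleton, not the paper's algorithm.

```python
import heapq
import math
import random

def bb_minimize(f, box, lipschitz, tol=1e-3, samples=30, rng=random):
    """Branch-and-bound skeleton over axis-aligned boxes.
    box is a list of (lo, hi) pairs; lipschitz is a valid L2 Lipschitz
    constant for f on the box. Returns (value, point) within tol of the
    global minimum value."""
    def upper(b):   # cheap surrogate for a PSO run restricted to box b
        pts = [[rng.uniform(lo, hi) for lo, hi in b] for _ in range(samples)]
        return min((f(p), p) for p in pts)
    def lower(b):   # Lipschitz lower bound from the box centre
        centre = [(lo + hi) / 2 for lo, hi in b]
        radius = math.sqrt(sum(((hi - lo) / 2) ** 2 for lo, hi in b))
        return f(centre) - lipschitz * radius
    best_val, best_x = upper(box)
    heap = [(lower(box), box)]
    while heap:
        lb, b = heapq.heappop(heap)
        if lb >= best_val - tol:
            continue                     # prune: box cannot beat incumbent
        ub, x = upper(b)
        if ub < best_val:
            best_val, best_x = ub, x
        j = max(range(len(b)), key=lambda k: b[k][1] - b[k][0])
        mid = (b[j][0] + b[j][1]) / 2
        for half in ([(b[j][0], mid)], [(mid, b[j][1])]):
            child = b[:j] + half + b[j + 1:]
            heapq.heappush(heap, (lower(child), child))
    return best_val, best_x
```

The guarantee comes from the lower bound being valid: the box containing the true optimum is only pruned once the incumbent is already within tol of the global minimum, regardless of how well the heuristic upper-bound search performs.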
Hierarchical dynamic neighborhood based Particle Swarm Optimization for global optimization
IEEE Congress on Evolutionary Computation (CEC), 2011
Particle Swarm Optimization (PSO) is arguably one of the most popular nature-inspired algorithms for real parameter optimization at present. In this article, we introduce a new variant of PSO referred to as Hierarchical D-LPSO (Dynamic Local Neighborhood based Particle Swarm Optimization). In this new variant of PSO the particles are arranged in a dynamic hierarchy. Within each hierarchy the particles search for better solutions using dynamically varying sub-swarms, i.e. the sub-swarms are regrouped frequently and information is exchanged among them. Whether a particle moves up or down the hierarchy depends on the quality of its so-far best-found result, so the swarm is largely influenced by the good particles that move up in the hierarchy. The performance of Hierarchical D-LPSO is tested on the set of 25 numerical benchmark functions taken from the competition and special session on real parameter optimization held under the IEEE Congress on Evolutionary Computation (CEC) 2005. The results have been compared to those obtained with several best-known variants of PSO as well as several significant existing evolutionary algorithms.
Particle Swarm Optimization with New Initializing Technique to Solve Global Optimization Problems
Intelligent Automation & Soft Computing
Particle Swarm Optimization (PSO) is a well-known, extensively utilized algorithm for distinct types of optimization problems. In meta-heuristic algorithms, population initialization plays a vital role: it drives the convergence rate and diversity, and is remarkably beneficial for finding an efficient and effective optimal solution. In this study, we propose an enhanced variant of the PSO algorithm that uses a quasi-random sequence (QRS) for population initialization to improve convergence rate and diversity. Specifically, this study presents a new approach to population initialization that incorporates the torus sequence into PSO, called TO-PSO. The torus sequence belongs to the family of low-discrepancy sequences and is utilized in the proposed variant of PSO for swarm initialization. The proposed population initialization strategy was evaluated on the fifteen most famous unimodal and multimodal benchmark test problems. The outcomes of the proposed technique show outstanding performance compared with traditional PSO and with PSO initialized with the Sobol sequence (SO-PSO) and the Halton sequence (HO-PSO). The exhaustive experimental results show that the proposed algorithm is remarkably superior to the other classical approaches, and demonstrate how strongly the proposed approach influences the value of the cost function, the convergence rate, and diversity.
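The torus (Kronecker) sequence mentioned here has a standard construction: the i-th point's j-th coordinate is the fractional part of i times the square root of the j-th prime, which fills the unit cube far more evenly than pseudo-random draws. The sketch below uses that standard definition and then maps the points into the search box; the paper's exact construction may differ in detail.

```python
import math

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def torus_point(i, dim):
    """i-th point of the torus (Kronecker) low-discrepancy sequence:
    frac(i * sqrt(p_j)) in each dimension j, with p_j the j-th prime.
    Because sqrt(p_j) is irrational, the points equidistribute in [0,1)."""
    return [math.modf(i * math.sqrt(PRIMES[j]))[0] for j in range(dim)]

def init_swarm(n, dim, lower, upper):
    """Map the first n torus points from [0,1)^dim into the search box
    given by per-dimension lower/upper bounds."""
    return [[lower[j] + torus_point(i, dim)[j] * (upper[j] - lower[j])
             for j in range(dim)]
            for i in range(1, n + 1)]
```

Swapping this in for uniform random initialization is the whole intervention: the velocity and position update rules of PSO are left untouched.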
Global and Local Neighborhood Based Particle Swarm Optimization
Harmony Search and Nature Inspired Optimization Algorithms, 2018
The particle swarm optimization (PSO) algorithm is one of the popular and simple-to-implement swarm intelligence based algorithms. To some extent, PSO dominates other optimization algorithms, but premature convergence to local optima and stagnation in later generations are among its pitfalls. The reason for these problems is an imbalance between the diversification and convergence abilities of the population during the solution search process. In this paper, a novel position update process is developed and incorporated into PSO by adopting the concept of neighborhood topologies for each particle. Statistical analysis over 15 complex benchmark functions shows that the performance of the proposed PSO variant is much better than that of the standard PSO (PSO 2011) algorithm while maintaining cost-effectiveness in terms of function evaluations.
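The generic idea behind neighborhood topologies in PSO is that each particle is attracted to the best position within its local neighborhood rather than the single swarm-wide best, which slows information flow and preserves diversity. A minimal sketch of that generic mechanism with a ring topology follows; the paper's actual topology and position update process are not given in the abstract, so this illustrates only the underlying concept.

```python
def ring_neighbours(i, n, k=1):
    """Indices of particle i and its k nearest neighbours on each side
    in a ring topology over a swarm of size n."""
    return [(i + off) % n for off in range(-k, k + 1)]

def local_best(i, positions, fitness, k=1):
    """Neighbourhood best used in place of the global best in the
    velocity update (minimization: lower fitness is better)."""
    neigh = ring_neighbours(i, len(positions), k)
    j = min(neigh, key=lambda idx: fitness[idx])
    return positions[j]
```

With k equal to the swarm size the ring degenerates to the usual global-best PSO, so the neighbourhood size is effectively a dial between diversification and convergence.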
Particle Swarm Optimisation: A Historical Review Up to the Current Developments
Entropy, 2020
The Particle Swarm Optimisation (PSO) algorithm was inspired by the social and biological behaviour of bird flocks searching for food sources. In this nature-based algorithm, individuals are referred to as particles and fly through the search space seeking the global best position that minimises (or maximises) a given problem. Today, PSO is one of the most well-known and widely used swarm intelligence algorithms and metaheuristic techniques, because of its simplicity and its applicability to a wide range of problems. However, in-depth studies of the algorithm have led to the detection and identification of a number of problems with it, especially convergence and performance issues. Consequently, a myriad of variants, enhancements and extensions of the original version of the algorithm, developed and introduced in the mid-1990s, have been proposed, especially in the last two decades. In this article, a systematic literature review about those variants and improvements...