An Improvement of Particle Swarm Optimization with A Neighborhood Search Algorithm
Related papers
2014
A new local search technique is proposed and used to improve the performance of particle swarm optimization algorithms by addressing the problem of premature convergence. In the proposed technique, a potential particle position in the search space is collectively constructed by a number of randomly selected particles in the swarm. The number of selections varies with the dimension of the optimization problem, and each selected particle donates the value at a randomly chosen dimension of its personal best. After the potential particle position is constructed, a local search is performed around its neighborhood and the result is compared with the current swarm global best position; the global best particle position is replaced if the constructed position is better, otherwise no replacement is made. Using well-studied benchmark problems of low and high dimensions, numerical simulations were used to validate the performance of the improved algorithms. Comparisons were made with four PSO variants, two of which implement different local search techniques while the other two do not. Results show that the improved algorithms obtain better-quality solutions while demonstrating better convergence speed and precision, stability, robustness, and global-local search ability than the competing variants.
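The following is a minimal sketch of the local search step as described above: a candidate position is assembled dimension by dimension from randomly chosen particles' personal bests, a few neighboring points are probed, and the global best is replaced only if the candidate improves on it. Function and parameter names (`neighborhood_search`, `step`, `trials`) are illustrative, not taken from the paper.

```python
import numpy as np

def neighborhood_search(pbest, gbest, objective, bounds, step=0.1, trials=5):
    """Sketch of the described local search around a collectively built candidate."""
    n_particles, dim = pbest.shape
    lo, hi = bounds

    # For each dimension, a randomly selected particle donates the value at a
    # randomly chosen dimension of its personal best.
    candidate = np.empty(dim)
    for d in range(dim):
        donor = np.random.randint(n_particles)
        donated_dim = np.random.randint(dim)
        candidate[d] = pbest[donor, donated_dim]

    # Probe the candidate's neighborhood and keep the best point found.
    best_point, best_val = candidate, objective(candidate)
    for _ in range(trials):
        neighbour = np.clip(candidate + step * np.random.uniform(-1, 1, dim), lo, hi)
        val = objective(neighbour)
        if val < best_val:
            best_point, best_val = neighbour, val

    # Replace the swarm global best only if the constructed point is better.
    return (best_point, best_val) if best_val < objective(gbest) else (gbest, objective(gbest))
```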
Global and Local Neighborhood Based Particle Swarm Optimization
Harmony Search and Nature Inspired Optimization Algorithms, 2018
The particle swarm optimization (PSO) algorithm is one of the most popular and simple-to-implement swarm-intelligence-based algorithms. PSO often outperforms other optimization algorithms, but premature convergence to local optima and stagnation in later generations are known pitfalls. These problems stem from an imbalance between the diversification and convergence abilities of the population during the solution search process. In this paper, a novel position update process is developed and incorporated into PSO by adopting the concept of neighborhood topologies for each particle. Statistical analysis over 15 complex benchmark functions shows that the proposed PSO variant performs much better than the standard PSO (PSO 2011) algorithm while remaining cost-effective in terms of function evaluations.
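As an illustration of neighborhood-topology-based updates in general (not this paper's specific update rule), the sketch below computes a ring-topology local best for each particle and mixes personal, local, and global attraction terms in the velocity; all coefficient values are conventional placeholders.

```python
import numpy as np

def local_best(pbest, pbest_val):
    """Ring topology: each particle's neighborhood is itself and its two adjacent indices."""
    n = len(pbest_val)
    lbest = np.empty_like(pbest)
    for i in range(n):
        neigh = [(i - 1) % n, i, (i + 1) % n]
        lbest[i] = pbest[neigh[np.argmin(pbest_val[neigh])]]
    return lbest

def velocity(v, x, pbest, lbest, gbest, w=0.72, c1=1.2, c2=1.2, c3=1.2):
    """Velocity update combining personal, local-neighborhood, and global attraction."""
    r1, r2, r3 = (np.random.rand(*x.shape) for _ in range(3))
    return w * v + c1 * r1 * (pbest - x) + c2 * r2 * (lbest - x) + c3 * r3 * (gbest - x)
```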
A New Particle Swarm Optimization Technique
In this paper, a new particle swarm optimization method (NPSO) is proposed. It is compared with the regular particle swarm optimizer (PSO) invented by Kennedy and Eberhart in 1995 on four different benchmark functions. PSO is motivated by the social behavior of organisms, such as bird flocking and fish schooling. Each particle studies its own previous best solution to the optimization problem and its group's previous best, and then adjusts its position (solution) accordingly; the optimal value is found by repeating this process. In the NPSO proposed here, each particle instead adjusts its position according to its own previous worst solution and its group's previous worst to find the optimal value. The strategy is to move away from a particle's previous worst solution and its group's previous worst, using formulae similar to those of the regular PSO. In all test cases, simulations show that NPSO finds better solutions than PSO.
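A minimal sketch of a "move away from the worst" velocity update, assuming the same structure as the canonical PSO formula with the best positions replaced by the worst ones; the exact NPSO formulation and the coefficient names (w, c1, c2) are assumptions, not taken from the paper.

```python
import numpy as np

def npso_velocity(v, x, pworst, gworst, w=0.7, c1=1.5, c2=1.5):
    """Repulsion-style update: accelerate away from personal and group worst positions."""
    r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
    return w * v + c1 * r1 * (x - pworst) + c2 * r2 * (x - gworst)

def step(x, v, pworst, gworst):
    v_new = npso_velocity(v, x, pworst, gworst)
    return x + v_new, v_new
```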
A new optimizer using particle swarm theory
Proceedings of the Sixth International Symposium on Micro Machine and Human Science (MHS '95), 1995
The optimization of nonlinear functions using particle swarm methodology is described. Implementations of two paradigms are discussed and compared, including a recently developed locally oriented paradigm. Benchmark testing of both paradigms is described, and applications, including neural network training and robot task learning, are proposed.
Mathematical Modelling and Applications of Particle Swarm Optimization
Optimization is a mathematical technique concerned with finding the maxima or minima of functions in some feasible region. There is no business or industry that is not involved in solving optimization problems, and a variety of optimization techniques compete for the best solution. Particle Swarm Optimization (PSO) is a relatively new, modern, and powerful optimization method that has been empirically shown to perform well on many such problems, and it is widely used to find the global optimum in a complex search space. This thesis aims to provide a review and discussion of the most established results on the PSO algorithm, as well as to expose the most active research topics that can motivate future work and help practitioners obtain better results with little effort. It introduces the theoretical idea and a detailed explanation of the PSO algorithm, its advantages and disadvantages, and the effects and judicious selection of its various parameters. Moreover, it discusses a study of boundary conditions with the invisible-wall technique, controlling the convergence behavior of PSO, discrete-valued problems, multi-objective PSO, and applications of PSO. Finally, it presents several improved versions as well as recent progress in the development of PSO, and future research issues are also given.
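For reference, a minimal sketch of the canonical global-best PSO update that such reviews typically analyze; the parameter values shown (w, c1, c2) are common defaults from the literature, not recommendations from this thesis, and the function name is illustrative.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, iters=200, w=0.72, c1=1.49, c2=1.49):
    """Canonical global-best PSO; bounds is a (lower, upper) pair of arrays."""
    lo, hi = bounds
    dim = len(lo)
    x = np.random.uniform(lo, hi, (n_particles, dim))   # positions
    v = np.zeros_like(x)                                 # velocities
    pbest = x.copy()
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()                 # global best

    for _ in range(iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: minimise the sphere function in 5 dimensions.
# best_x, best_f = pso_minimize(lambda z: np.sum(z**2), (np.full(5, -5.0), np.full(5, 5.0)))
```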
Particle Swarm Optimization
Proceedings of the IEEE International Conference on Neural Networks (ICNN '95), 1995
A concept for the optimization of nonlinear functions using particle swarm methodology is introduced. The evolution of several paradigms is outlined, and an implementation of one of the paradigms is discussed. Benchmark testing of the paradigm is described, and applications, including nonlinear function optimization and neural network training, are proposed. The relationships between particle swarm optimization and both artificial life and genetic algorithms are described.
Particle Swarm Optimization for Solving Nonlinear Programming Problems
We begin with a brief introduction to the basic concepts of optimization and global optimization, evolutionary computation, and swarm intelligence. The necessity of solving optimization problems is outlined and various types of optimization problems are discussed. A rough classification of established optimization algorithms is provided, followed by Particle Swarm Optimization (PSO) and its different variants. Changes to the velocity component using velocity clamping techniques based on the bisection method and the golden section search method are discussed, as are the advantages of the previously introduced Self-Accelerated Smart Particle Swarm Optimization (SAS-PSO) technique. Finally, the numerical values of the objective function, which constitute the optimal solution to the problem, are calculated. SAS-PSO is compared with the standard particle swarm optimization technique; unlike standard PSO algorithms, SAS-PSO does not require additional parameters such as acceleration coefficients and inertia weight.
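A minimal illustration of plain velocity clamping (not the bisection or golden-section variants discussed in the paper): each velocity component is limited to [-v_max, v_max], where v_max is taken as a fraction of the search range; the fraction `k` is a placeholder value.

```python
import numpy as np

def clamp_velocity(v, lo, hi, k=0.5):
    """Limit each velocity component to a fraction k of the search range."""
    v_max = k * (np.asarray(hi) - np.asarray(lo))
    return np.clip(v, -v_max, v_max)
```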
(2006) Particle Swarm Optimization: Development of a General-Purpose Optimizer
For problems where the quality of any solution can be quantified in a numerical value, optimization is the process of finding the permitted combination of variables in the problem that optimizes that value. Traditional methods present a very restrictive range of applications, mainly limited by the features of the function to be optimized and of the constraint functions. In contrast, evolutionary algorithms present almost no restriction to the features of these functions, although the most appropriate constraint-handling technique is still an open question. The particle swarm optimization (PSO) method is sometimes viewed as another evolutionary algorithm because of their many similarities, despite not being inspired by the same metaphor. Namely, they evolve a population of individuals taking into consideration previous experiences and using stochastic operators to introduce new responses. The advantages of evolutionary algorithms with respect to traditional methods have been greatly discussed in the literature for decades. While all such advantages are valid when comparing the PSO paradigm to traditional methods, its main advantages with respect to evolutionary algorithms consist of its noticeably lower computational cost and easier implementation. In fact, the plain version can be programmed in a few lines of code, involving no operator design and few parameters to be tuned. This paper deals with three important aspects of the method: the influence of the parameters’ tuning on the behaviour of the system; the design of stopping criteria so that the reliability of the solution found can be somehow estimated and computational cost can be saved; and the development of appropriate techniques to handle constraints, given that the original method is designed for unconstrained optimization problems.
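As an illustration of the kind of stopping criterion discussed (not the specific criteria proposed in this work), the sketch below stops the run when the best objective value has not improved by more than a tolerance over a sliding window of iterations, saving function evaluations once the swarm has stagnated; `window` and `tol` are placeholder parameters.

```python
def should_stop(best_history, window=50, tol=1e-8):
    """best_history: best-so-far objective values, one per iteration (minimisation)."""
    if len(best_history) <= window:
        return False
    return best_history[-window - 1] - best_history[-1] < tol
```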
Feedback loop mechanisms based particle swarm optimization with neighborhood topology
2011 IEEE Congress of Evolutionary Computation (CEC), 2011
Particle swarm optimization (PSO) is an optimization approach that has been widely used for a variety of optimization problems in both research and industrial domains. Owing to the potential of PSO, several variants of the original algorithm have been developed to improve its efficiency and robustness. This paper proposes two variants of the particle swarm optimization algorithm, called PidSO and N-PidSO. First, the PidSO algorithm is a classic feedback-control-theory-based PSO algorithm, in which a classic feedback controller is used to regulate the performance and stability of the particles. Second, the N-PidSO algorithm is a topological-neighborhood-based PidSO, which offers better search efficiency and convergence stability than PidSO on multimodal optimization problems. The PidSO and N-PidSO methods combine a faster response from the proportional term, elimination of the steady-state error in a particle's movement from the integral term, and improved controller-process stability from the derivative term; as a result, they achieve faster searching without steady-state error. Empirical results show that the proposed algorithms achieve high performance on both unimodal and multimodal optimization problems: PidSO converges in a stepwise manner, with quicker convergence on unimodal problems than other existing algorithms, and N-PidSO performs better on multimodal problems than PidSO and five other PSO variants.
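A hypothetical illustration of how proportional, integral, and derivative terms could drive a particle, assuming the "error" is the particle's displacement from the global best; the exact PidSO formulation and the gains (Kp, Ki, Kd) are not taken from the paper.

```python
import numpy as np

def pid_update(x, gbest, integral, prev_error, Kp=0.8, Ki=0.05, Kd=0.2, dt=1.0):
    error = gbest - x                        # proportional term: distance to the target
    integral = integral + error * dt         # integral term: accumulated error
    derivative = (error - prev_error) / dt   # derivative term: rate of change of error
    v = Kp * error + Ki * integral + Kd * derivative
    return x + v, integral, error            # new position plus updated controller state
```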