Comparative Study of Parallel Variants for a Particle Swarm Optimization
Related papers
Evaluation of Parallel Particle Swarm Optimization Algorithms within the CUDA(TM) Architecture
Information Sciences, 2010
Particle Swarm Optimization (PSO), like other population-based meta-heuristics, is intrinsically well suited for parallel implementation on Graphics Processing Units (GPUs), which are, in fact, massively parallel processing architectures. In this paper we discuss possible approaches to parallelizing PSO on graphics hardware by means of the Compute Unified Device Architecture (CUDA), a GPU programming environment by nVIDIA which supports its latest cards. In particular, two different ways of exploiting GPU parallelism are explored and evaluated. The execution speed of the parallel algorithms is compared with a standard sequential implementation of PSO (SPSO), as well as with recently published results of other parallel implementations, on functions which are typically used as benchmarks for PSO. An in-depth study of the computational efficiency of our parallel algorithms is carried out by analyzing speed-up and scale-up with respect to sequential SPSO. Some results on the optimization effectiveness of the parallel implementations, in cases where the parallel versions introduce some possibly significant difference with respect to SPSO, are also reported.
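As a concrete illustration of the fine-grained approach such papers describe, the sketch below maps one CUDA thread to each particle/dimension component and applies the standard PSO velocity and position update. It is a minimal sketch written for this page, not the authors' code; the kernel name, the constant coefficients (W, C1, C2) and the pre-drawn "random" numbers are all illustrative assumptions.

```cuda
// Minimal sketch (not the paper's code): one CUDA thread per particle/dimension
// component performs the standard PSO velocity and position update.
#include <cstdio>
#include <cuda_runtime.h>

#define N_PARTICLES 64
#define DIM 2
#define W  0.7298f
#define C1 1.4962f
#define C2 1.4962f

__global__ void pso_update_kernel(float *pos, float *vel,
                                  const float *pbest, const float *gbest,
                                  const float *rnd1, const float *rnd2)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // index over particle*dim
    if (i >= N_PARTICLES * DIM) return;
    int d = i % DIM;                                  // dimension of this component
    vel[i] = W * vel[i]
           + C1 * rnd1[i] * (pbest[i] - pos[i])       // cognitive pull
           + C2 * rnd2[i] * (gbest[d] - pos[i]);      // social pull
    pos[i] += vel[i];
}

int main()
{
    const int n = N_PARTICLES * DIM;
    float h_pos[n], h_vel[n], h_pbest[n], h_gbest[DIM], h_r1[n], h_r2[n];
    for (int i = 0; i < n; ++i) {
        h_pos[i] = 1.0f; h_vel[i] = 0.0f; h_pbest[i] = 0.5f;
        h_r1[i] = 0.3f;  h_r2[i] = 0.6f;              // fixed "random" numbers for the sketch
    }
    for (int d = 0; d < DIM; ++d) h_gbest[d] = 0.0f;

    float *d_pos, *d_vel, *d_pbest, *d_gbest, *d_r1, *d_r2;
    cudaMalloc(&d_pos, n * sizeof(float));   cudaMalloc(&d_vel, n * sizeof(float));
    cudaMalloc(&d_pbest, n * sizeof(float)); cudaMalloc(&d_gbest, DIM * sizeof(float));
    cudaMalloc(&d_r1, n * sizeof(float));    cudaMalloc(&d_r2, n * sizeof(float));
    cudaMemcpy(d_pos, h_pos, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_vel, h_vel, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_pbest, h_pbest, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_gbest, h_gbest, DIM * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_r1, h_r1, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_r2, h_r2, n * sizeof(float), cudaMemcpyHostToDevice);

    pso_update_kernel<<<(n + 127) / 128, 128>>>(d_pos, d_vel, d_pbest, d_gbest, d_r1, d_r2);
    cudaMemcpy(h_pos, d_pos, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("first updated coordinate: %f\n", h_pos[0]);

    cudaFree(d_pos); cudaFree(d_vel); cudaFree(d_pbest);
    cudaFree(d_gbest); cudaFree(d_r1); cudaFree(d_r2);
    return 0;
}
```

In a full implementation the random numbers would be regenerated on the GPU at every iteration, and fitness evaluation and best-position updates would run as separate kernels.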
Distributed Particle Swarm Optimization using Clusters and GPGPU
This paper presents two coarse-grained parallelization strategies for the Particle Swarm Optimization algorithm using two low-cost parallel architectures: a computer cluster and a General Purpose Graphics Processing Unit (GPGPU). The modifications made to each strategy to better suit the architectures' main characteristics are shown, along with success rates, speedup, and efficiency for the optimization of the Rastrigin and Ackley functions on a 30-dimensional search space.
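Both architectures ultimately distribute the same coarse-grained unit of work: evaluating the benchmark function for every particle. The sketch below shows what that looks like on a GPU for the 30-dimensional Rastrigin function mentioned in the abstract, with one thread per particle; it is an illustrative sketch, not the paper's code, and all names are assumptions.

```cuda
// Minimal sketch (not the paper's code): one thread per particle evaluates the
// 30-dimensional Rastrigin function, the coarse-grained work unit distributed
// by both the cluster and the GPGPU variants.
#include <cstdio>
#include <cmath>
#include <cuda_runtime.h>

#define DIM 30
#define N_PARTICLES 128

__global__ void rastrigin_kernel(const float *pos, float *fitness)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;   // one thread per particle
    if (p >= N_PARTICLES) return;
    float sum = 10.0f * DIM;
    for (int d = 0; d < DIM; ++d) {
        float x = pos[p * DIM + d];
        sum += x * x - 10.0f * cosf(2.0f * 3.14159265f * x);
    }
    fitness[p] = sum;                                 // 0 at the global optimum x = 0
}

int main()
{
    const int n = N_PARTICLES * DIM;
    float *h_pos = new float[n], *h_fit = new float[N_PARTICLES];
    for (int i = 0; i < n; ++i) h_pos[i] = 0.0f;      // put every particle at the optimum

    float *d_pos, *d_fit;
    cudaMalloc(&d_pos, n * sizeof(float));
    cudaMalloc(&d_fit, N_PARTICLES * sizeof(float));
    cudaMemcpy(d_pos, h_pos, n * sizeof(float), cudaMemcpyHostToDevice);

    rastrigin_kernel<<<(N_PARTICLES + 127) / 128, 128>>>(d_pos, d_fit);
    cudaMemcpy(h_fit, d_fit, N_PARTICLES * sizeof(float), cudaMemcpyDeviceToHost);
    printf("fitness of particle 0: %f (expected 0)\n", h_fit[0]);

    cudaFree(d_pos); cudaFree(d_fit);
    delete[] h_pos; delete[] h_fit;
    return 0;
}
```

The Ackley function would be evaluated the same way, with only the body of the device function changing.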
GPU-based parallel particle swarm optimization
2009
A novel parallel approach to running standard particle swarm optimization (SPSO) on a Graphics Processing Unit (GPU) is presented in this paper. By using the general-purpose computing ability of the GPU, and based on NVIDIA's Compute Unified Device Architecture (CUDA) software platform, SPSO can be executed in parallel on the GPU. Experiments are conducted by running SPSO on both GPU and CPU to optimize four benchmark test functions.
Overview and Applications of Particle Swarm Optimization on GPGPU
International Journal of Computer Applications, 2014
Particle Swarm Optimization (PSO) is a robust and effective method for solving optimization problems, but it can take considerable time to find optimal solutions for complex real-world problems. The execution time required depends on the nature of the problem as well as on the population size and the dimensionality of the application. Compute-intensive problems can be solved efficiently with PSO on a General Purpose Graphics Processing Unit (GPGPU), which reduces processing time and finds optimal solutions faster than a central processing unit. PSO is easy to parallelize on a GPU using CUDA. This paper's main contribution is a review of parallelization techniques for PSO, of performance optimization strategies, and a brief survey of applications solved with PSO on GPGPUs.
International Journal of Computational Intelligence and Applications, 2013
In this paper, we present a parallel implementation of the particle swarm optimization (PSO) on graphical processing units (GPU) using CUDA. By fully utilizing the processing power of graphics processors, our implementation (CUDA-PSO) provides a speedup of 167× compared to a sequential implementation on CPU. This speedup is significantly superior to what has been reported in recent papers and is achieved by four optimizations we made to better adapt the parallel algorithm to the specific architecture of the NVIDIA GPU. However, because today's personal computers are usually equipped with a multicore CPU, it may be unfair to compare our CUDA implementation to a sequential one. For this reason, we implemented a parallel PSO for multicore CPUs using MPI (MPI-PSO) and compared its performance against our CUDA-PSO. The execution time of our CUDA-PSO remains 15.8× faster than our MPI-PSO, which ran on a high-end 12-core workstation. Moreover, we show with statistical significance that the results obtained using our CUDA-PSO are of the same quality as the results obtained by the sequential PSO or the MPI-PSO. Finally, we use our parallel PSO for real-time harmonic minimization of multilevel power inverters with 20 DC sources while considering the first 100 harmonics, and show that our CUDA-PSO is 294× faster than the sequential PSO and 32.5× faster than our parallel MPI-PSO.
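The abstract does not spell out the four GPU-specific optimizations, but a common one in CUDA PSO implementations is to keep the global-best selection on the device instead of copying fitness values back to the CPU every iteration. The sketch below shows such a shared-memory minimum reduction; it is an illustrative assumption, not the authors' code, and it assumes the whole swarm fits in a single thread block.

```cuda
// Minimal sketch (an assumption, not the paper's optimizations): a shared-memory
// reduction that finds the index of the particle with the lowest fitness, so the
// global best can be selected on the GPU without a device-to-host copy.
#include <cstdio>
#include <cuda_runtime.h>

#define N_PARTICLES 256   // assumed to fit in a single block for this sketch

__global__ void best_index_kernel(const float *fitness, int *best_idx)
{
    __shared__ float s_val[N_PARTICLES];
    __shared__ int   s_idx[N_PARTICLES];
    int t = threadIdx.x;
    s_val[t] = fitness[t];
    s_idx[t] = t;
    __syncthreads();

    // Tree reduction: each step keeps the smaller fitness of a pair.
    for (int stride = N_PARTICLES / 2; stride > 0; stride >>= 1) {
        if (t < stride && s_val[t + stride] < s_val[t]) {
            s_val[t] = s_val[t + stride];
            s_idx[t] = s_idx[t + stride];
        }
        __syncthreads();
    }
    if (t == 0) *best_idx = s_idx[0];
}

int main()
{
    float h_fit[N_PARTICLES];
    for (int i = 0; i < N_PARTICLES; ++i) h_fit[i] = 100.0f - i * 0.1f;
    h_fit[42] = -1.0f;                    // make particle 42 the best one

    float *d_fit; int *d_best;
    cudaMalloc(&d_fit, sizeof(h_fit));
    cudaMalloc(&d_best, sizeof(int));
    cudaMemcpy(d_fit, h_fit, sizeof(h_fit), cudaMemcpyHostToDevice);

    best_index_kernel<<<1, N_PARTICLES>>>(d_fit, d_best);

    int best;
    cudaMemcpy(&best, d_best, sizeof(int), cudaMemcpyDeviceToHost);
    printf("best particle index: %d (expected 42)\n", best);

    cudaFree(d_fit); cudaFree(d_best);
    return 0;
}
```

For swarms larger than one block, a second reduction pass over per-block results would be needed.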
GPU based parallel cooperative Particle Swarm Optimization using C-CUDA: A case study
2013 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 2013
Applications requiring massive computations may benefit from Graphics Processing Units (GPUs) with the Compute Unified Device Architecture (CUDA), which can greatly reduce execution time. Since the introduction of CUDA, applications from many different areas have benefited from it. Evolutionary algorithms are one such area, where a CUDA implementation proves beneficial not only in terms of the speedups obtained but also in terms of improved convergence time. In this paper we present a detailed study of a parallel implementation of an existing variant of Particle Swarm Optimization, namely Cooperative Particle Swarm Optimization (CPSO). We also present a comparative study of CPSO implemented in C and in C-CUDA. The algorithm was tested on a set of standard benchmark optimization functions. In this process, some interesting results related to the speedup and to improvements in convergence time were obtained. The differences in the randomizing procedures used in CUDA seem to contribute to population diversity, leading to better solutions than the serial implementation. This also provides motivation for further research on neural network architecture and weight optimization using a CUDA implementation. The results obtained in this paper therefore re-emphasize the utility of CUDA-based implementations for complex and computationally intensive applications.
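The "differences in the randomizing procedures" mentioned above typically stem from how random numbers are generated on the GPU: each thread carries its own generator state rather than sharing a single serial rand() stream. The sketch below shows the usual cuRAND pattern for this; it is an illustrative assumption, not the paper's code, and all names are hypothetical.

```cuda
// Minimal sketch (an assumption, not the paper's code): per-thread cuRAND states,
// the usual way stochastic PSO coefficients are drawn on the GPU.
#include <cstdio>
#include <cuda_runtime.h>
#include <curand_kernel.h>

#define N_THREADS 128

__global__ void init_rng_kernel(curandState *states, unsigned long long seed)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t < N_THREADS)
        curand_init(seed, t, 0, &states[t]);   // same seed, different subsequence per thread
}

__global__ void draw_kernel(curandState *states, float *r1, float *r2)
{
    int t = blockIdx.x * blockDim.x + threadIdx.x;
    if (t >= N_THREADS) return;
    curandState local = states[t];
    r1[t] = curand_uniform(&local);            // cognitive coefficient draw
    r2[t] = curand_uniform(&local);            // social coefficient draw
    states[t] = local;                         // persist the state for the next iteration
}

int main()
{
    curandState *d_states;
    float *d_r1, *d_r2;
    float h_r1[N_THREADS];
    cudaMalloc(&d_states, N_THREADS * sizeof(curandState));
    cudaMalloc(&d_r1, N_THREADS * sizeof(float));
    cudaMalloc(&d_r2, N_THREADS * sizeof(float));

    init_rng_kernel<<<1, N_THREADS>>>(d_states, 1234ULL);
    draw_kernel<<<1, N_THREADS>>>(d_states, d_r1, d_r2);

    cudaMemcpy(h_r1, d_r1, sizeof(h_r1), cudaMemcpyDeviceToHost);
    printf("first uniform draw: %f\n", h_r1[0]);

    cudaFree(d_states); cudaFree(d_r1); cudaFree(d_r2);
    return 0;
}
```

Each thread's stream is independent of the others, which is a different source of randomness than the single sequence a serial C implementation draws from.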
GPU-based asynchronous particle swarm optimization
… of the 13th annual conference on …, 2011
This paper describes our latest implementation of Particle Swarm Optimization (PSO) with a simple ring topology for modern Graphics Processing Units (GPUs). To achieve both the fastest execution time and the best performance, we designed a parallel version of the algorithm, as fine-grained as possible, without introducing explicit synchronization mechanisms among the particles' evolution processes. The results we obtained show a significant speed-up with respect to both the sequential version of the algorithm run on an up-to-date CPU and our previously developed parallel implementation within the nVIDIA™ CUDA™ architecture.
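The ring topology is what makes the asynchronous design possible: each particle only consults its two neighbours, so no swarm-wide best, and hence no global synchronization point, is needed between updates. The sketch below shows the neighbour comparison; it is an illustrative sketch, not the authors' implementation, and the names are assumptions.

```cuda
// Minimal sketch (an assumption, not the paper's implementation): lbest PSO with a
// ring topology, where each particle compares itself only with its two neighbours,
// so the kernel needs no swarm-wide synchronization.
#include <cstdio>
#include <cuda_runtime.h>

#define N_PARTICLES 8

__global__ void ring_best_kernel(const float *pbest_fit, int *lbest_idx)
{
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= N_PARTICLES) return;
    int left  = (p - 1 + N_PARTICLES) % N_PARTICLES;   // ring neighbours
    int right = (p + 1) % N_PARTICLES;
    int best = p;
    if (pbest_fit[left]  < pbest_fit[best]) best = left;
    if (pbest_fit[right] < pbest_fit[best]) best = right;
    lbest_idx[p] = best;              // each particle follows its local best only
}

int main()
{
    float h_fit[N_PARTICLES] = {5, 3, 8, 1, 9, 2, 7, 6};
    int   h_lbest[N_PARTICLES];

    float *d_fit; int *d_lbest;
    cudaMalloc(&d_fit, sizeof(h_fit));
    cudaMalloc(&d_lbest, sizeof(h_lbest));
    cudaMemcpy(d_fit, h_fit, sizeof(h_fit), cudaMemcpyHostToDevice);

    ring_best_kernel<<<1, N_PARTICLES>>>(d_fit, d_lbest);
    cudaMemcpy(h_lbest, d_lbest, sizeof(h_lbest), cudaMemcpyDeviceToHost);

    for (int p = 0; p < N_PARTICLES; ++p)
        printf("particle %d follows particle %d\n", p, h_lbest[p]);

    cudaFree(d_fit); cudaFree(d_lbest);
    return 0;
}
```

Because a neighbour's personal best may be read before or after its latest update, the search proceeds asynchronously, which is the behaviour the paper exploits for speed.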
Particle Swarm-based Meta-Optimising on Graphical Processing Units
Modelling, Identification and Control / 801: Advances in Computer Science, 2013
Optimisation (global minimisation or maximisation) of complex, unknown and non-differentiable functions is a difficult problem. One solution for this class of problem is the use of meta-heuristic optimisers. This involves the systematic movement of n-vector solutions through n-dimensional parameter space, where each dimension corresponds to a parameter of the function to be optimised. These methods make very few assumptions about the problem; most advantageously, gradients are not necessary. Population-based methods such as the Particle Swarm Optimiser (PSO) are very effective at solving problems in this domain, as they employ spatial exploration and local solution exploitation in tandem with a stochastic component. Parallel PSOs on Graphical Processing Units (GPUs) allow for much greater system sizes and a dramatic reduction in compute time. Meta-optimisation adds a further super-optimiser which is used to find appropriate algorithmic parameters for the PSO; however, this practice is often overlooked due to its immense computational expense. We present and discuss a PSO with an overlaid super-optimiser, itself also based on the PSO.
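The nesting the abstract describes, an outer optimiser that scores candidate (w, c1, c2) settings by running the inner PSO to completion, is sketched below. To keep the example short, the outer layer here is a plain random search standing in for the paper's outer PSO, and everything runs serially on the host (the code is plain C++ and compiles as a .cu file as well); all names are illustrative assumptions.

```cuda
// Minimal sketch of the nesting structure only (not the paper's method): the inner
// routine is a tiny serial PSO on the sphere function; the outer loop, which the
// paper implements as a second PSO, is replaced here by a plain random search.
#include <cstdio>
#include <cstdlib>

const int SWARM = 20, DIM = 5, ITERS = 100;

float frand() { return (float)rand() / RAND_MAX; }

float sphere(const float *x) {               // inner objective: sum of squares
    float s = 0; for (int d = 0; d < DIM; ++d) s += x[d] * x[d]; return s;
}

// Run one inner PSO with the given parameters and return the best fitness found.
float run_inner_pso(float w, float c1, float c2) {
    float pos[SWARM][DIM], vel[SWARM][DIM], pbest[SWARM][DIM], pfit[SWARM];
    float gbest[DIM], gfit = 1e30f;
    for (int p = 0; p < SWARM; ++p) {
        for (int d = 0; d < DIM; ++d) {
            pos[p][d] = pbest[p][d] = frand() * 10 - 5;
            vel[p][d] = 0;
        }
        pfit[p] = sphere(pos[p]);
        if (pfit[p] < gfit) { gfit = pfit[p]; for (int d = 0; d < DIM; ++d) gbest[d] = pos[p][d]; }
    }
    for (int it = 0; it < ITERS; ++it)
        for (int p = 0; p < SWARM; ++p) {
            for (int d = 0; d < DIM; ++d) {
                vel[p][d] = w * vel[p][d] + c1 * frand() * (pbest[p][d] - pos[p][d])
                                          + c2 * frand() * (gbest[d]   - pos[p][d]);
                pos[p][d] += vel[p][d];
            }
            float f = sphere(pos[p]);
            if (f < pfit[p]) { pfit[p] = f; for (int d = 0; d < DIM; ++d) pbest[p][d] = pos[p][d]; }
            if (f < gfit)    { gfit = f;    for (int d = 0; d < DIM; ++d) gbest[d]   = pos[p][d]; }
        }
    return gfit;
}

int main() {
    srand(1);
    float best_fit = 1e30f, best_w = 0, best_c1 = 0, best_c2 = 0;
    for (int trial = 0; trial < 50; ++trial) {      // outer search over PSO parameters
        float w = frand(), c1 = 2 * frand(), c2 = 2 * frand();
        float f = run_inner_pso(w, c1, c2);
        if (f < best_fit) { best_fit = f; best_w = w; best_c1 = c1; best_c2 = c2; }
    }
    printf("best parameters: w=%.3f c1=%.3f c2=%.3f -> fitness %.3e\n",
           best_w, best_c1, best_c2, best_fit);
    return 0;
}
```

Replacing the outer loop with a second PSO, and the inner run with a GPU-parallel version, recovers the structure the paper describes; the computational expense comes from the fact that every outer evaluation is itself a full optimisation run.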