Fractional particle swarm optimization in multidimensional search space

2010

https://doi.org/10.1109/TSMCB.2009.2015054


Abstract

In this paper, we propose two novel techniques, which successfully address several major problems in the field of particle swarm optimization (PSO) and promise a significant breakthrough over complex multimodal optimization problems at high dimensions. The first one, which is the so-called multidimensional (MD) PSO, re-forms the native structure of swarm particles in such a way that they can make interdimensional passes with a dedicated dimensional PSO process.
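Since this abstract is terse about the mechanism, a minimal sketch may help. Assuming each particle carries one positional PSO state per candidate dimensionality in [d_min, d_max] plus a scalar dimension state (all names below are illustrative, not from the paper):

```python
import numpy as np

class MDParticle:
    """Sketch of an MD PSO particle, simplified from the abstract: it keeps
    one positional state for every candidate dimension in [d_min, d_max]
    and, on top of that, a scalar 'dimension' state so the particle can
    pass between dimensions during the run."""
    def __init__(self, d_min, d_max, bounds, rng):
        self.d_min, self.d_max = d_min, d_max
        # one position/velocity/personal-best per candidate dimensionality
        self.x  = {d: rng.uniform(*bounds, size=d) for d in range(d_min, d_max + 1)}
        self.v  = {d: np.zeros(d) for d in range(d_min, d_max + 1)}
        self.pb = {d: self.x[d].copy() for d in range(d_min, d_max + 1)}
        self.d  = int(rng.integers(d_min, d_max + 1))  # current working dimension
        self.vd = 0.0                                  # dimensional velocity

    def dimensional_step(self, pbest_dim, gbest_dim, c1, c2, rng):
        """The dedicated dimensional PSO process: the particle's dimension
        is pulled toward its own best dimension and the swarm's best one."""
        self.vd += c1 * rng.random() * (pbest_dim - self.d) \
                 + c2 * rng.random() * (gbest_dim - self.d)
        self.d = int(np.clip(round(self.d + self.vd), self.d_min, self.d_max))
```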

Fractional Particle Swarm Optimization

Mathematical Methods in Engineering, 2014

This paper addresses a new perspective on PSO that includes a fractional block. The local gain is replaced by one of fractional order, taking into account several previous positions of the PSO particles. The algorithm is evaluated on several well-known test functions, and the relationship between the fractional order and the convergence of the algorithm is observed. The fractional order directly influences the algorithm's convergence rate, demonstrating large potential for further developments.
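The exact fractional formulation is not given in this summary; the sketch below only illustrates the general idea of weighting several past positions with truncated Grünwald-Letnikov coefficients (the function names and the placement of the fractional term are assumptions):

```python
import numpy as np

def gl_weights(alpha, depth):
    """Truncated Grunwald-Letnikov weights w_k = (-1)^k * C(alpha, k),
    computed by the standard recurrence."""
    w = [1.0]
    for k in range(1, depth + 1):
        w.append(w[-1] * (k - 1 - alpha) / k)
    return np.array(w)

def fractional_cognitive_term(history, pbest, alpha, c1, rng):
    """history: past positions, newest first (x_t, x_{t-1}, ...).
    Hypothetical replacement of the usual c1*r1*(pbest - x_t) by a term
    of fractional order alpha that also remembers earlier positions."""
    w = gl_weights(alpha, len(history) - 1)
    frac_x = sum(wk * x for wk, x in zip(w, history))  # fractional memory of the trajectory
    return c1 * rng.random() * (pbest - frac_x)
```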

Multi-dimensional Search via Fractional Multi-swarms in Dynamic Environments

Particle swarm optimization (PSO) was proposed as an optimization technique for static environments; however, many real problems are dynamic, meaning that the environment and the characteristics of the global optimum can change over time. In this paper, we adapt recent techniques, which successfully address several major problems of PSO and exhibit significant performance over multi-modal and non-stationary environments. In order to address the premature convergence problem and improve the rate of PSO's convergence to the global optimum, the Fractional Global Best Formation (FGBF) technique is used. FGBF basically collects all the best dimensional components and fractionally creates an artificial Global Best particle (aGB) that has the potential to be a better "guide" than the PSO's native gbest particle. To establish follow-up of local optima, we then introduce a novel multi-swarm algorithm, which enables each swarm to converge to a different optimum and use the FGBF technique distinctively. Finally, for multidimensional dynamic environments where the optimum dimension also changes in time, we utilize a recent PSO technique, the multi-dimensional (MD) PSO, which re-forms the native structure of the swarm particles in such a way that they can make inter-dimensional passes with a dedicated dimensional PSO process. Therefore, in a multi-dimensional search space where the optimum dimension is unknown, swarm particles can seek both positional and dimensional optima. This eventually pushes the frontier of optimization problems in dynamic environments towards a global search in a multi-dimensional space, where a multi-modal problem possibly exists in each dimension. We investigated both standalone and mutual applications of the proposed methods over the moving peaks benchmark (MPB), which originally simulates a dynamic environment in a unique (fixed) dimension. MPB is appropriately extended to accomplish the simulation of a multi-dimensional dynamic system, which contains dynamic environments active in several dimensions. An extensive set of experiments shows that in the traditional MPB application domain, the FGBF technique applied with multi-swarms exhibits an impressive speed gain and tracks the global peak with the minimum error so far achieved with respect to other competitive PSO-based methods. When applied over the extended MPB, MD PSO with FGBF can find the optimum dimension and provide the (near-)optimal solution in this dimension.
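A compact sketch of the FGBF idea as described above. How the per-dimension "goodness" of a component is scored is problem-specific in the original work, so dim_scores is assumed to be given:

```python
import numpy as np

def fgbf_artificial_gbest(positions, dim_scores, fitness, gbest):
    """Fractional Global Best Formation (sketch).

    positions:  (n_particles, n_dims) current particle positions
    dim_scores: (n_particles, n_dims) per-dimension 'goodness' of each
                component, lower = better (assumed available)

    Builds an artificial global best (aGB) by taking, for every dimension,
    the component from the particle whose component scored best, and keeps
    it only if it actually beats the native gbest."""
    best_row = np.argmin(dim_scores, axis=0)                # best particle per dimension
    aGB = positions[best_row, np.arange(positions.shape[1])]
    return aGB if fitness(aGB) < fitness(gbest) else gbest
```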

Dynamic multi-swarm particle swarm optimization with fractional global best formation

2010

Particle swarm optimization (PSO) was initially proposed as an optimization technique for static environments; however, many real problems are dynamic, meaning that the environment and the characteristics of the global optimum can change over time. Thanks to its stochastic and population-based nature, PSO can avoid being trapped in local optima and find the global optimum.

Particle swarm optimization with fractional-order velocity

Nonlinear Dynamics, 2010

This paper proposes a novel method for controlling the convergence rate of a particle swarm optimization algorithm using fractional calculus (FC) concepts. The optimization is tested on several well-known functions, and the relationship between the fractional-order velocity and the convergence of the algorithm is observed. The FC perspective demonstrates potential for interpreting the evolution of the algorithm and for controlling its convergence.
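In this line of work, the unit difference v(t+1) - v(t) of the velocity is generalized to a Grünwald-Letnikov derivative of order α. Truncated to four memory terms, the update takes a form like the following (the truncation depth and coefficients are as commonly reported for this approach and may differ in the paper itself):

$$
v_i(t+1) = \alpha\, v_i(t) + \frac{\alpha(1-\alpha)}{2}\, v_i(t-1) + \frac{\alpha(1-\alpha)(2-\alpha)}{6}\, v_i(t-2) + \frac{\alpha(1-\alpha)(2-\alpha)(3-\alpha)}{24}\, v_i(t-3) + c_1 r_1\,(pbest_i - x_i(t)) + c_2 r_2\,(gbest - x_i(t))
$$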

Fractional Order Darwinian Particle Swarm Optimization

The Darwinian Particle Swarm Optimization (DPSO) is an evolutionary algorithm that extends the Particle Swarm Optimization (PSO) using natural selection, or survival of the fittest, to enhance the ability to escape from local optima. This paper presents a method for controlling the convergence rate of the DPSO using fractional calculus (FC) concepts. The fractional-order (FO) DPSO, denoted as FO-DPSO, is tested using several well-known functions, and the relationship between the fractional-order velocity and the convergence of the algorithm is observed.

Two-layer particle swarm optimization with intelligent division of labor

Engineering Applications of Artificial Intelligence, 2013

Early studies of the particle swarm optimization (PSO) algorithm reveal that the social and cognitive components of the swarm, i.e., the memory swarm, tend to distribute around the problem's optima. Motivated by these findings, we propose a two-layer PSO with intelligent division of labor (TLPSO-IDL) that aims to improve the search capabilities of PSO through the evolution of the memory swarm. The evolution in TLPSO-IDL is performed sequentially on both the current swarm and the memory swarm. A new learning mechanism is proposed in the former to enhance the swarm's exploration capability, whilst an intelligent division of labor (IDL) module is developed in the latter to adaptively divide the swarm into exploration and exploitation sections. The proposed TLPSO-IDL algorithm is thoroughly compared with nine well-established PSO variants on 16 unimodal and multimodal benchmark problems, with and without the rotation property. Simulation results indicate that the search capabilities and convergence speed of TLPSO-IDL are superior to those of the state-of-the-art PSO variants.

A Brief Review on Particle Swarm Optimization: Limitations & Future Directions

Particle swarm optimization is a heuristic global optimization method put forward originally by Kennedy and Eberhart in 1995. Various efforts have been made toward solving unimodal and multimodal problems, as well as two-dimensional to multidimensional problems. These efforts have addressed the topology of communication, parameter adjustment, the initial distribution of particles, and efficient problem-solving capabilities. Here we present a detailed study of PSO and the limitations of present work; based on these limitations, we propose future directions.

I. INTRODUCTION

Swarm Intelligence (SI) is an innovative distributed intelligent paradigm for solving optimization problems that originally took its inspiration from biological examples of swarming, flocking, and herding phenomena in vertebrates. Particle Swarm Optimization (PSO) incorporates the swarming behaviors observed in flocks of birds, schools of fish, swarms of bees, and even human social behavior, from which the idea emerged. PSO is a population-based optimization tool that can be implemented and applied easily to solve various function optimization problems, or problems that can be transformed into function optimization problems. As an algorithm, the main strength of PSO is its fast convergence, which compares favorably with many global optimization algorithms such as Genetic Algorithms (GA) and Simulated Annealing (SA). While population-based heuristics are more costly because they depend directly on function values rather than derivative information, they are also susceptible to premature convergence, which is especially the case when there are many decision variables or dimensions to be optimized.

The method is inspired by birds searching for food. While searching, the birds are either scattered or move together before they locate the place where food can be found. There is always a bird that can smell the food well, that is, a bird that is perceptive of the place where food can be found and therefore has better information about the food resource. Because the birds transmit this information, especially the good information, at any time while searching for food from one place to another, the flock is eventually guided to the place where food can be found. In the PSO analogy, the solution swarm corresponds to the bird swarm, the birds' movement from one place to another corresponds to the development of the solution swarm, the good information corresponds to the current best solution, and the food resource corresponds to the best solution over the whole course. The best solution can be worked out in the particle swarm optimization algorithm through the cooperation of the individuals. Each individual is modeled as a particle without mass or volume, and a simple behavioral pattern is prescribed for each particle to express the complexity of the whole particle swarm.

In PSO, the potential solutions, called particles, fly through the problem space by following the current optimum particles. Each particle keeps track of its coordinates in the problem space that are associated with the best solution (fitness) it has achieved so far; this value is called pbest. Another best value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the neighborhood of the particle; this value is called lbest. When a particle takes the whole population as its topological neighbors, the best value is a global best and is called gbest. The particle swarm optimization concept consists of, at each time step, changing the velocity of (accelerating) each particle toward its pbest and lbest (for the lbest version). Acceleration is weighted by a random term, with separate random numbers being generated for the acceleration toward the pbest and lbest locations. After finding these best values, each particle updates its velocity and position with the equations below.
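The standard update equations alluded to here, with inertia weight w, acceleration coefficients c1 and c2, and uniform random numbers r1, r2 in [0, 1] (gbest replaces lbest in the global version), are:

$$
v_i(t+1) = w\, v_i(t) + c_1 r_1\,(pbest_i - x_i(t)) + c_2 r_2\,(lbest_i - x_i(t))
$$

$$
x_i(t+1) = x_i(t) + v_i(t+1)
$$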

Particle Swarms for Multimodal Optimization

Lecture Notes in Computer Science, 2007

In this paper, five previous Particle Swarm Optimization (PSO) algorithms for multimodal function optimization are reviewed, and a new and successful PSO-based algorithm, named CPSO, is proposed. CPSO enhances the exploration and exploitation capabilities of PSO by performing search using random walk and hill-climbing components. Furthermore, one of the previous PSO approaches is improved remarkably by means of a minor adjustment. All algorithms are compared over a set of well-known benchmark functions.

Multi-ring Particle Swarm Optimization

2008

Particle swarm optimization (PSO) has been used to solve many different types of optimization problems. When PSO is applied to problems where feasible solutions are very difficult to find, mainly in high-dimensional spaces, new ways of solving the problems are required. Many variations on the basic PSO form have been explored, targeting the velocity update equation; other approaches attempt to change the structure of the swarm. In this paper, a novel PSO topology based on multiple rings is proposed to improve the results achieved, focusing on the diversity provided by ring rotations. A comparison with the star and ring topologies was performed. Our simulation results show that the proposed topology achieves better results than the well-known star and ring topologies.
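The abstract does not spell out the topology, so the sketch below is only one plausible reading: particles sit on several rings, each sees its two ring neighbors plus an aligned particle on the next ring, and per-ring rotation offsets change that alignment over time, which is where the extra diversity would come from:

```python
import numpy as np

def multi_ring_lbest(fitness, n_rings, ring_size, offsets):
    """Hypothetical multi-ring neighborhood sketch.  `offsets[r]` is the
    current rotation of ring r; changing it re-pairs particles across
    rings between iterations.  Returns each particle's best neighbor
    (flat index), lower fitness = better."""
    f = np.asarray(fitness).reshape(n_rings, ring_size)
    lbest = np.empty((n_rings, ring_size), dtype=int)
    for r in range(n_rings):
        for i in range(ring_size):
            cand = [(r, i), (r, (i - 1) % ring_size), (r, (i + 1) % ring_size),
                    ((r + 1) % n_rings, (i + offsets[r]) % ring_size)]
            br, bi = min(cand, key=lambda rc: f[rc])
            lbest[r, i] = br * ring_size + bi
    return lbest
```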

Compound particle swarm optimization in dynamic environments

2008

Adaptation to dynamic optimization problems is currently receiving growing interest as one of the most important applications of evolutionary algorithms. In this paper, a compound particle swarm optimization (CPSO) is proposed as a new variant of particle swarm optimization to enhance its performance in dynamic environments. Within CPSO, compound particles are constructed as a novel type of particle in the search space, and their motions are integrated into the swarm. A special reflection scheme is introduced in order to explore the search space more comprehensively. Furthermore, some information preserving and anti-convergence strategies are also developed to improve the performance of CPSO in a new environment. An experimental study shows the efficiency of CPSO in dynamic environments.

Cited by

Perceptual Dominant Color Extraction by Multidimensional Particle Swarm Optimization

EURASIP Journal on Advances in Signal Processing, 2009

Color is the major source of information widely used in image analysis and content-based retrieval. Extracting dominant colors that are prominent in a visual scenery is of utmost importance, since the human visual system primarily uses them for perception and similarity judgment. In this paper, we address dominant color extraction as a dynamic clustering problem and use techniques based on Particle Swarm Optimization (PSO) for finding the optimal (number of) dominant colors in a given color space, with a given distance metric and a proper validity index function. The first technique, the so-called Multidimensional (MD) PSO, can seek both positional and dimensional optima. Nevertheless, MD PSO is still susceptible to premature convergence due to lack of divergence. To address this problem, we then apply the Fractional Global Best Formation (FGBF) technique. In order to extract perceptually important colors and to further improve the discrimination factor for a better clustering performance, an efficient color distance metric is proposed, which uses a fuzzy model for computing color (dis-)similarities over the HSV (or HSL) color space. The comparative evaluations against the MPEG-7 dominant color descriptor show the superiority of the proposed technique.

An evolutionary feature synthesis approach for content-based audio retrieval

EURASIP Journal on Audio, Speech, and Music Processing, 2012

A vast number of audio features have been proposed in the literature to characterize the content of audio signals. In order to overcome specific problems related to the existing features (such as lack of discriminative power), as well as to reduce the need for manual feature selection, in this article we propose an evolutionary feature synthesis technique with a built-in feature selection scheme. The proposed synthesis process searches for optimal linear/nonlinear operators and feature weights in a pre-defined multi-dimensional search space to generate a highly discriminative set of new (artificial) features. The evolutionary search process is based on a stochastic optimization approach in which a multi-dimensional particle swarm optimization algorithm, along with fractional global best formation and heterogeneous particle behavior techniques, is applied. Unlike many existing feature generation approaches, the dimensionality of the synthesized feature vector is also searched and optimized within a set range in order to better meet the varying requirements set by many practical applications and classifiers. The new features generated by the proposed synthesis approach are compared with typical low-level audio features in several classification and retrieval tasks. The results demonstrate a clear improvement of up to 15-20% in average retrieval performance. Moreover, the proposed synthesis technique surpasses the synthesis performance of evolutionary artificial neural networks, exhibiting a considerable capability to accurately distinguish among different audio classes.

A Generic and Patient-Specific Electrocardiogram Signal Classification System

ECG Signal Processing, Classification and Interpretation, 2011

Each individual heartbeat in the cardiac cycle of the recorded electrocardiogram (ECG) waveform shows the time evolution of the heart's electrical activity, which is made of distinct electrical depolarization–repolarization patterns of the heart. Any disorder of heart rate or rhythm, or change in the morphological pattern is an indication of an arrhythmia, which could be detected by analysis of the recorded ECG waveform. Real-time automated ECG analysis in clinical settings is of great assistance to clinicians in detecting cardiac arrhythmias, ...

Fractional-order quantum particle swarm optimization

PLOS ONE, 2019

Motivated by the concepts of quantum mechanics and particle swarm optimization (PSO), quantum-behaved particle swarm optimization (QPSO) was developed to achieve better global search ability. This paper proposes a new method to improve the global search ability of QPSO with fractional calculus (FC). Based on one of the most frequently used fractional differential definitions, the Grünwald-Letnikov definition, we introduce its discrete expression into the position updating of QPSO. Extensive experiments on well-known benchmark functions were performed to evaluate the performance of the proposed fractional-order quantum particle swarm optimization (FQPSO). The experimental results demonstrate its superior ability in achieving optimal solutions for several different optimizations.
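The discrete (truncated) form of the Grünwald-Letnikov definition referred to here, with sampling period h and truncation length N, is:

$$
D^{\alpha} f(t) \approx \frac{1}{h^{\alpha}} \sum_{k=0}^{N} (-1)^{k} \binom{\alpha}{k} f(t - kh)
$$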

Mathematical Optimization by Using Particle Swarm Optimization, Genetic Algorithm, and Differential Evolution and Its Similarities

Advances in computational intelligence and robotics book series, 2017

To solve optimization problems, various methods are provided in different domains. Evolutionary computing (EC) is one such family of methods. The most commonly used EC techniques are Particle Swarm Optimization (PSO), Genetic Algorithm (GA), and Differential Evolution (DE). These techniques have different working structures, but the inner working structure is the same: different names and formulae are given for different tasks, but ultimately all do the same thing. Here we try to find the similarities among these techniques and give the working structure at each step. All the steps are provided with proper examples and code written in MATLAB, for better understanding. We start our discussion with an introduction to optimization and the solution of optimization problems by PSO, GA, and DE. Finally, we give a brief comparison of these techniques.

Fractional dynamics in particle swarm optimization

Systems, Man and …, 2007

This paper studies the fractional dynamics during the evolution of a Particle Swarm Optimization (PSO). Some swarm particles of the initial population are randomly changed in order to stimulate the system response. Afterwards, the result is compared with a reference situation. The perturbation effect on the PSO evolution is observed from the perspective of the time behavior of the fitness of the best individual position visited by the replaced particles. The dynamics is investigated through the median of a sample of experiments, while adopting Fourier analysis for describing the phenomena. The influence of the PSO parameters upon the global dynamics is also analyzed by performing several experiments for distinct values.

Multi-dimensional particle swarm optimization in dynamic environments

2011

Particle swarm optimization (PSO) was proposed as an optimization technique for static environments; however, many real problems are dynamic, meaning that the environment and the characteristics of the global optimum can change over time. In this paper, we adapt recent techniques, which successfully address several major problems of PSO and exhibit significant performance over multi-modal and non-stationary environments.

The perils of particle swarm optimization in high dimensional problem spaces

2005

Particle swarm optimisation (PSO) is a stochastic, population-based optimisation algorithm. PSO has been applied successfully to a variety of domains. This thesis examines the behaviour of PSO when applied to high dimensional optimisation problems. Empirical experiments are used to illustrate the problems exhibited by the swarm, namely that the particles are prone to leaving the search space and never returning. This thesis does not intend to develop a new version of PSO specifically for high dimensional problems. Instead, the thesis investigates why PSO fails in high dimensional search spaces. Four different types of approaches are examined. The first is the application of velocity clamping to prevent the initial velocity explosion and to keep particles inside the search space. The second approach selects values for the acceleration coefficients and inertia weights so that particle movement is restrained or so that the swarm follows particular patterns of movement. The third introd...

An Adaptive Velocity Particle Swarm Optimization for high-dimensional function optimization

2013 IEEE Congress on Evolutionary Computation, 2013

Researchers have achieved varying levels of success in proposing different methods to modify the particle's velocity updating formula for better performance of Particle Swarm Optimization (PSO). Variants of PSO that solve high-dimensional optimization problems of up to 1,000 dimensions without losing superiority to their competitors are rare. Meanwhile, high-dimensional real-world optimization problems are becoming reality; the PSO algorithm therefore needs some reworking to enhance its performance in handling such problems. This paper proposes a new PSO variant called Adaptive Velocity PSO (AV-PSO), which adaptively adjusts the velocity of particles based on the Euclidean distance between the position of each particle and the position of the global best particle. To avoid getting trapped in local optima, chaotic characteristics were introduced into the particle position updating formula. In all experiments, it is shown that AV-PSO is very efficient for solving low- and high-dimensional global optimization problems. Empirical results show that AV-PSO outperformed AIWPSO, PSOrank, CRIW-PSO, def-PSO, e1-PSO, and APSO. It also performed better than LSRS on many of the tested high-dimensional problems. AV-PSO was also used to optimize some high-dimensional problems with 4,000 dimensions, with very good results.
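The paper's exact formulas are not reproduced in this summary; a hypothetical sketch of the two stated ingredients (distance-adaptive velocity and a chaotic term in the position update) could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def av_pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, mu=4.0):
    """Hypothetical AV-PSO-style step: the inertia part of the velocity is
    damped according to each particle's Euclidean distance to gbest, and
    a logistic-map (chaotic) factor perturbs the position update."""
    d = np.linalg.norm(x - gbest, axis=1, keepdims=True)
    scale = d / (d.max() + 1e-12)            # near gbest -> smaller steps
    v = scale * (w * v) + c1 * rng.random(x.shape) * (pbest - x) \
        + c2 * rng.random(x.shape) * (gbest - x)
    z = rng.random(x.shape)
    z = mu * z * (1.0 - z)                   # one logistic-map iteration
    return x + v * z, v
```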

Particle swarm optimization using dimension selection methods

Applied Mathematics and Computation, 2013

Particle swarm optimization (PSO) has undergone many changes since its introduction in 1995. Being a stochastic algorithm, PSO and its randomness present a formidable challenge for theoretical analysis, and few of the existing PSO improvements have made an effort to eliminate the random coefficients in the PSO updating formula. This paper analyzes the importance of randomness in PSO and then gives a PSO variant without randomness to show that traditional PSO cannot work without it. Based on our analysis of the randomness, another way of using randomness is proposed in the PSO with random dimension selection (PSORDS) algorithm, which utilizes random dimension selection instead of stochastic coefficients. Finally, deterministic methods for dimension selection are proposed; the resulting PSO with distance-based dimension selection (PSODDS) algorithm is greatly superior to traditional PSO, while the PSO with heuristic dimension selection (PSOHDS) algorithm is comparable to the traditional PSO algorithm. In addition, applying our dimension selection methods to a newly proposed modified particle swarm optimization (MPSO) algorithm also yields improved results. The experimental results demonstrate that our analysis of the randomness is correct and that the use of deterministic dimension selection methods is very helpful.
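A minimal sketch of the random-dimension-selection idea as read from this abstract (the coefficient values and the number m of updated dimensions per particle are assumptions): the stochastic coefficients r1, r2 are dropped, and randomness instead enters through which dimensions are allowed to move in each iteration.

```python
import numpy as np

rng = np.random.default_rng(0)

def rds_step(x, v, pbest, gbest, w=0.72, c1=1.49, c2=1.49, m=1):
    """PSORDS-style step (sketch): deterministic velocity update, but only
    m randomly chosen dimensions of each particle actually move."""
    n, dims = x.shape
    v_new = w * v + c1 * (pbest - x) + c2 * (gbest - x)   # no random coefficients
    mask = np.zeros_like(x, dtype=bool)
    for i in range(n):
        mask[i, rng.choice(dims, size=m, replace=False)] = True
    v = np.where(mask, v_new, v)
    return x + np.where(mask, v, 0.0), v
```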

Diversity enhanced particle swarm optimization with neighborhood search

Information Sciences, 2013

Particle swarm optimization (PSO) has shown effective performance for solving various benchmark and real-world optimization problems. However, it suffers from premature convergence because it quickly loses diversity. In order to enhance its performance, this paper proposes a hybrid PSO algorithm, called DNSPSO, which employs a diversity enhancing mechanism and neighborhood search strategies to achieve a trade-off between exploration and exploitation abilities. A comprehensive experimental study is conducted on a set of benchmark functions, including rotated multimodal and shifted high-dimensional problems. Comparison results show that DNSPSO obtains a promising performance on the majority of the test problems.

Multidimensional Particle Swarm Optimization for Machine Learning and Pattern Recognition

Adaptation, learning, and optimization, 2014

Fractional Order Dynamics in a Particle Swarm Optimization Algorithm

Seventh International Conference on Intelligent Systems Design and Applications (ISDA 2007), 2007

Although fractional calculus has been studied by mathematicians for many years, its application in engineering, especially in modeling and control, has few antecedents. Since there is much freedom in choosing the order of the differentiator and integrator in fractional calculus, it is possible to model physical systems accurately. This paper deals with the time-domain identification of fractional-order chaotic systems, where the conventional derivative is replaced by a fractional one with the help of non-integer derivation. This operator is itself approximated by an N-dimensional system composed of an integrator and a phase-lead filter. A hybrid particle swarm optimization (PSO)-genetic algorithm (GA) method is applied to estimate the parameters of the approximated nonlinear fractional-order chaotic system modeled by a state-space representation. The feasibility of this approach is demonstrated by identifying the parameters of the approximated fractional-order Lorenz chaotic system. The performance of the proposed algorithm is compared with GA and standard particle swarm optimization (SPSO) in terms of parameter accuracy and cost function. In order to evaluate the identification accuracy, the time-domain output error is designed as the fitness function for parameter optimization. The simulation results show that the proposed method is more successful than the other algorithms for parameter identification of fractional-order chaotic systems.

