Andries Engelbrecht | University of Pretoria
Papers by Andries Engelbrecht
2015 IEEE Congress on Evolutionary Computation (CEC), 2015
Lecture Notes in Computer Science, 2014
This paper proposes variants of the angle modulated particle swarm optimization (AMPSO) algorithm. A number of limitations of the original AMPSO algorithm are identified and the proposed variants aim to remove these limitations. The new variants are then compared to AMPSO on a number of binary problems in various dimensions. It is shown that the performance of the variants is superior to AMPSO in many problem cases. This indicates that the identified limitations may have a significant effect on performance, but that the effects can be overcome by removing those limitations. It is also observed that the ability of the variants to initialize a wider range of potential solutions can be helpful during the search process.
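The angle-modulation decoding that AMPSO builds on can be sketched in a few lines. This is a minimal illustration of the standard four-coefficient generating function, not of the paper's proposed variants; the sampling points and example coefficients are assumptions for demonstration only.

```python
import math

def angle_modulated_bits(coeffs, n_bits):
    """Decode a 4-D real tuple (a, b, c, d) into an n-bit string using the
    standard angle-modulation generating function
        g(x) = sin(2*pi*(x - a) * b * cos(2*pi*(x - a) * c)) + d,
    sampled at x = 0, 1, ..., n_bits - 1; bit j is 1 where g is positive."""
    a, b, c, d = coeffs
    bits = []
    for j in range(n_bits):
        x = float(j)
        g = math.sin(2.0 * math.pi * (x - a) * b
                     * math.cos(2.0 * math.pi * (x - a) * c)) + d
        bits.append(1 if g > 0.0 else 0)
    return bits

# A continuous PSO searches the 4-D coefficient space; each candidate is
# decoded into a binary solution like this (coefficients here are arbitrary):
example = angle_modulated_bits((0.1, 0.25, 0.05, 0.0), 8)
```

Because the swarm moves in only four dimensions regardless of `n_bits`, angle modulation sidesteps the dimensionality of the binary problem, which is also why the coefficients chosen at initialization constrain which bit patterns are reachable.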
Lecture Notes in Computer Science, 2014
Lecture Notes in Computer Science, 2014
Lecture Notes in Computer Science, 2014
This paper has two primary aims: firstly, to empirically verify the use of a specially designed objective function for particle swarm optimization (PSO) convergence analysis; secondly, to investigate the impact of PSO's social topology on the parameter region needed to ensure convergent particle behavior. A large number of theoretical PSO studies exist at present; however, all stochastic PSO models contain the stagnation assumption, which implicitly removes the social topology from the model, making this empirical study necessary. It was found that using a specially designed objective function is both a simple and valid method for convergence analysis. It was also found that the derived region needed to ensure convergent particle behavior remains valid regardless of the selected social topology.
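The role of a specially designed objective function can be illustrated with a small experiment: an objective that returns a fresh random value on every evaluation keeps the best positions moving, so the stagnation assumption never holds, and convergent behavior can be judged purely from how far particles still move. The sketch below uses assumed details (gbest topology, illustrative parameters); the paper's exact objective function and measurements are not reproduced here.

```python
import random

def measure_final_step(w, c1, c2, n=10, dim=5, iters=100, seed=1):
    """Run a gbest-topology PSO in which every evaluation returns a fresh
    uniform random value, so personal bests keep being updated and the
    stagnation assumption of theoretical models is avoided. Returns the mean
    absolute velocity component at the final iteration: near zero for
    convergent parameter settings, large for divergent ones."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [rng.random() for _ in range(n)]          # random "fitness"
    g = min(range(n), key=lambda i: pbest_f[i])
    gpos = pbest[g][:]
    step = 0.0
    for _ in range(iters):
        step = 0.0
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gpos[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
                step += abs(vel[i][d])
            f = rng.random()                            # fresh random value
            if f < pbest_f[i]:
                pbest_f[i], pbest[i] = f, pos[i][:]
        g = min(range(n), key=lambda i: pbest_f[i])
        gpos = pbest[g][:]
    return step / (n * dim)

# Canonical convergent parameters vs. a clearly divergent setting:
converged = measure_final_step(0.7298, 1.4962, 1.4962)
diverged = measure_final_step(1.2, 2.5, 2.5)
```

Swapping the neighborhood structure changes only how `gpos` is chosen per particle, which is what makes this setup suitable for comparing topologies.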
2014 IEEE Congress on Evolutionary Computation (CEC), 2014
2014 IEEE Symposium on Swarm Intelligence, 2014
2014 International Conference on Signal Processing and Integrated Networks (SPIN), 2014
2014 IEEE Congress on Evolutionary Computation (CEC), 2014
This paper performs a thorough empirical investigation of the conditions placed on particle swarm optimization control parameters to ensure convergent behavior. At present there exists a large number of theoretically derived parameter regions that will ensure particle convergence; however, selecting which region to utilize in practice is not obvious. The empirical study is carried out over a region slightly larger than that needed to contain all the relevant theoretically derived regions. It was found that there is a very strong correlation between one of the theoretically derived regions and the empirical evidence. It was also found that parameters near the edge of the theoretically derived region converge at a very slow rate, after an initial population explosion. Particle convergence is so slow that, in practice, the edge parameter settings should not be considered useful convergent parameter settings.
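Two commonly cited theoretically derived regions can be written down directly as parameter checks. The deterministic condition is standard; the second formula is the stochastic stability region usually attributed to Poli, included only as an illustrative candidate — the abstract does not say which region matched the empirical evidence, and no such claim is encoded below.

```python
def in_deterministic_region(w, c1, c2):
    """Classic deterministic convergence condition for PSO control
    parameters: |w| < 1 and 0 < c1 + c2 < 2 * (1 + w)."""
    c = c1 + c2
    return abs(w) < 1.0 and 0.0 < c < 2.0 * (1.0 + w)

def in_poli_region(w, c1, c2):
    """Stochastic (order-2) stability region usually attributed to Poli:
    |w| < 1 and 0 < c1 + c2 < 24 * (1 - w**2) / (7 - 5 * w)."""
    c = c1 + c2
    return abs(w) < 1.0 and 0.0 < c < 24.0 * (1.0 - w ** 2) / (7.0 - 5.0 * w)

# The canonical setting lies inside both regions, while a setting near the
# deterministic edge falls outside the stricter stochastic region -- one way
# "edge" parameters can look convergent in theory yet behave poorly:
canonical = (0.7298, 1.4962, 1.4962)
edge = (0.7298, 1.7, 1.7)
```

Checking a candidate parameter setting against both predicates before a run is a cheap way to apply the kind of guidance this study provides.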
2014 IEEE Symposium on Computational Intelligence in Ensemble Learning (CIEL), 2014
IEEE Congress on Evolutionary Computation, 2003
This paper investigates the effectiveness of various particle swarm optimiser structures to learn how to play the game of checkers. Co-evolutionary techniques are used to train the game-playing agents. Performance is compared against a player making moves at random. Initial experimental results indicate definite advantages in using certain information-sharing structures and swarm size configurations to successfully learn the …
This research investigates a swarm intelligence based multiobjective optimization algorithm for optimizing the behavior of a group of Artificial Neural Networks (ANNs), where each ANN specializes in solving a specific part of a task, such that the group as a whole achieves an effective solution. Niche Particle Swarm Optimization (NichePSO) is a speciation technique that has proven effective at locating multiple solutions in complex multivariate tasks. This research evaluates the efficacy of the NichePSO method for training a group of ANNs that form a neural network ensemble (NNE) for the purpose of solving a set of multivariate tasks. NichePSO is compared with a gradient descent method for training a set of individual ANNs to solve different parts of a multivariate task, and then combining the outputs of each ANN into a single solution. To date, there has been little research comparing the effectiveness of NichePSO with more traditional supervised learning methods for the training of neural network ensembles.
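The final step described here — merging the specialist outputs of a neural network ensemble into a single solution — can be as simple as averaging. The abstract does not state which combination scheme was used, so the averaging below is an assumed placeholder, not the paper's method.

```python
def ensemble_predict(members, x):
    """Combine the outputs of specialist models into one prediction by simple
    averaging (an assumed combination scheme, not necessarily the paper's).
    Each member is any callable mapping an input to a float."""
    outputs = [m(x) for m in members]
    return sum(outputs) / len(outputs)

# Two toy "specialists"; in an NNE each would have been trained on a
# different part of the task before their outputs are merged:
low_specialist = lambda x: x * 2.0
high_specialist = lambda x: x * 2.0 + 0.2
combined = ensemble_predict([low_specialist, high_specialist], 1.0)
```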
F-Race and its variant, Iterated F-Race, are automated procedures for sampling and evaluating potential values of parameters for algorithms. The procedure is controlled by means of a computational budget that limits the number of evaluations that may be conducted, thus forcing the determination of the best possible configuration to be made within a limited time. When time is not severely constrained, the a priori choice of a computational budget becomes unjustifiable, because the relationship between the computational budget and the quality of the optimization of a black-box subject is not obvious. This paper proposes an extension to F-Race in the form of a heuristic method for reasonably terminating the optimization procedure.
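The racing idea itself can be sketched compactly. This is a simplified illustration, not the published procedure: when the Friedman rank test over surviving configurations is significant (checked against hard-coded chi-square critical values at the 5% level), only the single worst-ranked configuration is dropped, whereas real F-Race performs pairwise post-hoc comparisons. `configs`, `evaluate`, and `instances` are user-supplied placeholders.

```python
# chi-square critical values at the 95th percentile, indexed by degrees of
# freedom (standard table values, hard-coded to keep the sketch stdlib-only)
CHI2_CRIT_95 = {1: 3.841, 2: 5.991, 3: 7.815, 4: 9.488, 5: 11.070}

def friedman_statistic(samples):
    """Friedman rank statistic for k configurations over n blocks, assuming
    no ties within a block. Returns (statistic, mean rank per configuration)."""
    k, n = len(samples), len(samples[0])
    mean_rank = [0.0] * k
    for b in range(n):
        order = sorted(range(k), key=lambda i: samples[i][b])
        for r, i in enumerate(order, start=1):
            mean_rank[i] += r / n
    stat = 12.0 * n / (k * (k + 1)) * sum(
        (r - (k + 1) / 2.0) ** 2 for r in mean_rank)
    return stat, mean_rank

def f_race(configs, evaluate, instances, min_blocks=4):
    """Race candidate configurations across a stream of instances, dropping
    the worst-ranked survivor whenever the Friedman test is significant."""
    alive = list(range(len(configs)))
    costs = {i: [] for i in alive}
    for t, inst in enumerate(instances, start=1):
        for i in alive:
            costs[i].append(evaluate(configs[i], inst))
        if len(alive) >= 3 and t >= min_blocks:
            stat, mean_rank = friedman_statistic([costs[i] for i in alive])
            if stat > CHI2_CRIT_95.get(len(alive) - 1, float("inf")):
                alive.pop(max(range(len(alive)), key=lambda j: mean_rank[j]))
        if len(alive) == 1:
            break
    return [configs[i] for i in alive]

# Toy race: a lower config value means a lower cost on every instance, so
# the race eliminates the worst candidates as evidence accumulates.
survivors = f_race([1.0, 2.0, 3.0, 4.0],
                   lambda cfg, inst: cfg + 0.001 * inst,
                   range(8))
```

The `instances` stream is exactly where a termination heuristic of the kind this paper proposes would plug in: instead of a fixed budget, the loop would stop once further blocks are judged unlikely to change the surviving set.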
Many training algorithms (like gradient descent, for example) use random initial weights. These algorithms are rather sensitive to their starting position in the error space, which is represented by their initial weights. This paper shows that training performance can be improved significantly by using a Particle Swarm Optimizer (PSO) to initialize the weights, rather than random initialization. INTRODUCTION: It has been shown that Multi-Layer Perceptron (MLP) networks can be …
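The two-phase idea — a PSO pass to choose initial weights, after which ordinary training takes over — can be sketched on a toy network. Everything below (the 2-2-1 sigmoid MLP, the XOR data, the PSO hyperparameters) is illustrative; the paper's architectures and settings are not reproduced.

```python
import math
import random

def sigmoid(z):
    if z < -60.0:   # clip to avoid math.exp overflow on extreme weights
        return 0.0
    if z > 60.0:
        return 1.0
    return 1.0 / (1.0 + math.exp(-z))

# Weight layout for a toy 2-2-1 MLP: w[0:4] hidden weights, w[4:6] hidden
# biases, w[6:8] output weights, w[8] output bias.
def mlp_loss(w, data):
    err = 0.0
    for (x1, x2), t in data:
        h0 = sigmoid(w[0] * x1 + w[1] * x2 + w[4])
        h1 = sigmoid(w[2] * x1 + w[3] * x2 + w[5])
        y = sigmoid(w[6] * h0 + w[7] * h1 + w[8])
        err += (y - t) ** 2
    return err / len(data)

def pso_initialize(loss, dim, n=15, iters=60, seed=0):
    """Pick initial weights with a basic gbest PSO instead of a single random
    draw; training proper (e.g. gradient descent) starts from the result."""
    rng = random.Random(seed)
    w_in, c1, c2 = 0.7298, 1.4962, 1.4962   # common convergent parameters
    pos = [[rng.uniform(-1.0, 1.0) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [loss(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w_in * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = loss(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w0, f0 = pso_initialize(lambda w: mlp_loss(w, XOR), 9)
```

`w0` would then be handed to the gradient-based trainer in place of a single random draw, which is the whole point: the descent starts from a better position in the error surface.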
This paper investigates the algorithm selection problem, otherwise referred to as the entity-to-algorithm allocation problem, within the context of three recent multi-method algorithm frameworks. A population-based algorithm portfolio, a meta-hyper-heuristic, and a bandit-based operator selection method are evaluated under similar conditions on a diverse set of floating-point benchmark problems. The meta-hyper-heuristic is shown to outperform the other two algorithms.
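Of the three frameworks, the bandit idea is the easiest to sketch: treat each candidate operator (or algorithm) as an arm and feed back the improvement it produced. The UCB1 rule below is a generic choice — the abstract does not name the specific bandit variant, so this is illustrative only.

```python
import math

class UCB1OperatorSelector:
    """Bandit-based operator selection: each operator is an arm, its reward is
    the improvement it produced, and UCB1 trades off exploiting the operator
    with the best mean reward against exploring under-tried ones."""

    def __init__(self, n_ops, c=1.4):
        self.counts = [0] * n_ops     # applications per operator
        self.means = [0.0] * n_ops    # running mean reward per operator
        self.c = c                    # exploration strength

    def select(self):
        for op, cnt in enumerate(self.counts):
            if cnt == 0:              # try every operator once first
                return op
        t = sum(self.counts)
        return max(range(len(self.counts)),
                   key=lambda op: self.means[op]
                   + self.c * math.sqrt(math.log(t) / self.counts[op]))

    def update(self, op, reward):
        self.counts[op] += 1
        self.means[op] += (reward - self.means[op]) / self.counts[op]

# Toy run: operator 0 consistently yields larger improvements, so the
# selector should apply it far more often than operator 1.
selector = UCB1OperatorSelector(2)
true_reward = [1.0, 0.1]
for _ in range(200):
    op = selector.select()
    selector.update(op, true_reward[op])
```

In an entity-to-algorithm setting, `update` would be called with a credit measure such as the fitness improvement each entity obtained after being allocated to an algorithm.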