The Influence of Genetic Algorithms on Learning Possibilities of Artificial Neural Networks
Related papers
Critical Analysis of Genetic Algorithm and Artificial Neural Network Applications
Many well-known problems arise in the practical fields of computer science. Neural networks and genetic algorithms have a powerful ability to solve such problems and have attracted a great deal of research, and many GA and ANN variants have been developed to enhance performance. Both are mathematical in nature. Neural network training is typically based on back-propagation learning; however, the choice of the basic parameters (network topology, learning rate, initial weights) often already determines the success of the training process, and in practice these parameters are selected by rules of thumb. Genetic algorithms are widely used search methods based on operators such as selection, crossover, and mutation. This study examines how genetic algorithms can be used and compares applications of GAs and ANNs, evaluating them according to their performance on academic and practical problems of varying complexity.
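The GA loop named in the abstract above (selection, crossover, mutation) can be sketched on a toy bitstring problem. The OneMax fitness function, tournament selection, and all parameter values below are illustrative assumptions, not taken from the paper:

```python
import random

random.seed(0)

N_BITS, POP, GENS, MUT = 20, 30, 40, 0.02

def fitness(bits):
    # OneMax: count of 1-bits; the global optimum is the all-ones string.
    return sum(bits)

def select(pop):
    # Tournament selection: keep the fitter of two random individuals.
    a, b = random.sample(pop, 2)
    return max(a, b, key=fitness)

def crossover(p1, p2):
    # Single-point crossover.
    cut = random.randrange(1, N_BITS)
    return p1[:cut] + p2[cut:]

def mutate(bits):
    # Flip each bit independently with small probability MUT.
    return [b ^ 1 if random.random() < MUT else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(select(pop), select(pop))) for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best))  # close to N_BITS after evolution
```

The same three operators drive every GA application surveyed in this listing; only the chromosome encoding and the fitness function change.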
Study of Genetic Algorithm to Fully-automate the Design and Training of Artificial Neural Network
2009
Optimizing the design of artificial neural network (ANN) parameters for full automation is an extremely important task, and it remains challenging to determine which method is effective and accurate for ANN prediction and optimization. This paper presents different procedures for the optimization of ANNs that aim to reduce the time consumed by the learning process, enhance generalization ability, achieve a robust and accurate model, and reduce computational complexity. A genetic algorithm (GA) has been used to optimize operational parameters (input variables), and we plan to optimize the neural network architecture (i.e., number of hidden layers and neurons per layer), weights, types, training algorithms, activation functions, learning rate, momentum rate, number of iterations, and dataset partitioning ratio. A hybrid neural network and genetic algorithm model for determining optimal operational parameter settings based on the proposed approach was developed. Preliminary results indicate that the new model can optimize operational parameters precisely and quickly and, consequently, achieve satisfactory performance.
Genetic algorithms and neural networks: optimizing connections and connectivity
Parallel Computing, 1990
Genetic algorithms are a robust adaptive optimization method based on biological principles. A population of strings representing possible problem solutions is maintained. Search proceeds by recombining strings in the population. The theoretical foundations of genetic algorithms are based on the notion that selective reproduction and recombination of binary strings changes the sampling rate of hyperplanes in the search space so as to reflect the average fitness of strings that reside in any particular hyperplane. Thus, genetic algorithms need not search along the contours of the function being optimized and tend not to become trapped in local minima. This paper is an overview of several different experiments applying genetic algorithms to neural network problems. These problems include (1) optimizing the weighted connections in feed-forward neural networks using both binary and real-valued representations, and (2) using a genetic algorithm to discover novel architectures in the form of connectivity patterns for neural networks that learn using error propagation. Future applications in neural network optimization in which genetic algorithms can perhaps play a significant role are also presented.
Application of Artificial Neural Networks in Genetic Algorithm Control Problems
In contemporary intelligent decision support systems, there is still a problem of increasing the performance speed of the structural-parametric synthesis of large discrete systems with a given behavior based on genetic algorithms. Currently, there are two main research directions, aimed at improving performance speed either mathematically or in hardware. One way to improve hardware performance speed is the use of parallel computing, including general-purpose computing on graphics processing units (GPGPU). This article deals with the possibility of improving the performance speed of intelligent systems using the mathematical tool of artificial neural networks by introducing a control module for the genetic algorithm directly into the synthesis of solutions. Control of the structural-parametric synthesis process is achieved by predicting and evaluating the state of the genetic algorithm (convergence, attenuation, trapping of the population in local extrema) using artificial neural networks. This makes it possible to change the operating parameters directly in the course of decision synthesis, altering their destructive effect on the binary string, which changes the trajectory of the population in the decision space and, as a result, should help improve the performance speed of intelligent decision support systems.
Tuning of the structure and parameters of a neural network using an improved genetic algorithm
Neural Networks, IEEE …, 2003
This paper presents the tuning of the structure and parameters of a neural network using an improved genetic algorithm (GA). It will also be shown that the improved GA performs better than the standard GA based on some benchmark test functions. A neural network with switches introduced to its links is proposed. By doing this, the proposed neural network can learn both the input-output relationships of an application and the network structure using the improved GA. The number of hidden nodes is chosen manually by increasing it from a small number until the learning performance in terms of fitness value is good enough. Application examples on sunspot forecasting and associative memory are given to show the merits of the improved GA and the proposed neural network.
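The switched-link idea above can be illustrated with a minimal forward pass in which each connection carries the product of a binary switch gene and a real-valued weight gene, so the GA can prune a link by zeroing its switch. The layer sizes and chromosome layout below are assumptions for illustration, not the paper's encoding:

```python
import math
import random

random.seed(2)

N_IN, N_HID = 2, 3
N_LINKS = N_IN * N_HID + N_HID  # input->hidden links plus hidden->output links

def forward(switches, weights, x):
    # Each connection contributes switch * weight * input: a 0 switch
    # removes the link, so structure and parameters share one chromosome.
    hidden = []
    for h in range(N_HID):
        s = sum(switches[h * N_IN + i] * weights[h * N_IN + i] * x[i]
                for i in range(N_IN))
        hidden.append(math.tanh(s))
    base = N_IN * N_HID
    return sum(switches[base + h] * weights[base + h] * hidden[h]
               for h in range(N_HID))

# One chromosome encodes binary switches plus real-valued weights,
# so a single GA can evolve both structure and parameters.
switches = [random.randint(0, 1) for _ in range(N_LINKS)]
weights = [random.gauss(0, 1) for _ in range(N_LINKS)]
print(forward(switches, weights, (1.0, -1.0)))
```

Under this encoding, mutating a switch bit changes the topology while mutating a weight gene changes the parameters, which is what lets the improved GA search both spaces at once.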
Comparative evaluation of genetic algorithm and backpropagation for training neural networks
Information Sciences, 2000
In view of several limitations of gradient search techniques (e.g. backpropagation), global search techniques, including evolutionary programming and genetic algorithms (GAs), have been proposed for training neural networks (NNs). However, the effectiveness, ease-of-use, and efficiency of these global search techniques have not been compared extensively with gradient search techniques. Using five chaotic time series functions, this paper empirically compares a genetic algorithm with backpropagation for training NNs. The chaotic series are interesting because of their similarity to economic and financial series found in financial markets.
Generation and simplification of Artificial Neural Networks by means of Genetic Programming
Neurocomputing, 2010
The development of Artificial Neural Networks (ANNs) is traditionally a slow process in which human experts are needed to experiment with different architectures until they find one that produces correct results for a specific problem. This work describes a new technique that uses Genetic Programming (GP) to automatically develop simple ANNs with a low number of neurons and connections. Experiments have been carried out to measure the behavior of the system and to compare the results with those obtained using other ANN generation and training methods based on evolutionary computation (EC) tools. The results obtained are, in the worst case, at least comparable to existing techniques and, in many cases, substantially better. As explained herein, the system has other important features, such as variable discrimination, which provides new information on the problems to be solved.
Neural network world: A neural network based selection method for genetic algorithms
Neural Network World, 2012
Genetic algorithms (GAs) are stochastic methods that are widely used in search and optimization. The breeding process is the main driving mechanism for GAs that leads the way to the global optimum, and the initial phase of the breeding process starts with parent selection. The selection scheme used in a GA affects the convergence speed of the algorithm. A GA can use different selection mechanisms for choosing parents from the population, and in many applications the process generally depends on the fitness values of the individuals. In the hybrid algorithm proposed in this study, artificial neural networks (ANNs) are used to decide on appropriate parents, with the aim of producing better offspring during the GA search. The neural network utilized in this algorithm tries to learn the structural patterns and correlations that enable two parents to produce high-fit offspring. In the breeding process, the first parent is selected based on the fitness value as usual; it is then the neural network that decides the appropriate mate for the chosen first parent. Hence, the selection mechanism is not solely dependent on the fitness values in this study. The algorithm is tested on seven benchmark functions, and the results of these tests show that the new selection method leads the genetic algorithm to converge faster.
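The plumbing of this selection scheme (first parent chosen by fitness, mate chosen by a network that scores candidate pairs) can be sketched as follows. The scorer here is an untrained single-layer stand-in with random weights, purely to show where a trained pair-scoring network would plug in; it is not the paper's model:

```python
import math
import random

random.seed(3)

N_BITS, POP = 16, 20

def fitness(bits):
    # Toy fitness for illustration: OneMax.
    return sum(bits)

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]

# Stand-in mate scorer: a single-layer net over the concatenated parents.
# A real implementation would train these weights on pairs that produced
# high-fit offspring; here they are random, purely to show the interface.
W = [random.gauss(0, 1) for _ in range(2 * N_BITS)]

def mate_score(p1, p2):
    return math.tanh(sum(w * g for w, g in zip(W, p1 + p2)))

def select_parents(pop):
    # First parent: conventional fitness-based tournament selection.
    a, b = random.sample(pop, 2)
    p1 = max(a, b, key=fitness)
    # Second parent: the candidate the network scores highest with p1.
    p2 = max((c for c in pop if c is not p1),
             key=lambda c: mate_score(p1, c))
    return p1, p2

p1, p2 = select_parents(pop)
```

The point of the design is that only the mate choice is delegated to the network, so the overall GA loop (crossover, mutation, replacement) stays unchanged.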
Empirical studies on the speed of convergence of neural network training using genetic algorithms
Proceedings of the Eighth National Conference on …, 1990
This paper reports several experimental results on the speed of convergence of neural network training using genetic algorithms and back propagation. Recent excitement regarding genetic search led some researchers to apply it to training neural networks. There are reports of both successful and faulty results, and, unfortunately, no systematic evaluation has been made. This paper reports the results of systematic experiments designed to judge whether the use of genetic algorithms provides any gain in neural network training over existing methods. Experimental results indicate that genetic search is, at best, as efficient as faster variants of back propagation in very small-scale networks, but far less efficient in larger networks.