MATHEMATICALLY DESIGNED ARTIFICIAL NEURAL NETWORKS WITH GAUSS NEWTON ALGORITHM

Optimizing the Multilayer Feed-Forward Artificial Neural Networks Architecture and Training Parameters using Genetic Algorithm

International Journal of Computer Applications, 2014

Determining the optimum design and training parameters of a feed-forward artificial neural network (ANN) is an extremely important task. Finding an ANN design that is both effective and accurate is challenging and daunting. This paper presents a new methodology for the optimization of ANN parameters, introducing a training process that is effective and less human-dependent. The derived ANN achieves satisfactory performance while reducing the time-consuming training effort. A Genetic Algorithm (GA) is used to optimize the training algorithm, the network architecture (i.e. the number of hidden layers and neurons per layer), the activation functions, the initial weights, the learning rate, the momentum rate, and the number of iterations. Preliminary results indicate that the new methodology can optimize design and training parameters precisely, resulting in an ANN with satisfactory performance.
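
The abstract does not reproduce the authors' implementation; the following is a minimal, hypothetical sketch of the idea, using a simple genetic algorithm to search over just two of the listed parameters (number of hidden neurons and learning rate) of a small feed-forward network. The toy dataset, parameter ranges, and GA settings are assumptions made for illustration only.

```python
# Hypothetical sketch: GA search over two ANN hyperparameters (hidden units, learning rate).
# The dataset, ranges, and GA settings are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, purely for illustration.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fitness(n_hidden, lr, epochs=2000):
    """Train a one-hidden-layer network briefly; a lower final MSE means a fitter individual."""
    W1 = rng.normal(0, 1, (X.shape[1], n_hidden))
    W2 = rng.normal(0, 1, (n_hidden, 1))
    for _ in range(epochs):
        h = sigmoid(X @ W1)                  # forward pass
        out = sigmoid(h @ W2)
        err = out - y                        # back-propagate the error
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out               # gradient-descent weight updates
        W1 -= lr * X.T @ d_h
    return float(np.mean((out - y) ** 2))

def random_individual():
    return [int(rng.integers(2, 10)), float(rng.uniform(0.05, 1.0))]   # [n_hidden, lr]

def mutate(ind):
    return [max(2, ind[0] + int(rng.integers(-1, 2))),
            float(np.clip(ind[1] + rng.normal(0, 0.1), 0.01, 1.0))]

def crossover(a, b):
    return [a[0], b[1]]                      # exchange genes between two parents

population = [random_individual() for _ in range(8)]
for generation in range(10):
    scored = sorted(population, key=lambda ind: fitness(ind[0], ind[1]))
    parents = scored[:4]                     # truncation selection: keep the best half
    children = [mutate(crossover(parents[i % 4], parents[(i + 1) % 4])) for i in range(4)]
    population = parents + children

best = min(population, key=lambda ind: fitness(ind[0], ind[1]))
print("best hidden units / learning rate:", best[0], round(best[1], 3))
```

In a fuller version of this scheme, the chromosome would also encode the training algorithm, activation functions, momentum rate, and iteration count listed in the abstract, with the fitness evaluated on a held-out validation set rather than the training data.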

" Neural Network " a Supervised Machine Learning Algorithm

As a machine learning algorithm, the neural network has been widely used in research projects to solve various critical problems. The concept of neural networks is inspired by the human brain. This paper explains the concept of neural networks so that a non-expert can understand the basics and use the algorithm to solve tedious and complex problems. The paper demonstrates the design and implementation of a complete neural network along with the code, and presents various ANN architectures together with their advantages, disadvantages, and applications.
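
The paper's own code is not included in the abstract; below is a small, self-contained sketch of what the forward pass of a fully connected feed-forward network typically looks like. The layer sizes, sigmoid activation, and class name are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch of a fully connected feed-forward network (forward pass only).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class FeedForwardNet:
    def __init__(self, layer_sizes, seed=0):
        rng = np.random.default_rng(seed)
        # One weight matrix and bias vector per pair of consecutive layers.
        self.weights = [rng.normal(0, 0.5, (m, n))
                        for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
        self.biases = [np.zeros(n) for n in layer_sizes[1:]]

    def forward(self, x):
        # Each layer applies an affine transform followed by a nonlinearity.
        a = np.asarray(x, dtype=float)
        for W, b in zip(self.weights, self.biases):
            a = sigmoid(a @ W + b)
        return a

# Usage: a network with 2 inputs, one hidden layer of 3 neurons, and 1 output.
net = FeedForwardNet([2, 3, 1])
print(net.forward([0.5, -1.0]))
```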

Principle of Neural Network and Its Main Types: Review

2020

In this paper, an overview of artificial neural networks is presented. Their main and most popular types, the multilayer feedforward neural network (MLFFNN), the recurrent neural network (RNN), and the radial basis function (RBF) network, are investigated. The main advantages and disadvantages of each type are discussed, along with the training process.
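
To illustrate one of the less familiar types mentioned, here is a minimal sketch of an RBF network: Gaussian hidden units followed by a linear output layer. The Gaussian basis, the fixed centre placement, the width, and the least-squares fit of the output weights are common choices assumed here, not details taken from the paper.

```python
# Minimal RBF network sketch: Gaussian hidden units, linear output layer.
# Centre placement and width are illustrative assumptions.
import numpy as np

def rbf_design_matrix(X, centres, width):
    # Each column is one Gaussian basis function evaluated at every input point.
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Fit a 1-D toy function with a handful of fixed centres.
X = np.linspace(-3, 3, 50).reshape(-1, 1)
y = np.sin(X).ravel()
centres = np.linspace(-3, 3, 7).reshape(-1, 1)

Phi = rbf_design_matrix(X, centres, width=1.0)
# Output weights solved by linear least squares (a common RBF training step).
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = rbf_design_matrix(X, centres, 1.0) @ w
print("mean squared error:", float(np.mean((y_hat - y) ** 2)))
```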

Analytical and Systematic Study of Artificial Neural Network

IRJET, 2022

An artificial neural network is a computational model motivated by the biological neural network, in which large numbers of neurons are interconnected. The neurons in an artificial neural network operate and are arranged in a manner that mimics the fundamental structure of the neurons in the human brain. The fundamental rationale behind the artificial neural network is to implement the structure of the biological neuron so that whatever functions the human brain performs with the aid of its biological neural network, machines and systems can likewise perform with the assistance of an artificial neural network. This paper gives a concise overview of the artificial neural network, including its operation, architectures, and general organization. It also presents advantages, disadvantages, and applications of the artificial neural network in significant areas such as image processing, signal processing, pattern recognition, function approximation, and forecasting based on past data.

Artificial Neural Network (ANN)

IUPAC Standards Online, 2017

This research work presents a new development in the field of natural science, comparing theoretically the efficiency of classical regression models with that of artificial neural network models using various transfer functions, without reference to a particular data set. The results obtained from variance estimation indicate that the ANN performs better, which agrees with earlier findings on the efficiency of ANNs over traditional regression models. The conditions required for ANN efficiency over conventional regression models are noted; however, the optimal number of hidden layers and neurons needed to achieve minimum error remains open to further investigation.

A STUDY ON ARTIFICIAL NEURAL NETWORKS

A STUDY, 2018

The first step towards AI was taken by Warren McCulloch, a neurophysiologist, and the mathematician Walter Pitts. They modelled a simple neural network with electrical circuits, obtained very accurate results, and demonstrated a remarkable ability of neurons to perceive information from complicated and imprecise data. During the present study it was observed that a trained neural network, expert in analyzing information, offers further advantages such as adaptive learning, real-time operation, self-organization, and fault tolerance. Unlike conventional computing, neural networks use many processing units (neurons) operating in parallel. These need not be programmed; they function much like the human brain. We need to give them examples to solve different problems, and these examples must be selected carefully so that time is not wasted. We currently use a combination of neural networks and conventional programming to achieve maximal efficiency, but neural networks will eventually take over. In artificial neural networks, electronic models are used to represent the neural structure of the brain. Computers can store data in ledgers but have difficulty recognizing patterns, whereas the brain stores information as patterns. Artificial neurons act like real neurons and perform similar functions: speech, hearing, recognition, storing information as patterns, and many other functions that a human brain can perform. Biological neural networks combine and reorganize themselves dynamically, which is not yet true of artificial networks. The neurons work in groups and subdivide a problem in order to solve it. They are grouped in layers, and it is an art of engineering to make them solve real-world problems. The most important element is the connections between the neurons, which act as the glue of the system through an excitation-inhibition process: with constant input, one neuron excites while another inhibits, much like addition and subtraction. Basically, all ANNs have the same structure: an input layer, one or more hidden (or feedback) layers, and an output layer.

An Improved Gauss-Newtons Method based Back-propagation Algorithm for Fast Convergence

International Journal of …, 2012

The present work deals with an improved back-propagation algorithm, based on the Gauss-Newton numerical optimization method, for fast convergence. Standard back-propagation uses the steepest descent method. The algorithm is tested on various datasets and compared with the steepest descent back-propagation algorithm. Optimization is carried out on a multilayer neural network. The efficacy of the proposed method is observed during training, as it converges quickly on the test datasets. The memory required for computing the steps of the algorithm is also analyzed.
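
The abstract does not give the update rule, but in the usual least-squares formulation a Gauss-Newton step replaces the plain steepest-descent step with delta = (J^T J)^(-1) J^T r, where r is the residual vector and J its Jacobian with respect to the weights. The sketch below applies that step to a tiny one-neuron model with a numerical Jacobian; the model, data, and small damping term are illustrative assumptions, not the paper's algorithm.

```python
# Sketch of Gauss-Newton steps for a tiny model, in place of steepest descent.
# Residuals r = prediction - target; J is the Jacobian of r w.r.t. the weights.
import numpy as np

X = np.array([[0.0], [0.5], [1.0], [1.5]])
y = np.array([0.0, 0.4, 0.7, 0.9])

def predict(w, X):
    # Single neuron with a tanh activation: f(x) = tanh(w0 * x + w1).
    return np.tanh(w[0] * X[:, 0] + w[1])

def residuals(w):
    return predict(w, X) - y

def jacobian(w, eps=1e-6):
    # Forward-difference numerical Jacobian of the residual vector.
    r0 = residuals(w)
    J = np.zeros((len(r0), len(w)))
    for j in range(len(w)):
        wp = w.copy()
        wp[j] += eps
        J[:, j] = (residuals(wp) - r0) / eps
    return J

w = np.array([0.1, 0.1])
for step in range(20):
    r = residuals(w)
    J = jacobian(w)
    # Gauss-Newton step: solve (J^T J + damping * I) delta = J^T r.
    delta = np.linalg.solve(J.T @ J + 1e-8 * np.eye(len(w)), J.T @ r)
    w -= delta
print("fitted weights:", w, "final SSE:", float(residuals(w) @ residuals(w)))
```

Because the step uses curvature information from J^T J rather than only the gradient, it typically needs far fewer iterations than steepest descent, at the cost of storing and solving with the Jacobian, which is the memory trade-off the paper analyzes.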

Neural Networks and Back Propagation Algorithm

Neural Networks (NN) are an important data mining tool used for classification and clustering. They are an attempt to build a machine that will mimic brain activities and be able to learn. An NN usually learns from examples: if it is supplied with enough examples, it should be able to perform classification and even discover new trends or patterns in data. A basic NN is composed of three layers: input, hidden, and output. Each layer can have a number of nodes; nodes from the input layer are connected to the nodes of the hidden layer, and nodes from the hidden layer are connected to the nodes of the output layer. These connections carry the weights between nodes. This paper describes one of the most popular NN algorithms, the Back Propagation (BP) algorithm. The aim is to show the logic behind this algorithm. The idea behind the BP algorithm is quite simple: the output of the NN is evaluated against the desired output. If the results are not satisfactory, the connection weights between layers are modified and the process is repeated again and again until the error is small enough. A simple BP example is demonstrated in this paper, and the NN architecture is also covered. New implementations of the BP algorithm are emerging, and there are a few parameters that can be changed to improve the performance of BP.
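
To make that description concrete, here is a minimal back-propagation loop in the same spirit: the network's output is compared with the desired output, the error is propagated backwards, and the weights are adjusted repeatedly until the error is small enough. The XOR data, the 2-2-1 architecture, the sigmoid activations, and the learning rate are assumptions chosen for brevity, not the paper's example.

```python
# Minimal back-propagation example: train a 2-2-1 network on XOR until the error is small.
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(0, 1, (2, 2)), np.zeros((1, 2))   # input -> hidden
W2, b2 = rng.normal(0, 1, (2, 1)), np.zeros((1, 1))   # hidden -> output
lr = 0.5

for epoch in range(20000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Evaluate the output against the desired output.
    error = out - y
    if np.mean(error ** 2) < 1e-3:
        break
    # Backward pass: propagate the error and compute gradients.
    d_out = error * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Adjust the connection weights and repeat.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print("epochs used:", epoch, "predictions:", out.ravel().round(2))
```

The learning rate here is one of the parameters the abstract alludes to: changing it (or adding a momentum term) is a common way to speed up or stabilize BP training.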