Algorithm/Architecture Study for Artificial Neural Nets

A dynamic architecture for artificial neural networks

Artificial neural networks (ANNs) have been shown to be an effective, general-purpose approach for pattern recognition, classification, clustering, and prediction. Traditional research in this area uses a network with a sequential, iterative learning process based on the feed-forward, back-propagation algorithm. In this paper, we introduce a model that uses a different architecture from the traditional neural network to capture and forecast nonlinear processes. This approach utilizes the entire observed data set simultaneously and collectively to estimate the parameters of the model. To assess the effectiveness of this method, we applied it to a marketing data set and a standard benchmark from the ANN literature (Wolf's sunspot activity data set). The results show that this approach performs well when compared with traditional models and established research.
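The abstract does not spell out the estimation procedure, so the following Python sketch only illustrates the contrast it draws: a parameter update computed from the entire observed data set at once, rather than sample by sample as in sequential back-propagation. All names here (full_batch_step, the toy linear model) are ours, not the paper's.

```python
import numpy as np

def full_batch_step(W, X, y, lr=0.01):
    """One parameter update computed from the ENTIRE data set at once,
    in contrast to sample-by-sample (sequential) back-propagation.

    W : (d,) weight vector of a simple linear model y_hat = X @ W
    X : (n, d) all observed inputs
    y : (n,) all observed targets
    """
    y_hat = X @ W                      # predictions for every observation
    grad = X.T @ (y_hat - y) / len(y)  # gradient averaged over all data
    return W - lr * grad               # one collective update

# Toy usage: recover y = 2*x0 - x1 from 100 observations used together.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0])
W = np.zeros(2)
for _ in range(500):
    W = full_batch_step(W, X, y, lr=0.1)
print(W)  # approaches [2, -1]
```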

Towards a Theoretical Basis For Modelling of Hidden Layer Architecture In Artificial Neural Networks

Artificial neural networks (ANNs) are mathematical and computational models inspired by biological neural systems. Just as biological neural networks become experts by learning from their surroundings, ANNs can become experts in a particular area through training. Despite their many advantages, some unsolved problems remain in applying artificial neural networks. Determining the most efficient architecture for a given task is identified as one of the major issues. This paper provides a pruning algorithm, based on the backpropagation training algorithm, to obtain an optimal ANN architecture. The pruning is carried out in analogy with synaptic pruning in biological neural systems. Experiments were conducted on some well-known problems in machine learning and artificial neural networks, and the results show that the new model performs better than the initial network on the training data sets.
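The paper's exact pruning criterion is not reproduced here; a common scheme in the same spirit as biological synaptic pruning is to remove the weakest connections of a trained network. The sketch below (the name prune_by_magnitude is ours) illustrates that idea, assuming the network has already been trained with backpropagation.

```python
import numpy as np

def prune_by_magnitude(W, fraction=0.2):
    """Zero out the weakest fraction of synaptic weights, by analogy
    with synaptic pruning in biological neural systems.

    W : trained weight matrix of one layer
    fraction : proportion of connections to remove
    """
    k = int(W.size * fraction)
    if k == 0:
        return W
    # k-th smallest absolute weight becomes the cut-off
    threshold = np.partition(np.abs(W).ravel(), k - 1)[k - 1]
    pruned = W.copy()
    pruned[np.abs(pruned) <= threshold] = 0.0  # cut the weak synapses
    return pruned

# After backpropagation training, prune and (optionally) retrain:
W = np.random.default_rng(1).normal(size=(4, 8))
W_small = prune_by_magnitude(W, fraction=0.25)
print(np.count_nonzero(W_small), "of", W.size, "weights remain")
```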

A STUDY ON ARTIFICIAL NEURAL NETWORKS

A STUDY, 2018

The first step towards AI was taken by Warren McCulloch, a neurophysiologist, and the mathematician Walter Pitts. They modelled a simple neural network with electrical circuits, obtained very accurate results, and demonstrated the remarkable ability of neurons to extract information from complicated and imprecise data. During the present study it was observed that a trained neural network, expert at analyzing information, offers further advantages such as adaptive learning, real-time operation, self-organization, and fault tolerance. Unlike conventional computing, neural networks use many processing units (neurons) operating in parallel. These units need not be programmed; they function much like the human brain. We give the network examples of the problems to be solved, and these examples must be selected carefully so that training time is not wasted. At present we combine neural networks with conventional computational programming to achieve maximal efficiency, but neural networks may eventually take over. Artificial neural networks use electronic models based on the neural structure of the brain. Computers can store data in ledger-like records but have difficulty recognizing patterns, whereas the brain stores information as patterns. Artificial neurons act like real neurons and perform similar functions: they are used for speech, hearing, recognition, storing information as patterns, and many other tasks a human brain can perform. Biological neural networks are dynamically self-organizing, which is not yet true of artificial networks. Neurons work in groups and subdivide a problem in order to solve it; they are arranged in layers, and it is an art of engineering to make them solve real-world problems. Most important are the connections between the neurons, which act as the glue of the system through an excitation-inhibition process: for a constant input, one neuron excites while another inhibits, as in an addition-subtraction process. Basically, all ANNs share the same structure: an input layer, one or more hidden (or feedback) layers, and an output layer. A small sketch of this layered structure follows.
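The closing description of the layered structure can be made concrete. The following is only an illustration, not from the study: a feed-forward pass through input, hidden, and output layers, where the tanh nonlinearity lets a unit excite (positive output) or inhibit (negative output) the next layer.

```python
import numpy as np

def forward(x, W_hidden, W_out):
    """Minimal feed-forward pass through the three layer types named
    above: input -> hidden -> output. The tanh nonlinearity lets a
    unit 'excite' (positive) or 'inhibit' (negative) the next layer."""
    h = np.tanh(W_hidden @ x)   # hidden layer activations
    return W_out @ h            # output layer

rng = np.random.default_rng(2)
x = rng.normal(size=3)              # input layer: 3 features
W_hidden = rng.normal(size=(5, 3))  # 3 inputs -> 5 hidden units
W_out = rng.normal(size=(2, 5))     # 5 hidden -> 2 outputs
print(forward(x, W_hidden, W_out))
```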

An Overview of Neural Network

American Journal of Neural Networks and Applications, 2019

Neural networks represent a brain metaphor for information processing. These models are biologically inspired rather than an exact replica of how the brain actually functions. Neural networks have been shown to be very promising systems in many forecasting and business classification applications due to their ability to learn from data. This article aims to provide a brief overview of artificial neural networks. An artificial neural network learns by updating its network architecture and connection weights so that it can efficiently perform a task. It can learn either from available training patterns or automatically from examples of input-output relations. Neural network-based models continue to achieve impressive results on longstanding machine learning problems, but establishing their capacity to reason about abstract concepts has proven difficult. Building on previous efforts to address this important feature of general-purpose learning systems, our latest paper sets out an approach for measuring abstract reasoning in learning machines, and reveals some important insights about the nature of generalization itself. Artificial neural networks can learn by example, the way humans do. An artificial neural network is configured for a specific application, such as pattern recognition, through a learning process. Learning in biological systems consists of adjustments to the synaptic connections that exist between neurons, and this is true of artificial neural networks as well. Artificial neural networks can be applied to an increasing number of real-world problems of considerable complexity. They are used for solving problems that are too complex for conventional technologies, or problems that do not have an algorithmic solution.
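The remark that learning consists of adjusting connection weights from input-output examples can be illustrated with the classic delta rule. The sketch below is ours, not from the article; it assumes a single linear unit trained on labelled examples.

```python
import numpy as np

def delta_rule_update(w, x, target, lr=0.1):
    """Adjust connection weights from one training example, the
    artificial analogue of adjusting synaptic strengths:
    w_new = w + lr * (target - output) * x
    """
    output = np.dot(w, x)
    return w + lr * (target - output) * x

# Learn the AND mapping from labelled input-output examples
# (the last input component is a constant bias of 1).
examples = [([0, 0, 1], 0), ([0, 1, 1], 0), ([1, 0, 1], 0), ([1, 1, 1], 1)]
w = np.zeros(3)
for _ in range(50):
    for x, t in examples:
        w = delta_rule_update(w, np.array(x, float), t)

# w settles near the least-squares solution [0.5, 0.5, -0.25];
# thresholding the output at 0.5 reproduces AND: [0, 0, 0, 1].
print([int(np.dot(w, np.array(x, float)) > 0.5) for x, _ in examples])
```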

Foundations of Artificial Neural Networks

Models of Neurons and Perceptrons: Selected Problems and Challenges, 2018

The rapid growth of the computational power of computers is one of the basic factors in the development of computer science. Informatics is therefore applied to solving ever more complex problems and, consequently, the demand for bigger and more complex software arises. It is not always possible, however, to use classical algorithmic methods to create such software, for two reasons. First, a good model of the relation between the input and output parameters often either does not exist at all or cannot be created at the present level of scientific knowledge. It is worth mentioning that the algorithmic approach requires knowledge of the explicit form of the mapping between the aforementioned sets of parameters. Secondly, even if the model is given, the algorithmic approach can be infeasible owing to its complexity: the task may be over-complex at the stage of algorithm creation, or the implemented system may run too slowly, the latter being a critical parameter especially in on-line systems. Therefore, alternative approaches to the classical algorithmic one are being developed intensively, and artificial neural networks belong to this group of methods. Neurophysiological studies of the functional properties of nervous systems enabled researchers at the beginning of the 1940s to formulate the cybernetic model of the neuron [58] which, slightly modified, is commonly used up to the present. At the turn of the 1950s and 1960s the first artificial neural systems, PERCEPTRON and ADALINE, were constructed. They were electromechanical systems, and the first algorithms for setting the synaptic weights in such systems were worked out. Those pioneering attempts attracted attention to the possibilities of such systems; at the same time, however, significant limits were discovered. Nowadays, from the perspective of time, it is known that the limits were caused on the one hand by the lack of proper mathematical models of neural networks and, on the other hand, by the application of just one type of artificial neural network, the multilayer perceptron.
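The passage only names PERCEPTRON and ADALINE; as an illustration of what those first weight-setting algorithms looked like, here is a minimal sketch of Rosenblatt's perceptron rule. The code and the name perceptron_train are ours, not from the book.

```python
import numpy as np

def perceptron_train(X, y, epochs=20):
    """Rosenblatt's perceptron rule, one of the first algorithms for
    setting synaptic weights: nudge w only when an example falls on
    the wrong side of the boundary. Labels y are +1/-1; a constant
    bias input of 1 is appended to every example."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if yi * np.dot(w, xi) <= 0:   # misclassified (or undecided)
                w += yi * xi              # move the boundary toward xi
    return w

# Linearly separable toy data: the OR function with +/-1 labels.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([-1, 1, 1, 1])
w = perceptron_train(X, y)
Xb = np.hstack([X, np.ones((4, 1))])
print(np.sign(Xb @ w))   # [-1, 1, 1, 1]
```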

Neural Networks

Chapman & Hall/CRC Computer & Information Science Series, 2007

A Model for Artificial Neural Networks Architecture

Artificial neural networks are composed of a large number of simple computational units operating in parallel, and they therefore have the potential to provide fault tolerance. One extremely motivating property of the biological neural networks of highly developed animals, humans included, is their tolerance to injury or destruction of individual neurons. In the case of biological neural networks, a solution tolerant to the loss of neurons has a high priority, since graceful degradation of performance is very important to the survival of the organism. We propose a simple modification of the training procedure commonly used with the back-propagation algorithm in order to increase the tolerance of feed-forward multi-layered ANNs to internal hardware failures such as the loss of hidden units.
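The abstract does not give the exact modification, so the following is only a plausible sketch of the general idea: simulate hidden-unit failures during training, in the spirit of dropout, so that the network is pushed toward redundant, fault-tolerant representations. All names here are ours.

```python
import numpy as np

def forward_with_failures(x, W1, W2, fail_prob=0.1, rng=None):
    """Forward pass in which each hidden unit may 'fail' (output 0)
    with probability fail_prob, simulating internal hardware faults.
    Training through such randomized failures encourages redundancy,
    so losing a real unit later degrades performance gracefully."""
    rng = rng or np.random.default_rng()
    h = np.tanh(W1 @ x)
    alive = rng.random(h.shape) >= fail_prob   # mask of surviving units
    return W2 @ (h * alive)

rng = np.random.default_rng(3)
W1 = rng.normal(size=(8, 4))   # 4 inputs -> 8 hidden units
W2 = rng.normal(size=(1, 8))   # 8 hidden -> 1 output
x = rng.normal(size=4)
print(forward_with_failures(x, W1, W2, fail_prob=0.2, rng=rng))
```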

An introduction to neural networks

Computers & Mathematics with Applications, 1996

The presented technical report is a preliminary English translation of selected revised sections from the first part of the book Theoretical Issues of Neural Networks [75] by the first author, which represents a brief introduction to neural networks. This work does not offer a complete survey of neural network models; the exposition is focused more on the original motivations and on a clear technical description of several basic types of models. It can be understood as an invitation to a deeper study of this field. Thus, the respective background is prepared for those who have not yet met this phenomenon, so that they can appreciate the subsequent theoretical parts of the book. In addition, it can also be profitable for those engineers who want to apply neural networks in the area of their expertise. The introductory part does not require deeper preliminary knowledge; it contains many pictures, and the mathematical formalism is reduced to the lowest degree in the first chapter, being used only for the technical description of neural network models in the following chapters. We will come back to the formalization of some of the introduced models within their theoretical analysis.

The first chapter makes an effort to describe and clarify the neural network phenomenon. It contains a brief survey of the history of neurocomputing and explains the neurophysiological motivations which led to the mathematical model of a neuron and of a neural network. It shows that a particular neural network model can be determined by means of the architectural, computational, and adaptive dynamics that describe the evolution of the specific neural network parameters in time. Furthermore, it introduces neurocomputers as an alternative to the classical von Neumann computer architecture, and the appropriate areas of their application are discussed.

The second chapter deals with the classical models of neural networks. First, the historically oldest model, the network of perceptrons, is briefly mentioned. Then the model most widely applied in practice, the multi-layered neural network with the back-propagation learning algorithm, is described in detail. The respective description, besides various variants of this model, contains implementation comments as well. An explanation of the linear model MADALINE, adapted according to the Widrow rule, follows.

The third chapter concentrates on the neural network models that are exploited as autoassociative or heteroassociative memories. The principles of adaptation according to Hebb's law are explained on the example of the linear associator neural network. The next model is the well-known Hopfield network, motivated by physical theories, which is a representative of the cyclic neural networks. The analog version of this network can be used for the heuristic solving of optimization tasks (e.g., the traveling salesman problem). By physical analogy, a temperature parameter is introduced into the Hopfield network and thus a stochastic model, the so-called Boltzmann machine, is obtained. The information from this part of the book can be found in any monograph or survey article concerning neural networks. For its composition we drew mainly on the works [16, 24, 26, 27, 35, 36, 45, 73].
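The third chapter's material lends itself to a small illustration. Below is a minimal NumPy sketch, not taken from the book, of Hebbian storage and recall in a Hopfield-style autoassociative memory; the names hebb_store and hopfield_recall are ours.

```python
import numpy as np

def hebb_store(patterns):
    """Hebbian storage for a Hopfield-style autoassociative memory:
    W[i, j] accumulates the correlation of units i and j over the
    stored +/-1 patterns; self-connections are removed."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def hopfield_recall(W, x, steps=10):
    """Iterate the (synchronous) threshold update rule, which drives a
    corrupted cue toward a stored pattern."""
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)
    return x

# Store two +/-1 patterns, then recall from a corrupted cue.
patterns = np.array([[1, 1, 1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1]])
W = hebb_store(patterns)
cue = np.array([1, 1, 1, -1, -1, 1])   # first pattern, one bit flipped
print(hopfield_recall(W, cue))          # recovers [1, 1, 1, -1, -1, -1]
```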