Feedback Network - an overview

CHNNs (continuous Hopfield neural networks) are single-layer feedback networks that operate in continuous time, with continuous node (neuron) input and output values in the interval [−1, 1].

From: Journal of Advanced Research, 2014

Chapters and Articles



A review on artificial intelligence based load demand forecasting techniques for smart grid and buildings

Muhammad Qamar Raza, Abbas Khosravi, in Renewable and Sustainable Energy Reviews, 2015

1.14.2 Feedback neural network

A feedback neural network allows information to move in both directions, forming a closed-loop network architecture as shown in Fig. 15. The output of the network influences its input, and the network's objective function is pursued by back-propagating the error information. The feedback network is dynamic in nature because its state changes continuously until equilibrium is reached. When the network inputs change, the network moves from the previous equilibrium state toward a new one. Feedback neural networks are suitable for dynamic and complex processes, as well as for time-varying or time-lagged pattern problems [48].
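To make the idea of a feedback network settling into an equilibrium state concrete, here is a minimal sketch (not taken from the cited review; the tanh activation, weights, and sizes are assumed for illustration) that iterates a recurrent state update until the state stops changing, then repeats after the input changes:

```python
import numpy as np

def settle(W, b, x, state, tol=1e-6, max_iter=1000):
    """Iterate s <- tanh(W s + x + b) until the state stops changing."""
    for i in range(max_iter):
        new_state = np.tanh(W @ state + x + b)
        if np.max(np.abs(new_state - state)) < tol:
            return new_state, i + 1   # equilibrium reached
        state = new_state
    return state, max_iter

rng = np.random.default_rng(0)
W = 0.4 * rng.standard_normal((5, 5))   # small recurrent weights, assumed for stability
b = np.zeros(5)

x1 = rng.standard_normal(5)
s_eq, n1 = settle(W, b, x1, state=np.zeros(5))
print(f"first input: equilibrium after {n1} updates")

# Change the input: the network leaves the old equilibrium and settles into a new one.
x2 = rng.standard_normal(5)
s_eq2, n2 = settle(W, b, x2, state=s_eq)
print(f"new input:   equilibrium after {n2} updates")
```

Keeping the recurrent weights small makes the update behave as a contraction in practice, so the state settles after a few iterations; with larger weights the state may oscillate and never reach equilibrium.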

Read full article

URL:

https://www.sciencedirect.com/science/article/pii/S1364032115003354

Digital twin driven design evaluation

Lei Wang, ... A.Y.C. Nee, in Digital Twin Driven Smart Design, 2020

Feedback network

The feedback network is similar to the prediction network in structure. Taking Second_layer as an example, a recursive algorithm is adopted to calculate the new network node $SL_{ij}^{k+}$:

(5.9) $E_{total} = \sum_{u=1}^{r} \frac{1}{2}\left(FEI_u^{+} - FEI_u\right)^2 = \sum_{u=1}^{r} E_u$

(5.10) $\dfrac{\partial E_{total}}{\partial SL_{ij}^{k}} = \dfrac{\partial E_{total}}{\partial out_j^{k+1}} \times \dfrac{\partial out_j^{k+1}}{\partial net_j^{k+1}} \times \dfrac{\partial net_j^{k+1}}{\partial SL_{ij}^{k}}$

(5.11) $\dfrac{\partial E_{total}}{\partial out_1^{k+1}} = \dfrac{\partial E_{total}}{\partial out_1^{k+2}} \times \dfrac{\partial out_1^{k+2}}{\partial net_1^{k+2}} \times \dfrac{\partial net_1^{k+2}}{\partial out_1^{k+1}}$

(5.12) $\dfrac{\partial E_{total}}{\partial out_j^{h}} = \sum_{u=1}^{r} \dfrac{\partial E_u}{\partial out_j^{h}}$

(5.13) $SL_{ij}^{k+} = SL_{ij}^{k} - \eta \times \dfrac{\partial E_{total}}{\partial SL_{ij}^{k}}$

where $FEI_u^{+}$ denotes the actual value of $FEI_u$, $net_j^{k}$ denotes the input value of the $j$th node of the $k$th layer of the network, $out_j^{k}$ denotes the output value of the $j$th node of the $k$th layer of the network, $\eta$ is the learning rate, and $r$ is the number of FEIs.
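To illustrate how Eqs. (5.9)–(5.13) fit together, the following sketch applies the chain rule to one layer's weights and performs the update of Eq. (5.13). It is only an assumed, minimal rendering (the sigmoid activation, the random shapes, and the variable names are illustrative, not the authors' implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta = 0.1                                # learning rate eta in Eq. (5.13), assumed value
rng = np.random.default_rng(1)

out_k = rng.random(4)                    # outputs of layer k (inputs to layer k+1), illustrative
SL = rng.standard_normal((3, 4))         # weights SL_ij^k between layer k and layer k+1
FEI_target = rng.random(3)               # FEI_u^+ : actual (target) values, made up here
r = FEI_target.size                      # number of FEIs

# Forward pass through the (assumed) output layer.
net_k1 = SL @ out_k                      # net_j^{k+1}
FEI = sigmoid(net_k1)                    # out_j^{k+1}, taken here as the predicted FEI_u

# Eq. (5.9): total error as a sum of per-FEI squared errors.
E_total = np.sum(0.5 * (FEI_target - FEI) ** 2)

# Eq. (5.10): chain rule dE/dSL = dE/dout * dout/dnet * dnet/dSL.
dE_dout = -(FEI_target - FEI)            # dE_total / dout_j^{k+1}
dout_dnet = FEI * (1.0 - FEI)            # sigmoid derivative
delta = dE_dout * dout_dnet              # dE_total / dnet_j^{k+1}
grad_SL = np.outer(delta, out_k)         # dnet_j^{k+1} / dSL_ij^k = out_i^k

# Eq. (5.13): gradient-descent update of the node weights.
SL_new = SL - eta * grad_SL
print(f"E_total = {E_total:.4f}, weight update norm = {np.linalg.norm(eta * grad_SL):.4f}")
```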

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128189184000051

Low-frequency Oscillators

Brahim Haraoubia, in Nonlinear Electronics 1, 2018

2.2 Principle of sinusoidal feedback oscillator

Irrespective of its type, a sinusoidal feedback oscillator has the block diagram shown in Figure 2.6. The feedback network is a passive, dissipative oscillating circuit whose role is to fix the oscillation frequency. Left on its own, this feedback network cannot produce sustained oscillations.

Figure 2.6

Figure 2.6. Block diagram of a sinusoidal feedback oscillator

Thus, in order to produce and sustain an oscillation, this network must be supplied with external energy. This role can be fulfilled by an amplifier circuit, for example. The limiter circuit stabilizes the amplitude of the output signal at a well-established limit.
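As a rough numerical illustration of this principle, the sketch below assumes a Wien-bridge RC feedback network (an example not discussed in this chapter) and checks the usual loop-gain condition: at the frequency where the feedback network contributes zero phase shift, the amplifier must supply at least enough gain to make up for the network's attenuation, otherwise the oscillation dies out.

```python
import numpy as np

R, C = 10e3, 10e-9              # assumed feedback-network component values
f = np.logspace(2, 5, 20000)    # frequency sweep, 100 Hz .. 100 kHz
w = 2 * np.pi * f

# Wien-bridge feedback factor: beta = Z_parallel / (Z_series + Z_parallel)
Zs = R + 1.0 / (1j * w * C)          # series RC branch
Zp = R / (1.0 + 1j * w * R * C)      # parallel RC branch
beta = Zp / (Zs + Zp)

# The network "fixes the frequency": oscillation occurs where the phase of beta is zero.
idx = np.argmin(np.abs(np.angle(beta)))
f_osc = f[idx]
A_required = 1.0 / np.abs(beta[idx])     # amplifier gain needed for |A*beta| >= 1

print(f"zero-phase frequency ~ {f_osc:.0f} Hz (theory: {1/(2*np.pi*R*C):.0f} Hz)")
print(f"amplifier gain required to sustain oscillation ~ {A_required:.2f} (theory: 3)")
```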

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781785483004500026

An introduction to deep learning applications in biometric recognition

Akash Dhiman, ... Deepak Kumar Sharma, in Trends in Deep Learning Methodologies, 2021

2.1.2 Recurrent neural networks

Simple deep neural networks are considered "feed-forward networks," in which input data is mapped to output data. The architecture is arranged so that nodes in the nth layer connect only to nodes in the (n + 1)th layer, propagating data forward. An RNN modifies this architecture by adding a feedback path, thus allowing the properties of a feedback network to influence the learning process [18]. In an abstract sense, the feedback network allows the deep neural network to learn the temporal dependency of the data being analyzed: data from the same nodes at a previous point in time influence their data at the present time, as illustrated in Fig. 1.5. RNNs are used in the analysis of time-dependent functions and/or unsegmented tasks. An RNN also has some well-known modifications that further amplify the memory aspect of the architecture. The long short-term memory (LSTM) architecture has special nodes that store temporal information and gates that control when to store information and when to forget it [19]. Another derivative is the gated recurrent unit (GRU) network, which is similar to LSTM but more efficient and less complex, as it has fewer parameters [19]. Their application in biometrics involves the analysis of behavioral biometrics such as signatures and gait recognition.

Figure 1.5. Working of a recurrent neural network. RNN, Recurrent neural network.
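The following minimal sketch (illustrative sizes and random weights, not from the chapter) shows the feedback path described above: the hidden state computed at the previous time step is fed back and mixed with the current input, which is how the temporal dependency in Fig. 1.5 is realized.

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_hidden, n_out = 3, 8, 2

# Illustrative weights: input->hidden, hidden->hidden (the feedback path), hidden->output.
W_xh = rng.standard_normal((n_hidden, n_in)) * 0.1
W_hh = rng.standard_normal((n_hidden, n_hidden)) * 0.1   # feedback connection
W_hy = rng.standard_normal((n_out, n_hidden)) * 0.1

def rnn_forward(sequence):
    h = np.zeros(n_hidden)                 # hidden state, initially empty "memory"
    outputs = []
    for x_t in sequence:
        # The current input is mixed with the previous hidden state (the feedback signal).
        h = np.tanh(W_xh @ x_t + W_hh @ h)
        outputs.append(W_hy @ h)
    return np.array(outputs)

sequence = rng.standard_normal((5, n_in))  # a short, unsegmented input sequence
print(rnn_forward(sequence).shape)         # (5, 2): one output per time step
```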

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128222263000015

Use of artificial neural networks (ANNs) in colour measurement

M. Senthilkumar, in Colour Measurement, 2010

5.3.3 Types of network

Feed-forward networks

A general feed-forward network is illustrated in Fig. 5.1. This is a feed-forward, fully connected hierarchical network consisting of an input layer, one or more middle or hidden layers and an output layer. The internal layers are called ‘hidden’ because they only receive internal inputs and produce internal outputs. This network allows signals to travel only from input to output. There is no feedback (loops), i.e. the output of any layer does not affect that same layer. Feedforward ANNs tend to be straightforward networks that associate inputs with outputs.
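As a minimal sketch of such a network (illustrative layer sizes and random weights, not from the chapter), the snippet below passes a signal from the input layer through one hidden layer to the output layer, with no loops; contrast it with the feedback case described next.

```python
import numpy as np

rng = np.random.default_rng(7)
n_in, n_hidden, n_out = 4, 6, 2

W1, b1 = rng.standard_normal((n_hidden, n_in)), np.zeros(n_hidden)   # input -> hidden
W2, b2 = rng.standard_normal((n_out, n_hidden)), np.zeros(n_out)     # hidden -> output

def feed_forward(x):
    hidden = np.tanh(W1 @ x + b1)     # "hidden": receives only internal inputs
    return np.tanh(W2 @ hidden + b2)  # output depends on hidden activities and weights only

print(feed_forward(rng.standard_normal(n_in)))   # a single forward pass, no feedback
```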

Feedback networks

Feedback networks can have signals travelling in both directions by introducing loops in the network. Networks of this type operate by allowing neighbouring neurons to adjust other nearby neurons, either in a positive or a negative direction. Feedback networks change continuously until they reach an equilibrium point, where they remain until the input changes and a new equilibrium needs to be found. Feedback architectures are also referred to as interactive or recurrent neural networks.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781845695590500042

Transistor Circuits

Horace G. Jackson, in Reference Data for Engineers (Ninth Edition), 2002

Basic Feedback Circuit Topologies

Preliminary to the presentation of the feedback topologies, consider the circuit diagrams of four basic classes of amplifier (Fig. 21). The classification is based on the magnitude of the input and output impedance relative to the source and load impedance. The ideal characteristics for each class of amplifier are given in Table 13.

Fig. 21. Amplifier classifications.

TABLE 13. IDEAL AMPLIFIER CHARACTERISTICS

Parameter | Voltage Amplifier | Current Amplifier | Transconductance Amplifier | Transresistance Amplifier
Input resistance (ri) | →∞ (>> R1) | →0 (<< R1) | →∞ (>> R1) | →0 (<< R1)
Output resistance (ro) | →0 (<< RL) | →∞ (>> RL) | →∞ (>> RL) | →0 (<< RL)
Transfer characteristic (a) | av = vo/vs | ai = io/is | gm = io/vs | rm = vo/is

Note: Transconductance (gm) used here refers to an amplifier parameter, not necessarily just a device parameter.

A feedback amplifier is described in terms of the way in which the feedback network is connected to the basic amplifier. There are four basic feedback circuit topologies. These are illustrated in the block diagrams of Fig. 22. Clearly, in Fig. 22A the feedback network is connected in series with the input terminals of the basic amplifier and in shunt with the output terminals. Notice:

Fig. 22. Feedback-amplifier topologies.

With series feedback at the input, voltages vs and vf are algebraically summed.

With shunt feedback at the input, currents is and if are algebraically summed.

With series feedback at the output, a current io is sampled.

With shunt feedback at the output, a voltage vo is sampled.

Included in Fig. 22 are simple examples of each feedback connection, implemented with bipolar transistors. Especially note the correspondence between each circuit schematic and the related block diagram. To avoid complexity, all biasing resistors have been omitted from the circuit diagrams, but it is assumed that all transistors are biased in the forward-active region to yield a high-gain amplifier.

Presented below is a method of analysis of feedback amplifiers. The design of a feedback amplifier would follow a similar procedure.

1. Identify the feedback topology:

   A. Is the feedback signal sf applied in series (vf) or in shunt (if) with the signal source ss?

   B. Is the sampled signal so obtained at the output node (vo) or from the output loop (io)?

2. Draw the basic amplifier circuit with the feedback set to zero; that is:

   A. For the correct input circuit:
      (a) With shunt sampling, short-circuit the output nodes to set vo = 0.
      (b) With series sampling, open-circuit the output loop to set io = 0.

   B. For the correct output circuit:
      (a) With series summing, open-circuit the input loop to set vf = 0.
      (b) With shunt summing, short-circuit the input nodes to set if = 0.

3. Indicate sf and so, and solve for the feedback factor (f = sf/so).

4. Evaluate the open-loop gain function (a).

5. From a and f, find T, Ds, A, Ri, and Ro.

Information to aid in the analysis and design of feedback amplifiers is summarized in Table 14. Notice that the effect of negative feedback is to modify the open-loop parameters of an amplifier so that the closed-loop performance approaches the ideal characteristics as listed in Table 13.

TABLE 14. FEEDBACK-AMPLIFIER ANALYSIS

 | Series-Shunt | Shunt-Series | Series-Series | Shunt-Shunt
Input signal (si) | vi | ii | vi | ii
Feedback signal (sf) | vf | if | vf | if
Output signal (so) | vo | io | io | vo
Loading of feedback network, at input | Short output node | Open output loop | Open output loop | Short output node
Loading of feedback network, at output | Open input loop | Short input node | Open input loop | Short input node
To calculate feedback factor | Drive feedback network with a voltage and calculate open-circuit voltage vf | Drive feedback network with a current and calculate short-circuit current if | Drive feedback network with a current and calculate open-circuit voltage vf | Drive feedback network with a voltage and calculate short-circuit current if
Feedback factor (f) | vf/vo | if/io | vf/io | if/vo
Open-loop gain (a) | av = vo/vi | ai = io/ii | gm = io/vi | rm = vo/ii
Loop gain (T) | av·f | ai·f | gm·f | rm·f
Closed-loop gain (A) | Av = av/(1 + T) | Ai = ai/(1 + T) | Gm = gm/(1 + T) | Rm = rm/(1 + T)
Input resistance (Ri) | ri(1 + T) | ri/(1 + T) | ri(1 + T) | ri/(1 + T)
Output resistance (Ro) | ro/(1 + T) | ro(1 + T) | ro(1 + T) | ro/(1 + T)
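As a numerical companion to Table 14, the snippet below evaluates the series-shunt (voltage-amplifier) column for an invented set of open-loop values; it is only a sketch of how the table's relations are used, not part of the chapter.

```python
# Series-shunt (voltage) feedback amplifier, using the Table 14 relations.
# Open-loop values below are illustrative, not from the chapter.
a_v = 2000.0      # open-loop voltage gain a = vo/vi
f = 0.01          # feedback factor f = vf/vo (e.g., from a resistive divider)
r_i = 10e3        # open-loop input resistance, ohms
r_o = 1e3         # open-loop output resistance, ohms

T = a_v * f                 # loop gain
A_v = a_v / (1 + T)         # closed-loop gain -> approaches 1/f for large T
R_i = r_i * (1 + T)         # series mixing raises the input resistance
R_o = r_o / (1 + T)         # shunt sampling lowers the output resistance

print(f"T = {T:.1f}, Av = {A_v:.2f} (1/f = {1/f:.0f}), "
      f"Ri = {R_i/1e3:.0f} kOhm, Ro = {R_o:.1f} Ohm")
```

The printout shows the familiar effects of negative feedback: the closed-loop gain approaches 1/f, the input resistance rises, and the output resistance falls, moving the amplifier toward the ideal voltage-amplifier characteristics of Table 13.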

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780750672917500212

Intelligent Control Strategies

Zhao-Dong Xu, ... Fei-Hong Xu, in Intelligent Vibration Control in Civil Engineering Structures, 2017

2.10.1 Basic Principle

The neural network was proposed by studying how humans process information, drawing on modern neurobiology and cognitive science. It can be seen as a network composed of a large number of interconnected neurons (processing units). The neural network has the characteristics of strong adaptability, learning ability, nonlinear mapping ability, robustness, and fault tolerance.

The neurons can transmit information, and their models can be classified into three types: threshold units, linear units, and nonlinear units. Once the neuron model is determined, the performance and ability of a neural network mainly depend on the topological structure and the learning method. The topological structures of neural networks are usually divided into the following categories:

Feedforward network. The neurons of the network are arranged layer by layer, and each neuron is connected only to neurons in the preceding layer. The top layer is the output layer, and there can be one or more hidden layers. The feedforward network is widely used in practical applications such as the perceptron.

Feedback network. The network itself is a forward model; the difference between the feedforward and the feedback model is that the feedback network contains feedback loops.

Self-organizing network. Lateral inhibition and excitation among neurons in the same layer are realized through the interconnection of the neurons; thus the neurons can be grouped into several categories, and each category acts as a whole.

Interconnection network. The interconnection network includes two categories: local interconnection and global interconnection. In the global interconnection network each neuron is connected to all the other neurons, while in the local interconnection network some of the neurons are not connected. The Hopfield network and the Boltzmann machine belong to the interconnection network category.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780124058743000023

A survey on artificial neural networks application for identification and control in environmental engineering: Biological and chemical systems with uncertain models

Alexander Poznyak, ... Tatyana Poznyak, in Annual Reviews in Control, 2019

2.4 DNNs

DNNs, also known as Auto-Associative or Feedback Networks, define a class of ANNs in which the connections between units and outputs form a directed cycle. This creates an internal state of the network (x), yielding dynamic, temporally dependent behavior. Unlike SNNs, DNNs can use their internal memory to process arbitrary sequences of inputs. In a DNN, the signal travels in both the forward and backward directions because of loops introduced in the network architecture or topology. The main simple structures of DNNs are:

Hopfield network. The Hopfield network is of historic interest, although it is not a general DNN. It is not designed to process sequences of patterns (see Fig. 2); instead, the Hopfield DNN requires stationary inputs. All connections between neurons in different layers are symmetric. Invented by Hopfield (1982), it guarantees that its output dynamics converge to the available reference output sets. If the connections are trained using Hebbian learning, the Hopfield DNN acts as a robust content-addressable memory, resistant to connection alteration. A variant of the Hopfield DNN is the bidirectional associative memory (BAM). The BAM has two layers, either of which can be driven as an input to recall an association and produce an output on the other layer. Such DNNs motivated the auto-encoder RNN structure.

Fig. 2

Fig. 2. Graphical representation of a regular multilayer recurrent variant of a feed-forward ANN.

Elman network. Jeff Elman proposed this basic DNN architecture, one of the first well-defined multilayer DNNs. It is a three-layer network (arranged horizontally as x, y, and z in Fig. 3), augmented with a set of context units named m. There are connections from the middle (hidden) layer to these context units, fixed with a unitary weight (Elman, 1993). At each time step, the input is propagated in a standard feed-forward fashion, and then a learning rule is applied. The fixed back-connections force the context units always to maintain a copy of the previous values of the hidden units (since they propagate over the connections before the learning rule is applied). Thus the network can maintain a sort of state, allowing it to perform tasks such as sequence prediction that are beyond the capacity of a standard multi-layer perceptron.

Fig. 3

Fig. 3. Graphical representation of an Elman class of recursive ANN.

The Jordan network. Jordan networks, proposed by Jordan (1996), are very similar to Elman networks. The context units are, however, fed from the output layer instead of the hidden layer. Such feedback connections motivated the further development of DifNN. The context units in a Jordan network are also referred to as the state layer; they have a recurrent connection to themselves, with no other nodes on this connection. The mathematical models of the Elman and Jordan networks (only some parameters differ) are governed by the following system of equations

(4) $h_t = \sigma_h(W_h x_t + U_h y_{t-1} + b_h)$, $\qquad y_t = \sigma_y(W_y h_t + b_y)$

where $x_t$ is the input vector, $h_t$ is the hidden-layer vector, $y_t$ is the output vector, $W_h$, $U_h$, $W_y$ are weighting matrices, $b_h$, $b_y$ are bias vectors, and $\sigma_h$ and $\sigma_y$ are activation vector functions with the widely used components. The terms W, U and b affect the dynamics of (4) in either linear or nonlinear form. The effect of W and U can be identified from the nonlinear form of (4) using the concept of the Cauchy equation. The parameter b, on the other hand, provides the bias of the activation functions, endorsing the ability to approximate dynamics that do not vanish at the origin, which are common in biological systems. Such dynamical plants have equilibrium points outside the origin that are not known from the beginning. This condition introduces new open problems of adjusting such a bias on-line, which is not a regular research area in ANNs.
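A minimal sketch of the recurrence (4) follows (illustrative dimensions and random weights; as written above, the context feedback enters the hidden layer from the previous output y_{t−1}, i.e. the Jordan-style form of the shared model).

```python
import numpy as np

rng = np.random.default_rng(3)
n_x, n_h, n_y = 3, 5, 2

W_h = rng.standard_normal((n_h, n_x)) * 0.2
U_h = rng.standard_normal((n_h, n_y)) * 0.2    # feedback from the previous output (context units)
b_h = np.zeros(n_h)
W_y = rng.standard_normal((n_y, n_h)) * 0.2
b_y = np.zeros(n_y)

sigma_h, sigma_y = np.tanh, np.tanh            # activation vector functions

def run(sequence):
    y_prev = np.zeros(n_y)                     # context/state layer, initially zero
    outputs = []
    for x_t in sequence:
        h_t = sigma_h(W_h @ x_t + U_h @ y_prev + b_h)   # Eq. (4), hidden layer
        y_t = sigma_y(W_y @ h_t + b_y)                  # Eq. (4), output layer
        outputs.append(y_t)
        y_prev = y_t                           # feedback for the next time step
    return np.array(outputs)

print(run(rng.standard_normal((4, n_x))).shape)   # (4, 2): one output per time step
```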

Read full article

URL:

https://www.sciencedirect.com/science/article/pii/S1367578819300094

Volume 1

R. Nayak, ... B.K.H. Ting, in Computational Mechanics–New Frontiers for the New Millennium, 2001

What is a Neural Network?

ANNs are a powerful general-purpose tool applied to many tasks where data relationships have to be learned, or where decision processes and predictions have to be modelled from examples. Given the description of a set of examples, ANN methods determine a procedure for correctly predicting new, unseen examples (Browne 1997).

ANNs represent the computational paradigm that is based on the way biological nervous systems, such as the brain, process information. An ANN is a parallel distributed information processing structure consisting of processing elements interconnected via unidirectional signal channels called links.

An ANN consists of one or more layers of nodes configured in regular and highly connected topologies. The commonest type of ANN consists of three layers: an input layer (consists of input nodes), an output layer (consists of output nodes) and a hidden layer (consists of hidden nodes). Raw information is fed into the network via input nodes. The activities of input nodes along with the weights on links between input and hidden nodes determine outputs of hidden nodes. Behaviour of the output nodes depends on the activities of hidden nodes and the weights on links between hidden and output nodes.

A feedforward network allows signals to move from input to output nodes only. There is no feedback from output to input/hidden nodes or lateral connections among the same layer. A feedback network allows signals to travel in both directions by introducing loops in the network. For example in the recurrent model, outputs from hidden nodes feed back to some of the input nodes.

There are single-layer and multi-layer architectures. In single-layer architectures, for example the Hopfield model (Browne 1997), a single layer of nodes forms the topology, and the output from each node feeds back to all of its neighbours. In multi-layer architectures, several layers of nodes form the topology.

Neural networks have the capability of changing so that inputs are transformed into desired outputs; this is called neural network learning or training. These changes are generally produced by sequentially applying input values to the network while adjusting the network weights. This is similar to learning in biological systems, which involves adjustments to the synaptic connections that exist between the neurones. During the learning process, the network weights converge to values such that each input vector produces the desired output vector.

There are three major categories of learning: (1) supervised, in which the network is provided with the expected output and trained to respond correctly; (2) unsupervised, in which the network is given no prior knowledge of the expected output and is trained to discover structure in the presented inputs; and (3) reinforcement, in which the network is not provided with explicit outputs but is periodically given performance indicators.
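As a small sketch of category (1), supervised learning (a single-layer, sigmoid delta-rule example with made-up data, not from the chapter), the snippet below applies input vectors sequentially and adjusts the weights from the error against the expected output until each input produces approximately the desired output.

```python
import numpy as np

# Made-up training set: 2-bit inputs mapped to a desired scalar output (logical OR).
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
d = np.array([0., 1., 1., 1.])

rng = np.random.default_rng(0)
w = rng.standard_normal(2) * 0.1
b = 0.0
lr = 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(2000):
    for x, target in zip(X, d):                # apply input vectors sequentially
        y = sigmoid(w @ x + b)                 # network response
        err = target - y                       # expected output is provided (supervised)
        w += lr * err * y * (1 - y) * x        # adjust the weights from the error
        b += lr * err * y * (1 - y)

print(np.round(sigmoid(X @ w + b), 2))         # each input now yields ~ the desired output
```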

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780080439815501322