A Recurrent Neural Network for Solving Bilevel Linear Programming Problem
Related papers
Stability Analysis by Lyapunov Function in Neural Network
Journal of Mathematics and Informatics, 2017
This paper is concerned with the stability analysis of discrete-time recurrent neural networks (RNNs) whose time delays are random variables drawn from some probability distribution. By introducing the variation probability of the time delay, a common delayed discrete-time RNN system is transformed into one with stochastic parameters. Improved conditions for the mean square stability of these systems are obtained by employing a new Lyapunov function, and additional techniques are used to achieve delay dependence. The merit of the proposed conditions lies in their reduced conservatism, which is made possible by considering not only the range of the time delays but also their variation probability distribution.
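For orientation, arguments of this kind typically rest on a quadratic Lyapunov function; a minimal sketch in our own notation (not the paper's exact construction), for a discrete-time RNN x_{k+1} = A x_k + W f(x_{k-d_k}) whose delay d_k is drawn from a known distribution:

```latex
V(x_k) = x_k^{\top} P x_k, \qquad P \succ 0, \qquad
\mathbb{E}\bigl[\, V(x_{k+1}) - V(x_k) \mid x_k \,\bigr] < 0 \quad \text{for all } x_k \neq 0 .
```

Weighting the expected decrease by the delay's variation probabilities is what produces the less conservative, probability-dependent conditions.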
On the absolute stability for recurrent neural networks with time delays
IEEE International Carpathian Control Conference, ICCC 2012, Slovakia, pp. 97–102, 2012
The paper considers two main aspects of the analysis of the dynamical properties of recurrent neural networks with time delays: absolute stability and the global qualitative behavior of the system. The first aspect refers to the global asymptotic stability of the zero equilibrium, meaning that only a single steady state of the neural network matters (the case of optimizers). The second aspect concerns the global behavior of systems with several equilibria. We discuss the difficulties and the open problems concerning this second aspect.
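As a gloss on the terminology (ours, not the authors'): absolute stability requires the zero equilibrium of the delayed network

```latex
\dot{x}(t) = -A x(t) + W f\bigl(x(t)\bigr) + W_1 f\bigl(x(t-\tau)\bigr), \qquad
0 \le \frac{f_i(s)}{s} \le k_i \quad (s \neq 0),
```

to be globally asymptotically stable for every activation f in the sector class, not merely for one fixed nonlinearity.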
IEEE Transactions on Neural Networks, 2002
This paper presents new theoretical results on the global exponential stability of recurrent neural networks with bounded activation functions and bounded time-varying delays in the presence of strong external stimuli. It is shown that the Cohen-Grossberg neural network is globally exponentially stable provided the magnitude of the input vector exceeds a given threshold. As special cases, the Hopfield neural network and the cellular neural network are examined in detail. In addition, it is shown that the criteria herein, if partially satisfied, can still be used in combination with existing stability conditions. Simulation results are also discussed in two illustrative examples.
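For reference, the Cohen-Grossberg model discussed here has the standard form (notation ours):

```latex
\dot{x}_i(t) = -a_i\bigl(x_i(t)\bigr) \Bigl[\, b_i\bigl(x_i(t)\bigr)
  - \sum_{j=1}^{n} w_{ij}\, f_j\bigl(x_j(t - \tau_{ij}(t))\bigr) - u_i \Bigr],
\qquad i = 1, \dots, n,
```

with the Hopfield network recovered by taking the amplification a_i constant and b_i linear; the external stimuli u_i are what must be large enough to meet the paper's exponential-stability threshold.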
2011 IEEE International Symposium on Computer-Aided Control System Design (CACSD), 2011
A framework and stability conditions are presented for the analysis of stability of three different classes of dynamic artificial neural networks: (1) neural state space models, (2) global input-output models, and (3) dynamic recurrent neural networks. The models are transformed into a standard nonlinear operator form for which linear matrix inequality-based stability analysis is applied. Theory and numerical examples are used to draw connections and make comparisons to stability conditions reported in the literature for dynamic artificial neural networks.
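The LMI machinery referred to here can be exercised with off-the-shelf solvers. A minimal sketch of the core feasibility check, with a made-up system matrix and CVXPY as our assumed tool (nothing here comes from the paper): certify stability of x_{k+1} = A x_k by finding P ≻ 0 with AᵀPA − P ≺ 0.

```python
import cvxpy as cp
import numpy as np

# Hypothetical system matrix (illustrative only, not from the paper).
A = np.array([[0.5, 0.2],
              [0.1, 0.7]])
n = A.shape[0]
eps = 1e-6  # margin to enforce strict definiteness numerically

P = cp.Variable((n, n), symmetric=True)
S = cp.Variable((n, n), symmetric=True)  # auxiliary symmetric copy of the LMI expression
constraints = [
    S == A.T @ P @ A - P,       # discrete-time Lyapunov LMI expression
    P >> eps * np.eye(n),       # P positive definite
    S << -eps * np.eye(n),      # A^T P A - P negative definite
]
problem = cp.Problem(cp.Minimize(0), constraints)  # pure feasibility problem
problem.solve()
print("stability certificate found:", problem.status == cp.OPTIMAL)
```

The neural network models in the paper are nonlinear, so the actual conditions bound the nonlinearities by sector constraints and lead to larger block LMIs, but the feasibility check retains this shape.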
Recurrent neural dynamic models for equilibrium and eigenvalue problems
Mathematical and Computer Modelling, 2002
1995
Sufficient conditions for global asymptotic stability of discrete-time multilayer recurrent neural networks are derived in this paper. Both the autonomous and the non-autonomous case are treated. Multilayer recurrent neural networks are interpreted as so-called NLq systems, which are nonlinear systems consisting of an alternating sequence of linear and static nonlinear operators that satisfy a sector condition (q 'layers'). It turns out that many problems arising in recurrent neural networks and in system and control theory can be interpreted as NLq systems, such as multilayer Hopfield nets, locally recurrent globally feedforward networks, generalized cellular neural networks, neural state space control systems, the Lur'e problem, linear fractional transformations with a real diagonal uncertainty block, digital filters with overflow characteristics, etc. In this paper we discuss applications of the theorems to the design of neural state space control systems (emulator approach). Narendra's dynamic backpropagation procedure is modified in order to assess closed-loop stability. The new theory also makes it possible to consider reference inputs belonging to the class of functions l2 instead of specific reference inputs.
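Schematically, and compressing the original definition (exogenous inputs omitted), an NLq system alternates static sector nonlinearities with linear maps:

```latex
x_{k+1} = \Gamma_1\Bigl( V_1\, \Gamma_2\bigl( V_2 \cdots \Gamma_q ( V_q\, x_k ) \cdots \bigr) \Bigr),
```

where each Γ_i is a diagonal operator whose components γ satisfy the sector condition 0 ≤ γ(s)/s ≤ 1; a one-layer recurrent network x_{k+1} = σ(W x_k) is the q = 1 case.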
Stability analysis of recurrent neural networks with piecewise constant argument of generalized type
In this paper, we apply the method of Lyapunov functions for differential equations with piecewise constant argument of generalized type to a model of recurrent neural networks (RNNs). The model involves both advanced and delayed arguments. Sufficient conditions are obtained for global exponential stability of the equilibrium point. Examples with numerical simulations are presented to illustrate the results.
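A typical model of this type (our notation) routes part of the state dependence through a piecewise constant argument β(t):

```latex
x'(t) = -A x(t) + B f\bigl(x(t)\bigr) + C g\bigl(x(\beta(t))\bigr) + d,
\qquad \beta(t) = \zeta_k \ \text{ for } t \in [\theta_k, \theta_{k+1}),
```

with θ_k ≤ ζ_k ≤ θ_{k+1}, so within each interval the argument is delayed for t > ζ_k and advanced for t < ζ_k, which is why both cases arise in the analysis.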
IEEE Transactions on Signal Processing, 1997
It is known that many discrete-time recurrent neural networks, such as neural state space models, multilayer Hopfield networks, and locally recurrent globally feedforward neural networks, can be represented as NLq systems. Sufficient conditions for global asymptotic stability and input/output stability of NLq systems are available, including three types of criteria: diagonal scaling, and criteria depending on diagonal dominance and condition number factors of certain matrices. In this paper, it is discussed how Narendra's dynamic backpropagation procedure, used for identifying recurrent neural networks from I/O measurements, can be modified with an NLq stability constraint in order to ensure globally asymptotically stable identified models. An example illustrates how system identification of an internally stable model, corrupted by process noise, may lead to unwanted limit cycle behaviour, and how this problem can be avoided by adding the stability constraint.
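The general idea of constraining identification so the fitted model stays stable can be sketched in a few lines. The projection below is a crude stand-in for the paper's NLq constraint (and is not Narendra's procedure): it uses the simple sufficient condition that a 1-Lipschitz sector nonlinearity composed with a weight matrix of spectral norm below one yields a contraction, hence a globally asymptotically stable map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy recurrent model: x_{k+1} = tanh(W x_k + U u_k).
# tanh is 1-Lipschitz, so ||W||_2 < 1 makes the autonomous map a contraction.
W = rng.normal(scale=0.5, size=(4, 4))

def project_stable(W, margin=0.99):
    """Rescale W so its spectral norm stays below `margin`."""
    s = np.linalg.norm(W, 2)  # largest singular value
    return W * (margin / s) if s > margin else W

# Inside an identification loop: take a gradient step, then project back
# onto the stable set, so every iterate is a stable model.
W = W - 0.01 * rng.normal(size=W.shape)  # placeholder for a real gradient step
W = project_stable(W)
print("spectral norm after projection:", np.linalg.norm(W, 2))
```

The NLq criteria in the paper are finer than this norm bound (diagonal scaling, diagonal dominance, condition number factors), but they enter the training loop in the same constrained fashion.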
Global asymptotic stability of a general class of recurrent neural networks with time-varying delays
IEEE Transactions on Circuits and Systems I-regular Papers, 2003
This paper investigates the absolute exponential stability of a general class of delayed neural networks, which require the activation functions to be partially Lipschitz continuous and monotone nondecreasing only, but not necessarily differentiable or bounded. Three new sufficient conditions are derived to ascertain whether or not the equilibrium points of delayed neural networks with additively diagonally stable interconnection matrices are absolutely exponentially stable, by using a delayed Halanay-type inequality and a Lyapunov function. The stability criteria are also suitable for delayed optimization neural networks and delayed cellular neural networks, whose activation functions are often nondifferentiable or unbounded. The results herein answer the question: if a neural network without any delay is absolutely exponentially stable, then under what additional conditions is the neural network with delay also absolutely exponentially stable?
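The classical scalar Halanay inequality on which such proofs lean (the paper uses a delayed variant adapted to its setting) reads:

```latex
v'(t) \le -a\, v(t) + b \sup_{t-\tau \le s \le t} v(s), \quad a > b \ge 0
\;\Longrightarrow\;
v(t) \le \Bigl( \sup_{t_0-\tau \le s \le t_0} v(s) \Bigr)\, e^{-\lambda (t - t_0)},
```

where λ > 0 is the unique positive root of λ = a − b e^{λτ}. Applied to a Lyapunov function along trajectories, it converts the delayed differential inequality directly into an exponential decay estimate.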