Dynamical system parameter identification using deep recurrent cell networks

Identification of linear and nonlinear dynamic systems using recurrent neural networks

Artificial Intelligence in Engineering, 1993

This paper describes the use of Elman-type recurrent neural networks to identify dynamic systems. Networks as originally designed by Elman (Cognitive Sci., 1990, 14, 179-211), and also networks in which self-connections are made to the context units, were employed to identify a variety of linear and nonlinear systems. It was found that the latter networks were more versatile than the basic Elman nets in being able to model the dynamic behaviour of high-order linear and nonlinear systems.
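
As a rough illustration of the architecture discussed above, the sketch below implements an Elman-style cell in which the context units carry an additional self-connection. Layer sizes, the learnable self-connection gain `alpha`, and all names are illustrative assumptions rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ElmanContextCell(nn.Module):
    """Elman-style recurrent cell with optional self-connected context units.

    A sketch only: sizes, names, and the learnable self-connection gain
    `alpha` are illustrative assumptions, not the paper's exact setup.
    """

    def __init__(self, n_in, n_hidden, self_connect=True):
        super().__init__()
        self.in2hid = nn.Linear(n_in, n_hidden)
        self.ctx2hid = nn.Linear(n_hidden, n_hidden, bias=False)
        # Self-connection gain on the context units (0 recovers a plain Elman net).
        self.alpha = nn.Parameter(torch.tensor(0.5)) if self_connect else 0.0
        self.out = nn.Linear(n_hidden, 1)

    def forward(self, u_seq):
        # u_seq: (T, n_in) input sequence; returns (T, 1) predicted outputs.
        context = torch.zeros(self.in2hid.out_features)
        y = []
        for u_t in u_seq:
            hidden = torch.tanh(self.in2hid(u_t) + self.ctx2hid(context))
            # Context copies the hidden state, plus a self-feedback term.
            context = hidden + self.alpha * context
            y.append(self.out(hidden))
        return torch.stack(y)
```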

System Identification Using Recurrent Neural Network

A system identification problem can be formulated as an optimization task in which the objective is to find a model and a set of parameters that minimize the prediction error between the measured data and the model output. Most existing system identification approaches are highly analytical and based on mathematical derivation of the system's model. System identification is one of the most interesting applications for adaptive algorithms. We have proposed a recurrent neural network (RNN) based adaptive algorithm, owing to its robustness and computational simplicity. Based on the error signal, the filter's coefficients are updated and corrected so that the output signal adapts to match the reference signal. The proposed method is suitable for non-linear system identification.
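
The adaptation scheme described above can be pictured as a simple error-driven training loop: the prediction error against the measured reference signal drives the updates of the model coefficients. The sketch below assumes gradient-based updates with Adam; the optimizer, learning rate, and function names are assumptions, not the paper's algorithm.

```python
import torch

def identify(model, u_seq, y_measured, epochs=200, lr=1e-2):
    """Error-driven adaptation sketch: the prediction error against the
    measured reference signal drives gradient updates of the model
    parameters. Optimizer choice and learning rate are assumptions."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        y_hat = model(u_seq)
        error = y_measured - y_hat          # prediction error (the adaptation signal)
        loss = (error ** 2).mean()          # mean squared prediction error
        loss.backward()
        opt.step()
    return model
```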

Nonlinear system identification using recurrent networks

Neural Networks, 1991

... The gantry crane system was chosen as the model for a simple non-linear time-varying system. ... Two major problems of interest, i.e., system identification and inverse system identification, were successfully solved using both recurrent and feedforward networks. ...

Nonlinear complex dynamic system identification based on a novel recurrent neural network

In this paper, a novel Modified Jordan Recurrent Neural Network (MJRNN) model is presented to identify complex nonlinear dynamical systems. Nonlinear dynamic system identification using artificial neural networks is one of the most commonly used methods in control system engineering, owing to the networks' modeling capabilities. The structure of the presented model is an extended version of the original Jordan recurrent neural network. The parameter update equations are obtained using the back-propagation algorithm, the most frequently used learning approach, to train the proposed model's parameters. The effectiveness of the suggested neural network is evaluated in comparison with other neural network models, namely the Jordan recurrent neural network (JRNN), Elman recurrent neural network (ERNN), diagonal recurrent neural network (DRNN), and feedforward neural network (FFNN). The robustness of the proposed model is also tested under parameter variation ...
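
For orientation, the sketch below shows a plain Jordan recurrent network, in which the context is driven by the previous output rather than by the hidden state (as in an Elman net). The paper's modifications to this base structure are not reproduced; all sizes and names here are illustrative.

```python
import torch
import torch.nn as nn

class JordanRNN(nn.Module):
    """Base Jordan recurrent network: the context is driven by the previous
    *output* (unlike Elman, where it copies the hidden state). The paper's
    modifications to this structure are not reproduced here."""

    def __init__(self, n_in, n_hidden):
        super().__init__()
        self.in2hid = nn.Linear(n_in, n_hidden)
        self.ctx2hid = nn.Linear(1, n_hidden, bias=False)
        self.hid2out = nn.Linear(n_hidden, 1)

    def forward(self, u_seq):
        # u_seq: (T, n_in) input sequence; returns (T, 1) predicted outputs.
        y_prev = torch.zeros(1)
        outputs = []
        for u_t in u_seq:
            h = torch.tanh(self.in2hid(u_t) + self.ctx2hid(y_prev))
            y_prev = self.hid2out(h)
            outputs.append(y_prev)
        return torch.stack(outputs)
```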

Deep transfer learning for system identification using long short-term memory neural networks

arXiv (Cornell University), 2022

Recurrent neural networks (RNNs) have many advantages over more traditional system identification techniques. They may be applied to linear and nonlinear systems, and they require fewer modeling assumptions. However, these neural network models may also need larger amounts of data to learn and generalize, and neural network training is a time-consuming process. Hence, building upon long short-term memory (LSTM) neural networks, this paper proposes using two types of deep transfer learning, namely parameter fine-tuning and freezing, to reduce the data and computation requirements for system identification. We apply these techniques to identify two dynamical systems, namely a second-order linear system and a Wiener-Hammerstein nonlinear system. Results show that, compared with direct learning, our method accelerates learning by 10% to 50% and also saves data and computing resources.
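
A minimal sketch of how such parameter freezing and fine-tuning might be set up in practice is given below; the model architecture, which layers are frozen, and the optimizer settings are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn as nn

class LSTMModel(nn.Module):
    # Illustrative source/target model: one LSTM layer plus a linear readout.
    def __init__(self, n_in=1, n_hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, 1)

    def forward(self, u):                 # u: (batch, T, n_in)
        h, _ = self.lstm(u)
        return self.readout(h)            # (batch, T, 1)

def transfer(pretrained, freeze_lstm=True):
    """Transfer-learning sketch: either freeze the LSTM weights and retrain
    only the readout, or fine-tune everything from the pretrained values.
    This is an assumed recipe, not the paper's exact procedure."""
    model = pretrained
    if freeze_lstm:
        for p in model.lstm.parameters():
            p.requires_grad = False       # "freezing": keep source-system dynamics
    # Fine-tuning: pass only the trainable parameters to the optimizer.
    trainable = [p for p in model.parameters() if p.requires_grad]
    return model, torch.optim.Adam(trainable, lr=1e-3)
```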

Deep learning with transfer functions: new applications in system identification

IFAC-PapersOnLine, 2021

This paper presents a linear dynamical operator described in terms of a rational transfer function, endowed with a well-defined and efficient back-propagation behavior for automatic derivatives computation. The operator enables end-to-end training of structured networks containing linear transfer functions and other differentiable units by exploiting standard deep learning software. Two relevant applications of the operator in system identification are presented. The first is the integration of prediction error methods into deep learning: the dynamical operator is included as the last layer of a neural network in order to obtain the optimal one-step-ahead prediction error. The second considers identification of general block-oriented models from quantized data. These block-oriented models are constructed by combining linear dynamical operators with static nonlinearities described as standard feed-forward neural networks. A custom loss function corresponding to the log-likelihood of quantized output observations is defined. For gradient-based optimization, the derivatives of the log-likelihood are computed by applying the back-propagation algorithm through the whole network. Two system identification benchmarks are used to show the effectiveness of the proposed methodologies.
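
The sketch below shows one simple way such a learnable linear dynamical operator could be realized: a discrete-time rational transfer function G(z) = B(z)/A(z) applied as an IIR recursion, with derivatives obtained by plain automatic differentiation through the time loop rather than the paper's efficient custom backward pass. Coefficient counts and initialization are assumptions.

```python
import torch
import torch.nn as nn

class LinearDynamicalOperator(nn.Module):
    """Learnable discrete-time transfer function G(z) = B(z)/A(z), run as an
    IIR recursion. Sketch only: the paper derives an efficient custom
    backward pass, while here plain autograd differentiates through the loop."""

    def __init__(self, nb=3, na=2):
        super().__init__()
        self.b = nn.Parameter(0.01 * torch.randn(nb))   # numerator coefficients
        self.a = nn.Parameter(0.01 * torch.randn(na))   # denominator coefficients (monic A)

    def forward(self, u):                               # u: (T,) input sequence
        T = u.shape[0]
        y = []
        for t in range(T):
            # Feedforward part: B(z) acting on past inputs.
            acc = sum(self.b[i] * u[t - i] for i in range(len(self.b)) if t - i >= 0)
            # Feedback part: A(z) acting on past outputs.
            acc = acc - sum(self.a[j] * y[t - j - 1] for j in range(len(self.a)) if t - j - 1 >= 0)
            y.append(acc)
        return torch.stack(y)
```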

A novel Deep Neural Network architecture for non-linear system identification

IFAC-PapersOnLine, 2021

We present a novel Deep Neural Network (DNN) architecture for non-linear system identification. We foster generalization by constraining the DNN's representational power: inspired by fading memory systems, we introduce inductive bias (on the architecture) and regularization (on the loss function). This architecture allows for automatic complexity selection based solely on the available data, reducing the number of hyper-parameters that must be chosen by the user. Exploiting the highly parallelizable DNN framework (based on stochastic optimization methods), we successfully apply our method to large-scale datasets.
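
The abstract does not detail the architecture, but one simple way to encode a fading-memory inductive bias, shown below purely for illustration, is to feed a finite window of past inputs to a feedforward network and penalize dependence on older samples more heavily. This is an assumed reading, not the paper's construction.

```python
import torch
import torch.nn as nn

class FadingMemoryNet(nn.Module):
    """Illustrative fading-memory model (not the paper's architecture): the
    output depends on a finite window of past inputs, and a lag-weighted
    penalty discourages reliance on older samples."""

    def __init__(self, window=20, n_hidden=32, decay=0.9):
        super().__init__()
        self.window = window
        self.net = nn.Sequential(
            nn.Linear(window, n_hidden), nn.Tanh(), nn.Linear(n_hidden, 1)
        )
        # Penalty weights grow with lag, encoding the fading-memory bias.
        self.register_buffer(
            "lag_penalty", decay ** (-torch.arange(window, dtype=torch.float))
        )

    def forward(self, u_window):          # u_window: (batch, window), newest sample first
        return self.net(u_window)

    def regularizer(self):
        w_in = self.net[0].weight                      # (n_hidden, window)
        return (self.lag_penalty * w_in ** 2).sum()    # heavier penalty on older lags
```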

On the adaptation of recurrent neural networks for system identification

2022

This paper presents a transfer learning approach which enables fast and efficient adaptation of Recurrent Neural Network (RNN) models of dynamical systems. A nominal RNN model is first identified using available measurements. The system dynamics are then assumed to change, leading to an unacceptable degradation of the nominal model performance on the perturbed system. To cope with the mismatch, the model is augmented with an additive correction term trained on fresh data from the new dynamic regime. The correction term is learned through a Jacobian Feature Regression (JFR) method defined in terms of the features spanned by the model's Jacobian with respect to its nominal parameters. A non-parametric view of the approach is also proposed, which extends recent work on Gaussian Process (GP) with Neural Tangent Kernel (NTK-GP) to the RNN case (RNTK-GP). This can be more efficient for very large networks or when only few data points are available. Implementation aspects for fast and ...
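
A small-scale sketch of the Jacobian Feature Regression idea is given below: the features are the gradients of the nominal model's output with respect to its parameters, and an additive linear correction in that feature space is fitted to fresh data by a plain least-squares solve. Function names, the unregularized solve, and the per-sample gradient loop are assumptions; the paper's efficient implementation details and the RNTK-GP variant are not reproduced here.

```python
import torch

def jacobian_features(model, u_seq):
    """Build the JFR feature matrix: row t holds d y_hat[t] / d theta,
    evaluated at the nominal parameters. Small-scale sketch only."""
    params = [p for p in model.parameters()]
    y_hat = model(u_seq).reshape(-1)
    rows = []
    for t in range(y_hat.shape[0]):
        grads = torch.autograd.grad(y_hat[t], params, retain_graph=True)
        rows.append(torch.cat([g.reshape(-1) for g in grads]))
    return torch.stack(rows), y_hat

def fit_correction(model, u_seq, y_new):
    # Additive correction: y_corrected = y_nominal + Phi @ delta, with delta
    # fitted on fresh data from the new regime (regularization could be added).
    phi, y_nom = jacobian_features(model, u_seq)
    residual = (y_new.reshape(-1) - y_nom).detach()
    delta = torch.linalg.lstsq(phi.detach(), residual.unsqueeze(1)).solution
    return delta
```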