LSTMLayer - Long short-term memory (LSTM) layer for recurrent neural network (RNN) - MATLAB ([original](https://in.mathworks.com/help/deeplearning/ref/nnet.cnn.layer.lstmlayer.html))
Long short-term memory (LSTM) layer for recurrent neural network (RNN)
Description
An LSTM layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data.
The layer performs additive interactions, which can help improve gradient flow over long sequences during training.
Properties
LSTM
Number of hidden units (also known as the hidden size), specified as a positive integer.
The number of hidden units corresponds to the amount of information that the layer remembers between time steps (the hidden state). The hidden state can contain information from all the previous time steps, regardless of the sequence length. If the number of hidden units is too large, then the layer can overfit to the training data. The hidden state does not limit the number of time steps that the layer processes in an iteration.
The layer outputs data with NumHiddenUnits channels.
To set this property, use the numHiddenUnits argument when you create the LSTMLayer object. After you create a LSTMLayer object, this property is read-only.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Output mode, specified as one of these values:
- "sequence" — Output the complete sequence.
- "last" — Output the last time step of the sequence.
The LSTMLayer object stores this property as a character vector.
To set this property, use the corresponding name-value argument when you create the LSTMLayer object. After you create a LSTMLayer object, this property is read-only.
This property is read-only.
Flag for state inputs to the layer, specified as 0 (false) or 1 (true).
If the HasStateInputs property is 0 (false), then the layer has one input with the name "in", which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.
If the HasStateInputs property is 1 (true), then the layer has three inputs with the names "in", "hidden", and "cell", which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.
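As a quick sketch (the hidden size is chosen arbitrarily), enabling state inputs changes the layer input names:

```matlab
% With state inputs enabled, the layer exposes "hidden" and "cell" inputs
% instead of using its HiddenState and CellState properties.
layer = lstmLayer(100,HasStateInputs=true);
layer.InputNames   % {'in'}  {'hidden'}  {'cell'}
```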
This property is read-only.
Flag for state outputs from the layer, specified as 0 (false) or 1 (true).
If the HasStateOutputs property is 0 (false), then the layer has one output with the name "out", which corresponds to the output data.
If the HasStateOutputs property is 1 (true), then the layer has three outputs with the names "out", "hidden", and "cell", which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values that it computes.
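Similarly, a minimal sketch showing the effect on the output names:

```matlab
% With state outputs enabled, the layer also emits the states it computes.
layer = lstmLayer(100,HasStateOutputs=true);
layer.OutputNames   % {'out'}  {'hidden'}  {'cell'}
```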
This property is read-only.
Input size, specified as a positive integer or "auto". If InputSize is "auto", then the software automatically assigns the input size at training time.
If InputSize is "auto", then the LSTMLayer object stores this property as a character vector.
Data Types: double | char | string
Activations
This property is read-only.
Activation function to update the cell and hidden state, specified as one of these values:
- "tanh" — Use the hyperbolic tangent function (tanh).
- "softsign" — Use the softsign function, $\operatorname{softsign}(x) = \frac{x}{1+|x|}$.
- "relu" (since R2024a) — Use the rectified linear unit (ReLU) function, $\operatorname{ReLU}(x) = \begin{cases} x & x > 0 \\ 0 & x \le 0 \end{cases}$.
The software uses this option as the function $\sigma_c$ in the calculations to update the cell and hidden state.
For more information on how an LSTM layer uses activation functions, see Long Short-Term Memory Layer.
The LSTMLayer object stores this property as a character vector.
Activation function to apply to the gates, specified as one of these values:
- "sigmoid" — Use the sigmoid function, $\sigma(x) = (1 + e^{-x})^{-1}$.
- "hard-sigmoid" — Use the hard sigmoid function,
$$\sigma(x) = \begin{cases} 0 & x < -2.5 \\ 0.2x + 0.5 & -2.5 \le x \le 2.5 \\ 1 & x > 2.5 \end{cases}$$
The software uses this option as the function $\sigma_g$ in the calculations for the layer gates.
The LSTMLayer object stores this property as a character vector.
To set this property, use the corresponding name-value argument when you create the LSTMLayer object. After you create a LSTMLayer object, this property is read-only.
State
Cell state to use in the layer operation, specified as a NumHiddenUnits-by-1 numeric vector. This value corresponds to the initial cell state when data is passed to the layer.
After you set this property manually, calls to the resetState function set the cell state to this value.
If HasStateInputs is 1 (true), then the CellState property must be empty.
Data Types: single | double
Hidden state to use in the layer operation, specified as a NumHiddenUnits-by-1 numeric vector. This value corresponds to the initial hidden state when data is passed to the layer.
After you set this property manually, calls to the resetState function set the hidden state to this value.
If HasStateInputs is 1 (true), then the HiddenState property must be empty.
Data Types: single | double
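A brief sketch (sizes and values chosen for illustration) of setting the initial states manually:

```matlab
% Specify custom initial states. After setting these manually, calls to
% the resetState function restore the states to these values.
numHiddenUnits = 100;
layer = lstmLayer(numHiddenUnits);
layer.HiddenState = 0.01*ones(numHiddenUnits,1);
layer.CellState   = zeros(numHiddenUnits,1);
```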
Parameters and Initialization
Function to initialize the input weights, specified as one of the following:
- "glorot" — Initialize the input weights with the Glorot initializer [2] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(InputSize + numOut), where numOut = 4*NumHiddenUnits.
- "he" — Initialize the input weights with the He initializer [3]. The He initializer samples from a normal distribution with zero mean and variance 2/InputSize.
- "orthogonal" — Initialize the input weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [4].
- "narrow-normal" — Initialize the input weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.
- "zeros" — Initialize the input weights with zeros.
- "ones" — Initialize the input weights with ones.
- Function handle — Initialize the input weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the input weights.
The layer only initializes the input weights when the InputWeights property is empty.
The LSTMLayer object stores this property as a character vector or a function handle.
Data Types: char | string | function_handle
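As a sketch of the function-handle option (initFcn is a name chosen here for illustration):

```matlab
% Custom initializer: sz is the size of the input weights, which is
% [4*NumHiddenUnits InputSize] for this layer.
initFcn = @(sz) 0.01*randn(sz);
layer = lstmLayer(100,InputWeightsInitializer=initFcn);
```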
Function to initialize the recurrent weights, specified as one of the following:
- "orthogonal" — Initialize the recurrent weights with Q, the orthogonal matrix given by the QR decomposition of Z = QR for a random matrix Z sampled from a unit normal distribution [4].
- "glorot" — Initialize the recurrent weights with the Glorot initializer [2] (also known as the Xavier initializer). The Glorot initializer independently samples from a uniform distribution with zero mean and variance 2/(numIn + numOut), where numIn = NumHiddenUnits and numOut = 4*NumHiddenUnits.
- "he" — Initialize the recurrent weights with the He initializer [3]. The He initializer samples from a normal distribution with zero mean and variance 2/NumHiddenUnits.
- "narrow-normal" — Initialize the recurrent weights by independently sampling from a normal distribution with zero mean and standard deviation 0.01.
- "zeros" — Initialize the recurrent weights with zeros.
- "ones" — Initialize the recurrent weights with ones.
- Function handle — Initialize the recurrent weights with a custom function. If you specify a function handle, then the function must be of the form weights = func(sz), where sz is the size of the recurrent weights.
The layer only initializes the recurrent weights when the RecurrentWeights property is empty.
The LSTMLayer object stores this property as a character vector or a function handle.
Data Types: char | string | function_handle
Function to initialize the bias, specified as one of these values:
- "unit-forget-gate" — Initialize the forget gate bias with ones and the remaining biases with zeros.
- "narrow-normal" — Initialize the bias by independently sampling from a normal distribution with zero mean and a standard deviation of 0.01.
- "ones" — Initialize the bias with ones.
- Function handle — Initialize the bias with a custom function. If you specify a function handle, then the function must be of the form bias = func(sz), where sz is the size of the bias.
The layer only initializes the bias when the Bias property is empty.
The LSTMLayer object stores this property as a character vector or a function handle.
Data Types: char | string | function_handle
Input weights, specified as a matrix.
The input weight matrix is a concatenation of the four input weight matrices for the components (gates) in the LSTM layer. The four matrices are concatenated vertically in the following order:
- Input gate
- Forget gate
- Cell candidate
- Output gate
The input weights are learnable parameters. When you train a neural network using the trainnet function, if InputWeights is nonempty, then the software uses the InputWeights property as the initial value. If InputWeights is empty, then the software uses the initializer specified by InputWeightsInitializer.
At training time, InputWeights is a 4*NumHiddenUnits-by-InputSize matrix.
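Given this layout, a hedged sketch (assuming layer is a trained LSTMLayer) of recovering the per-gate blocks:

```matlab
% Slice the vertically concatenated input weight matrix into its
% four gate blocks, each NumHiddenUnits-by-InputSize.
nh = layer.NumHiddenUnits;
Wi = layer.InputWeights(1:nh,:);          % input gate
Wf = layer.InputWeights(nh+1:2*nh,:);     % forget gate
Wg = layer.InputWeights(2*nh+1:3*nh,:);   % cell candidate
Wo = layer.InputWeights(3*nh+1:4*nh,:);   % output gate
```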
Recurrent weights, specified as a matrix.
The recurrent weight matrix is a concatenation of the four recurrent weight matrices for the components (gates) in the LSTM layer. The four matrices are vertically concatenated in the following order:
- Input gate
- Forget gate
- Cell candidate
- Output gate
The recurrent weights are learnable parameters. When you train an RNN using the trainnet function, if RecurrentWeights is nonempty, then the software uses the RecurrentWeights property as the initial value. If RecurrentWeights is empty, then the software uses the initializer specified by RecurrentWeightsInitializer.
At training time, RecurrentWeights is a 4*NumHiddenUnits-by-NumHiddenUnits matrix.
Layer biases, specified as a numeric vector.
The bias vector is a concatenation of the four bias vectors for the components (gates) in the layer. The layer vertically concatenates the four vectors in this order:
- Input gate
- Forget gate
- Cell candidate
- Output gate
The layer biases are learnable parameters. When you train a neural network, if Bias is nonempty, then the trainnet function uses the Bias property as the initial value. If Bias is empty, then the software uses the initializer specified by BiasInitializer.
At training time, Bias is a 4*NumHiddenUnits-by-1 numeric vector.
Learning Rate and Regularization
Learning rate factor for the input weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.
The software multiplies this factor by the global learning rate to determine the learning rate for the input weights of the layer. For example, if InputWeightsLearnRateFactor is 2, then the learning rate for the input weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify with the trainingOptions function.
To control the value of the learning rate factor for the four individual matrices in InputWeights, specify a 1-by-4 vector. The entries of InputWeightsLearnRateFactor correspond to the learning rate factor of these components:
- Input gate
- Forget gate
- Cell candidate
- Output gate
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example: [1 2 1 1]
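For instance, a minimal sketch of a per-gate setting (hidden size chosen arbitrarily):

```matlab
% Double the learning rate for the forget-gate input weights while
% keeping the other gates at the global rate.
layer = lstmLayer(100);
layer.InputWeightsLearnRateFactor = [1 2 1 1];
```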
Learning rate factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.
The software multiplies this factor by the global learning rate to determine the learning rate for the recurrent weights of the layer. For example, if RecurrentWeightsLearnRateFactor is 2, then the learning rate for the recurrent weights of the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.
To control the value of the learning rate factor for the four individual matrices in RecurrentWeights, specify a 1-by-4 vector. The entries of RecurrentWeightsLearnRateFactor correspond to the learning rate factor of these components:
- Input gate
- Forget gate
- Cell candidate
- Output gate
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example: [1 2 1 1]
Learning rate factor for the biases, specified as a nonnegative scalar or a 1-by-4 numeric vector.
The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.
To control the value of the learning rate factor for the four individual vectors in Bias, specify a 1-by-4 vector. The entries of BiasLearnRateFactor correspond to the learning rate factor of these components:
- Input gate
- Forget gate
- Cell candidate
- Output gate
To specify the same value for all the vectors, specify a nonnegative scalar.
Example: 2
Example: [1 2 1 1]
L2 regularization factor for the input weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the input weights of the layer. For example, if InputWeightsL2Factor is 2, then the L2 regularization factor for the input weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings you specify using the trainingOptions function.
To control the value of the L2 regularization factor for the four individual matrices in InputWeights, specify a 1-by-4 vector. The entries of InputWeightsL2Factor correspond to the L2 regularization factor of these components:
- Input gate
- Forget gate
- Cell candidate
- Output gate
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example: [1 2 1 1]
L2 regularization factor for the recurrent weights, specified as a nonnegative scalar or a 1-by-4 numeric vector.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization factor for the recurrent weights of the layer. For example, if RecurrentWeightsL2Factor is 2, then the L2 regularization factor for the recurrent weights of the layer is twice the current global L2 regularization factor. The software determines the L2 regularization factor based on the settings you specify using the trainingOptions function.
To control the value of the L2 regularization factor for the four individual matrices in RecurrentWeights, specify a 1-by-4 vector. The entries of RecurrentWeightsL2Factor correspond to the L2 regularization factor of these components:
- Input gate
- Forget gate
- Cell candidate
- Output gate
To specify the same value for all the matrices, specify a nonnegative scalar.
Example: 2
Example: [1 2 1 1]
L2 regularization factor for the biases, specified as a nonnegative scalar or a 1-by-4 numeric vector.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.
To control the value of the L2 regularization factor for the four individual vectors in Bias, specify a 1-by-4 vector. The entries of BiasL2Factor correspond to the L2 regularization factor of these components:
- Input gate
- Forget gate
- Cell candidate
- Output gate
To specify the same value for all the vectors, specify a nonnegative scalar.
Example: 2
Example: [1 2 1 1]
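As a sketch (factors chosen for illustration), the L2 factors follow the same per-gate pattern as the learning rate factors:

```matlab
% Apply stronger L2 regularization to the forget-gate weight blocks.
layer = lstmLayer(100);
layer.InputWeightsL2Factor     = [1 2 1 1];
layer.RecurrentWeightsL2Factor = [1 2 1 1];
```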
Layer
Layer name, specified as a character vector or a string scalar.
Data Types: char | string
This property is read-only.
Number of inputs of the layer.
If the HasStateInputs property is 0 (false), then the layer has one input with the name "in", which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.
If the HasStateInputs property is 1 (true), then the layer has three inputs with the names "in", "hidden", and "cell", which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.
Data Types: double
This property is read-only.
Input names of the layer.
If the HasStateInputs property is 0 (false), then the layer has one input with the name "in", which corresponds to the input data. In this case, the layer uses the HiddenState and CellState properties for the layer operation.
If the HasStateInputs property is 1 (true), then the layer has three inputs with the names "in", "hidden", and "cell", which correspond to the input data, hidden state, and cell state, respectively. In this case, the layer uses the values passed to these inputs for the layer operation. If HasStateInputs is 1 (true), then the HiddenState and CellState properties must be empty.
The LSTMLayer object stores this property as a cell array of character vectors.
This property is read-only.
Number of outputs of the layer.
If the HasStateOutputs property is 0 (false), then the layer has one output with the name "out", which corresponds to the output data.
If the HasStateOutputs property is 1 (true), then the layer has three outputs with the names "out", "hidden", and "cell", which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values that it computes.
Data Types: double
This property is read-only.
Output names of the layer.
If the HasStateOutputs property is 0 (false), then the layer has one output with the name "out", which corresponds to the output data.
If the HasStateOutputs property is 1 (true), then the layer has three outputs with the names "out", "hidden", and "cell", which correspond to the output data, hidden state, and cell state, respectively. In this case, the layer also outputs the state values that it computes.
The LSTMLayer object stores this property as a cell array of character vectors.
Examples
Create an LSTM layer with the name "lstm1" and 100 hidden units.
```matlab
layer = lstmLayer(100,Name="lstm1")
```
```
layer = 
  LSTMLayer with properties:

                       Name: 'lstm1'
                 InputNames: {'in'}
                OutputNames: {'out'}
                  NumInputs: 1
                 NumOutputs: 1
             HasStateInputs: 0
            HasStateOutputs: 0

   Hyperparameters
                  InputSize: 'auto'
             NumHiddenUnits: 100
                 OutputMode: 'sequence'
    StateActivationFunction: 'tanh'
     GateActivationFunction: 'sigmoid'

   Learnable Parameters
               InputWeights: []
           RecurrentWeights: []
                       Bias: []

   State Parameters
                HiddenState: []
                  CellState: []
```
Include an LSTM layer in a Layer array.
```matlab
inputSize = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(inputSize)
    lstmLayer(numHiddenUnits)
    fullyConnectedLayer(numClasses)
    softmaxLayer]
```
```
layers = 
  4×1 Layer array with layers:

     1   ''   Sequence Input    Sequence input with 12 dimensions
     2   ''   LSTM              LSTM with 100 hidden units
     3   ''   Fully Connected   9 fully connected layer
     4   ''   Softmax           softmax
```
Train a deep learning LSTM network for sequence-to-label classification.
Load the example data from WaveformData.mat. The data is a numObservations-by-1 cell array of sequences, where numObservations is the number of sequences. Each sequence is a numTimeSteps-by-numChannels numeric array, where numTimeSteps is the number of time steps of the sequence and numChannels is the number of channels of the sequence.
Visualize some of the sequences in a plot.
```matlab
load WaveformData

numChannels = size(data{1},2);

idx = [3 4 5 12];
figure
tiledlayout(2,2)
for i = 1:4
    nexttile
    stackedplot(data{idx(i)},DisplayLabels="Channel "+string(1:numChannels))
    xlabel("Time Step")
    title("Class: " + string(labels(idx(i))))
end
```
View the class names.
```matlab
classNames = categories(labels)
```
```
classNames = 4×1 cell
    {'Sawtooth'}
    {'Sine'    }
    {'Square'  }
    {'Triangle'}
```
Set aside data for testing. Partition the data into a training set containing 90% of the data and a test set containing the remaining 10% of the data. To partition the data, use the trainingPartitions function, attached to this example as a supporting file. To access this file, open the example as a live script.
```matlab
numObservations = numel(data);
[idxTrain,idxTest] = trainingPartitions(numObservations,[0.9 0.1]);

XTrain = data(idxTrain);
TTrain = labels(idxTrain);

XTest = data(idxTest);
TTest = labels(idxTest);
```
Define the LSTM network architecture. Specify the input size as the number of channels of the input data. Specify an LSTM layer to have 120 hidden units and to output the last element of the sequence. Finally, include a fully connected layer with an output size that matches the number of classes, followed by a softmax layer.
```matlab
numHiddenUnits = 120;
numClasses = numel(categories(TTrain));

layers = [ ...
    sequenceInputLayer(numChannels)
    lstmLayer(numHiddenUnits,OutputMode="last")
    fullyConnectedLayer(numClasses)
    softmaxLayer]
```
```
layers = 
  4×1 Layer array with layers:

     1   ''   Sequence Input    Sequence input with 3 dimensions
     2   ''   LSTM              LSTM with 120 hidden units
     3   ''   Fully Connected   4 fully connected layer
     4   ''   Softmax           softmax
```
Specify the training options. Train using the Adam solver with a learn rate of 0.01 and a gradient threshold of 1. Set the maximum number of epochs to 200 and shuffle every epoch. The software, by default, trains on a GPU if one is available. Using a GPU requires Parallel Computing Toolbox and a supported GPU device. For information on supported devices, see GPU Computing Requirements (Parallel Computing Toolbox).
```matlab
options = trainingOptions("adam", ...
    MaxEpochs=200, ...
    InitialLearnRate=0.01, ...
    Shuffle="every-epoch", ...
    GradientThreshold=1, ...
    Verbose=false, ...
    Metrics="accuracy", ...
    Plots="training-progress");
```
Train the LSTM network using the trainnet function. For classification, use cross-entropy loss.
```matlab
net = trainnet(XTrain,TTrain,layers,"crossentropy",options);
```
Classify the test data. Specify the same mini-batch size used for training.
```matlab
scores = minibatchpredict(net,XTest);
YTest = scores2label(scores,classNames);
```
Calculate the classification accuracy of the predictions.
```matlab
acc = mean(YTest == TTest)
```
Display the classification results in a confusion chart.
```matlab
figure
confusionchart(TTest,YTest)
```
To create an LSTM network for sequence-to-label classification, create a layer array containing a sequence input layer, an LSTM layer, a fully connected layer, and a softmax layer.
Set the size of the sequence input layer to the number of features of the input data. Set the size of the fully connected layer to the number of classes. You do not need to specify the sequence length.
For the LSTM layer, specify the number of hidden units and the output mode "last".
```matlab
numFeatures = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,OutputMode="last")
    fullyConnectedLayer(numClasses)
    softmaxLayer];
```
For an example showing how to train an LSTM network for sequence-to-label classification and classify new data, see Sequence Classification Using Deep Learning.
To create an LSTM network for sequence-to-sequence classification, use the same architecture as for sequence-to-label classification, but set the output mode of the LSTM layer to "sequence".
```matlab
numFeatures = 12;
numHiddenUnits = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,OutputMode="sequence")
    fullyConnectedLayer(numClasses)
    softmaxLayer];
```
To create an LSTM network for sequence-to-one regression, create a layer array containing a sequence input layer, an LSTM layer, and a fully connected layer.
Set the size of the sequence input layer to the number of features of the input data. Set the size of the fully connected layer to the number of responses. You do not need to specify the sequence length.
For the LSTM layer, specify the number of hidden units and the output mode "last".
```matlab
numFeatures = 12;
numHiddenUnits = 125;
numResponses = 1;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,OutputMode="last")
    fullyConnectedLayer(numResponses)];
```
To create an LSTM network for sequence-to-sequence regression, use the same architecture as for sequence-to-one regression, but set the output mode of the LSTM layer to "sequence".
```matlab
numFeatures = 12;
numHiddenUnits = 125;
numResponses = 1;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits,OutputMode="sequence")
    fullyConnectedLayer(numResponses)];
```
For an example showing how to train an LSTM network for sequence-to-sequence regression and predict on new data, see Sequence-to-Sequence Regression Using Deep Learning.
You can make LSTM networks deeper by inserting extra LSTM layers with the output mode "sequence" before the LSTM layer. To prevent overfitting, you can insert dropout layers after the LSTM layers.
For sequence-to-label classification networks, the output mode of the last LSTM layer must be "last".
```matlab
numFeatures = 12;
numHiddenUnits1 = 125;
numHiddenUnits2 = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits1,OutputMode="sequence")
    dropoutLayer(0.2)
    lstmLayer(numHiddenUnits2,OutputMode="last")
    dropoutLayer(0.2)
    fullyConnectedLayer(numClasses)
    softmaxLayer];
```
For sequence-to-sequence classification networks, the output mode of the last LSTM layer must be "sequence".
```matlab
numFeatures = 12;
numHiddenUnits1 = 125;
numHiddenUnits2 = 100;
numClasses = 9;

layers = [ ...
    sequenceInputLayer(numFeatures)
    lstmLayer(numHiddenUnits1,OutputMode="sequence")
    dropoutLayer(0.2)
    lstmLayer(numHiddenUnits2,OutputMode="sequence")
    dropoutLayer(0.2)
    fullyConnectedLayer(numClasses)
    softmaxLayer];
```
Algorithms
An LSTM layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data.
The state of the layer consists of the hidden state (also known as the output state) and the cell state. The hidden state at time step t contains the output of the LSTM layer for this time step. The cell state contains information learned from the previous time steps. At each time step, the layer adds information to or removes information from the cell state. The layer controls these updates using gates.
These components control the cell state and hidden state of the layer.
Component | Purpose |
---|---|
Input gate (i) | Control level of cell state update |
Forget gate (f) | Control level of cell state reset (forget) |
Cell candidate (g) | Add information to cell state |
Output gate (o) | Control level of cell state added to hidden state |
This diagram illustrates the flow of data at time step t and shows how the gates forget, update, and output the cell and hidden states.
The learnable weights of an LSTM layer are the input weights W (InputWeights), the recurrent weights R (RecurrentWeights), and the bias b (Bias). The matrices W, R, and b are concatenations of the input weights, the recurrent weights, and the bias of each component, respectively. The layer concatenates the matrices according to these equations:

$$W = \begin{bmatrix} W_i \\ W_f \\ W_g \\ W_o \end{bmatrix}, \qquad R = \begin{bmatrix} R_i \\ R_f \\ R_g \\ R_o \end{bmatrix}, \qquad b = \begin{bmatrix} b_i \\ b_f \\ b_g \\ b_o \end{bmatrix},$$

where $i$, $f$, $g$, and $o$ denote the input gate, forget gate, cell candidate, and output gate, respectively.
The cell state at time step $t$ is given by

$$c_t = f_t \odot c_{t-1} + i_t \odot g_t,$$

where $\odot$ denotes the Hadamard product (element-wise multiplication of vectors).
The hidden state at time step $t$ is given by

$$h_t = o_t \odot \sigma_c(c_t),$$

where $\sigma_c$ denotes the state activation function. By default, the lstmLayer function uses the hyperbolic tangent function (tanh) to compute the state activation function.
These formulas describe the components at time step t.
Component | Formula |
---|---|
Input gate | $i_t = \sigma_g(W_i x_t + R_i h_{t-1} + b_i)$ |
Forget gate | $f_t = \sigma_g(W_f x_t + R_f h_{t-1} + b_f)$ |
Cell candidate | $g_t = \sigma_c(W_g x_t + R_g h_{t-1} + b_g)$ |
Output gate | $o_t = \sigma_g(W_o x_t + R_o h_{t-1} + b_o)$ |
In these calculations, $\sigma_g$ denotes the gate activation function. By default, the lstmLayer function uses the sigmoid function, given by $\sigma(x) = (1 + e^{-x})^{-1}$, to compute the gate activation function.
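To make these equations concrete, here is a minimal numeric sketch of one LSTM time step (all variable names are local illustrations, not toolbox API):

```matlab
% One LSTM step with the default activations (sigmoid gates, tanh state).
nh = 4; ni = 3;                           % hidden units, input channels
W = randn(4*nh,ni);                       % input weights [Wi; Wf; Wg; Wo]
R = randn(4*nh,nh);                       % recurrent weights [Ri; Rf; Rg; Ro]
b = zeros(4*nh,1);                        % bias [bi; bf; bg; bo]
x = randn(ni,1);                          % input at time step t
hPrev = zeros(nh,1); cPrev = zeros(nh,1); % previous hidden and cell states

sigma = @(z) 1./(1+exp(-z));              % gate activation, sigma_g
z = W*x + R*hPrev + b;                    % all four pre-activations at once
i = sigma(z(1:nh));                       % input gate
f = sigma(z(nh+1:2*nh));                  % forget gate
g = tanh(z(2*nh+1:3*nh));                 % cell candidate (sigma_c)
o = sigma(z(3*nh+1:4*nh));                % output gate
c = f.*cPrev + i.*g;                      % cell state update
h = o.*tanh(c);                           % hidden state update
```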
Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray objects. The format of a dlarray object is a string of characters in which each character describes the corresponding dimension of the data. The format consists of one or more of these characters:
- "S" — Spatial
- "C" — Channel
- "B" — Batch
- "T" — Time
- "U" — Unspecified
For example, you can describe 2-D image data that is represented as a 4-D array, where the first two dimensions correspond to the spatial dimensions of the images, the third dimension corresponds to the channels of the images, and the fourth dimension corresponds to the batch dimension, as having the format "SSCB" (spatial, spatial, channel, batch).
You can interact with these dlarray objects in automatic differentiation workflows, such as those for developing a custom layer, using a functionLayer object, or using the forward and predict functions with dlnetwork objects.
This table shows the supported input formats of LSTMLayer objects and the corresponding output format. If the software passes the output of the layer to a custom layer that does not inherit from the nnet.layer.Formattable class, or a FunctionLayer object with the Formattable property set to 0 (false), then the layer receives an unformatted dlarray object with dimensions ordered according to the formats in this table. The formats listed here are only a subset. The layer may support additional formats, such as formats with additional "S" (spatial) or "U" (unspecified) dimensions.
Input Format | OutputMode | Output Format |
---|---|---|
"CB" (channel, batch) | "sequence" | "CB" (channel, batch) |
"CB" (channel, batch) | "last" | "CB" (channel, batch) |
"CBT" (channel, batch, time) | "sequence" | "CBT" (channel, batch, time) |
"CBT" (channel, batch, time) | "last" | "CB" (channel, batch) |
"SB" (spatial, batch) | "sequence" | "CB" (channel, batch) |
"SB" (spatial, batch) | "last" | "CB" (channel, batch) |
In dlnetwork objects, LSTMLayer objects also support these input and output format combinations.
Input Format | OutputMode | Output Format |
---|---|---|
"SCB" (spatial, channel, batch) | "sequence" | "CB" (channel, batch) |
"SCB" (spatial, channel, batch) | "last" | "CB" (channel, batch) |
"SSCB" (spatial, spatial, channel, batch) | "sequence" | "CB" (channel, batch) |
"SSCB" (spatial, spatial, channel, batch) | "last" | "CB" (channel, batch) |
"SSSCB" (spatial, spatial, spatial, channel, batch) | "sequence" | "CB" (channel, batch) |
"SSSCB" (spatial, spatial, spatial, channel, batch) | "last" | "CB" (channel, batch) |
"SCBT" (spatial, channel, batch, time) | "sequence" | "CBT" (channel, batch, time) |
"SCBT" (spatial, channel, batch, time) | "last" | "CB" (channel, batch) |
"SSCBT" (spatial, spatial, channel, batch, time) | "sequence" | "CBT" (channel, batch, time) |
"SSCBT" (spatial, spatial, channel, batch, time) | "last" | "CB" (channel, batch) |
"SSSCBT" (spatial, spatial, spatial, channel, batch, time) | "sequence" | "CBT" (channel, batch, time) |
"SSSCBT" (spatial, spatial, spatial, channel, batch, time) | "last" | "CB" (channel, batch) |
"SC" (spatial, channel) | "sequence" | "CU" (channel, unspecified) |
"SC" (spatial, channel) | "last" | "CU" (channel, unspecified) |
"SSC" (spatial, spatial, channel) | "sequence" | "CU" (channel, unspecified) |
"SSC" (spatial, spatial, channel) | "last" | "CU" (channel, unspecified) |
"SSSC" (spatial, spatial, spatial, channel) | "sequence" | "CU" (channel, unspecified) |
"SSSC" (spatial, spatial, spatial, channel) | "last" | "CU" (channel, unspecified) |
"CT" (channel, time) | "sequence" | "CT" (channel, time) |
"CT" (channel, time) | "last" | "CU" (channel, unspecified) |
"SCT" (spatial, channel, time) | "sequence" | "CT" (channel, time) |
"SCT" (spatial, channel, time) | "last" | "CU" (channel, unspecified) |
"SSCT" (spatial, spatial, channel, time) | "sequence" | "CT" (channel, time) |
"SSCT" (spatial, spatial, channel, time) | "last" | "CU" (channel, unspecified) |
"SSSCT" (spatial, spatial, spatial, channel, time) | "sequence" | "CT" (channel, time) |
"SSSCT" (spatial, spatial, spatial, channel, time) | "last" | "CU" (channel, unspecified) |
"SSB" (spatial, spatial, batch) | "sequence" | "CB" (channel, batch) |
"SSB" (spatial, spatial, batch) | "last" | "CB" (channel, batch) |
"SSSB" (spatial, spatial, spatial, batch) | "sequence" | "CB" (channel, batch) |
"SSSB" (spatial, spatial, spatial, batch) | "last" | "CB" (channel, batch) |
"BT" (batch, time) | "sequence" | "CBT" (channel, batch, time) |
"BT" (batch, time) | "last" | "CB" (channel, batch) |
"SBT" (spatial, batch, time) | "sequence" | "CBT" (channel, batch, time) |
"SBT" (spatial, batch, time) | "last" | "CB" (channel, batch) |
"SSBT" (spatial, spatial, batch, time) | "sequence" | "CBT" (channel, batch, time) |
"SSBT" (spatial, spatial, batch, time) | "last" | "CB" (channel, batch) |
"SSSBT" (spatial, spatial, spatial, batch, time) | "sequence" | "CBT" (channel, batch, time) |
"SSSBT" (spatial, spatial, spatial, batch, time) | "last" | "CB" (channel, batch) |
If the HasStateInputs property is 1 (true), then the layer has two additional inputs with the names "hidden" and "cell", which correspond to the hidden state and cell state, respectively. These additional inputs expect input format "CB" (channel, batch).
If the HasStateOutputs property is 1 (true), then the layer has two additional outputs with the names "hidden" and "cell", which correspond to the hidden state and cell state, respectively. These additional outputs have output format "CB" (channel, batch).
References
[1] Hochreiter, Sepp, and Jürgen Schmidhuber. "Long Short-Term Memory." Neural Computation 9, no. 8 (1997): 1735–1780.
[2] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010. https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf
[3] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In 2015 IEEE International Conference on Computer Vision (ICCV), 1026–1034. Santiago, Chile: IEEE, 2015. https://doi.org/10.1109/ICCV.2015.123
[4] Saxe, Andrew M., James L. McClelland, and Surya Ganguli. "Exact Solutions to the Nonlinear Dynamics of Learning in Deep Linear Neural Networks." Preprint, submitted February 19, 2014. https://arxiv.org/abs/1312.6120.
Extended Capabilities
C/C++ Code Generation
Usage notes and limitations:
- Code generation does not support passing dlarray objects with unspecified (U) dimensions to this layer.
- For code generation in general, the HasStateInputs and HasStateOutputs properties must be set to 0 (false).
- For code generation, you must pass a dlarray object with a channel (C) dimension as the input to this layer. For example, code generation supports data formats such as "SSC" or "SSCBT".
- When generating code with Intel® MKL-DNN or ARM® Compute Library:
  - The StateActivationFunction property must be set to "tanh".
  - The GateActivationFunction property must be set to "sigmoid".
GPU Code Generation
Usage notes and limitations:
- Code generation does not support passing dlarray objects with unspecified (U) dimensions to this layer.
- For code generation, you must pass a dlarray object with a channel (C) dimension as the input to this layer. For example, code generation supports data formats such as "SSC" or "SSCBT".
- The HasStateInputs and HasStateOutputs properties must be set to 0 (false).
- When generating code with the NVIDIA® TensorRT or CUDA® Deep Neural Network (cuDNN) library:
  - The StateActivationFunction property must be set to "tanh".
  - The GateActivationFunction property must be set to "sigmoid".
Version History
Introduced in R2017b
Starting in R2019a, the software, by default, initializes the layer input weights of this layer using the Glorot initializer. This behavior helps stabilize training and usually reduces the training time of deep neural networks.
In previous releases, the software, by default, initializes the layer input weights by sampling from a normal distribution with zero mean and variance 0.01. To reproduce this behavior, set the InputWeightsInitializer option of the layer to "narrow-normal".
Starting in R2019a, the software, by default, initializes the layer recurrent weights of this layer with Q, the orthogonal matrix given by the QR decomposition of Z = Q R for a random matrix Z sampled from a unit normal distribution. This behavior helps stabilize training and usually reduces the training time of deep neural networks.
In previous releases, the software, by default, initializes the layer recurrent weights by sampling from a normal distribution with zero mean and variance 0.01. To reproduce this behavior, set the RecurrentWeightsInitializer option of the layer to "narrow-normal".