List of Deep Learning Layer Blocks and Subsystems - MATLAB & Simulink
This page provides a list of deep learning layer blocks and subsystems in Simulink®. To export a MATLAB® object-based network to a Simulink model that uses deep learning layer blocks and subsystems, use the exportNetworkToSimulink function. Use layer blocks for networks that have a small number of learnable parameters and that you intend to deploy to embedded hardware.
Deep Learning Layer Blocks
The exportNetworkToSimulink function generates these blocks and subsystems to represent layers in a network. Each block and subsystem corresponds to a layer object in MATLAB. For each layer in a network, the function generates the corresponding block or subsystem. If no corresponding block or subsystem exists, then the function generates a placeholder subsystem that contains an Assertion (Simulink) block.
Some layer blocks and subsystems have reduced functionality compared to the corresponding layer objects. The Limitations column in the tables in this section lists conditions where the blocks and subsystems do not have parity with the corresponding layer objects. Unless otherwise specified in the Limitations column, the exportNetworkToSimulink function throws an error for layer objects that have unsupported configurations.
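For context, a minimal export workflow looks like the following sketch. It assumes you already have a trained dlnetwork object named net; the MAT file and variable names are hypothetical.

```matlab
% Load a trained dlnetwork object. The file name and variable name are hypothetical.
load("trainedNetwork.mat", "net");

% Export the network to a Simulink model built from deep learning layer blocks
% and subsystems. Layers without a corresponding block become placeholder subsystems.
exportNetworkToSimulink(net);
```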
A clipped ReLU layer performs a threshold operation, where any input value less than zero is set to zero and any value above the clipping ceiling is set to that clipping ceiling.
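As a quick illustration of the operation (not the block implementation), the clipped ReLU computation on a numeric array can be sketched as follows; the ceiling and input values are arbitrary.

```matlab
clipCeiling = 10;          % clipping ceiling (example value)
x = [-2 0.5 3 42];         % example input values

% Clipped ReLU: set negative values to zero, then clip at the ceiling.
y = min(max(x, 0), clipCeiling);   % y = [0 0.5 3 10]
```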
An addition layer adds inputs from multiple neural network layers element-wise.
The additionLayer object accepts scalar and vector inputs and expands those inputs to have the same dimensions as the matrix inputs, but the Addition Layer block supports expanding only scalar inputs.
A concatenation layer takes inputs and concatenates them along a specified dimension. The inputs must have the same size in all dimensions except the concatenation dimension.
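For reference, concatenation along a specified dimension corresponds to the MATLAB cat function; this sketch concatenates two arrays along the first dimension, with arbitrary example sizes.

```matlab
A = rand(8, 1);    % example input with 8 elements along dimension 1
B = rand(4, 1);    % example input with 4 elements along dimension 1

% Concatenate along dimension 1; all other dimensions must match.
C = cat(1, A, B);  % size(C) is [12 1]
```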
A multiplication layer multiplies inputs from multiple neural network layers element-wise.
The multiplicationLayer object accepts scalar and vector inputs and expands those inputs to have the same dimensions as the matrix inputs, but the Multiplication Layer block supports expanding only scalar inputs.
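To illustrate the scalar-expansion behavior that the Addition Layer and Multiplication Layer blocks do support, here is a minimal sketch of element-wise addition and multiplication with a scalar operand; the values are arbitrary.

```matlab
A = [1 2; 3 4];   % example matrix input
s = 2;            % scalar input, expanded to the size of A

% Element-wise operations with scalar expansion, as the blocks support.
sumOut  = A + s;    % [3 4; 5 6]
prodOut = A .* s;   % [2 4; 6 8]
```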
A 1-D convolutional layer applies sliding convolutional filters to 1-D input.
The Layer parameter has limited support for the 'manual' padding mode and does not support the 'causal' padding mode. It is recommended to use a convolution layer object that has the PaddingMode property set to 'same'. The Layer parameter does not support convolution layer objects that have the PaddingValue property set to "symmetric-exclude-edge". If you specify an object that uses that padding value, the block produces a warning and uses the value "symmetric-include-edge" instead. The Layer parameter does not support convolution layer objects that have the DilationFactor property set to a value other than 1.
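A hedged sketch of the recommended configuration: creating a 1-D convolution layer whose PaddingMode is 'same' by using the Padding name-value argument. The filter size and number of filters are arbitrary.

```matlab
% 1-D convolution layer with "same" padding, so the exported block does not
% rely on the 'manual' or 'causal' padding modes. DilationFactor keeps its default of 1.
layer = convolution1dLayer(3, 16, Padding="same");

% PaddingMode is derived from the Padding argument and should be 'same'.
disp(layer.PaddingMode)
```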
The Rescale-Symmetric 1D block inputs 1-dimensional data to a neural network and rescales the input to be in the range [-1, 1].
The Layer parameter does not support objects that have the SplitComplexInputs property set to 1 (true). The 2D and 3D blocks support only input data that has 1 or 3 channels, corresponding to grayscale or RGB image data, respectively.
The Zerocenter 1D block inputs 1-dimensional data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block.
imageInputLayer that has the Normalization property set to "zerocenter"
The Zerocenter 2D block inputs 2-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block.
image3dInputLayer that has the Normalization property set to "zerocenter"
The Zerocenter 3D block inputs 3-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block.
The Zscore 1D block inputs 1-dimensional data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block, then dividing by the value of the StandardDeviation property.
imageInputLayer that has the Normalization property set to "zscore"
The Zscore 2D block inputs 2-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block, then dividing by the value of the StandardDeviation property.
The Zscore 3D block inputs 3-dimensional image data to a neural network and rescales the input by subtracting the value of the Mean property of the layer object that you pass into the block, then dividing by the value of the StandardDeviation property.
Exporting networks with input layer objects that have the SplitComplexInputs property set to 1 (true) is not supported.
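The rescale and normalization computations that these input blocks perform can be sketched in MATLAB as follows; minVal, maxVal, meanVal, and stdVal stand in for the Min, Max, Mean, and StandardDeviation properties of the layer objects, and the example values are arbitrary.

```matlab
x = single([0 5 10]);            % example input data

% Rescale-Symmetric: map [Min, Max] to the range [-1, 1].
minVal = 0; maxVal = 10;
yRescale = 2*(x - minVal)./(maxVal - minVal) - 1;   % [-1 0 1]

% Zerocenter: subtract the Mean property.
meanVal = 5;
yZerocenter = x - meanVal;                          % [-5 0 5]

% Zscore: subtract Mean, then divide by StandardDeviation.
stdVal = 2.5;
yZscore = (x - meanVal)./stdVal;                    % [-2 0 2]
```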
A layer normalization layer normalizes a mini-batch of data across all channels.
If you set the Data format parameter to SSC or SSSC, the Layer parameter does not support layerNormalizationLayer objects that have the OperationDimension set to 'channel-only'.
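As a rough sketch of the underlying computation (not the block implementation), layer normalization of a single observation across the channel dimension can be written as follows; epsilon, scale, and offset stand in for the layer's Epsilon, Scale, and Offset properties, and the sizes are arbitrary.

```matlab
x = randn(16, 1, "single");       % example activations for one observation (16 channels)
epsilon = 1e-5;                   % stands in for the Epsilon property
scale   = ones(16, 1, "single");  % stands in for the Scale learnable parameter
offset  = zeros(16, 1, "single"); % stands in for the Offset learnable parameter

% Normalize across the channel dimension, then apply scale and offset.
mu = mean(x);
sigma2 = var(x, 1);               % population variance
y = scale .* (x - mu) ./ sqrt(sigma2 + epsilon) + offset;
```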
A 1-D average pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the average of each region.
The Layer parameter has limited support for the 'manual' padding mode. It is recommended to use an average pooling layer object that has the PaddingMode property set to 'same'. The Layer parameter does not support average pooling layer objects that have the PaddingValue property set to "mean". If you specify an object that uses that padding value, the block produces a warning and uses the value 0 instead.
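A hedged sketch of the recommended average pooling configuration, using an arbitrary pool size and stride and "same" padding so that the exported block avoids the 'manual' padding mode and the "mean" padding value.

```matlab
% 1-D average pooling layer with "same" padding; PaddingValue keeps its default of 0.
layer = averagePooling1dLayer(2, Padding="same", Stride=2);
```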
A 3-D average pooling layer performs downsampling by dividing three-dimensional input into cuboidal pooling regions, then computing the average values of each region.
A 1-D max pooling layer performs downsampling by dividing the input into 1-D pooling regions, then computing the maximum of each region.
The Layer parameter has limited support for the 'manual' padding mode. It is recommended to use a max pooling layer object that has the PaddingMode property set to 'same'.
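Similarly, a hedged sketch of a max pooling configuration that avoids the 'manual' padding mode; the pool size and stride are arbitrary.

```matlab
% 1-D max pooling layer with "same" padding.
layer = maxPooling1dLayer(2, Padding="same", Stride=2);
```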
An LSTM layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data. The layer performs additive interactions, which can help improve gradient flow over long sequences during training.
The Layer parameter does not accept lstmLayer or lstmProjectedLayer objects that have the HasStateInputs or HasStateOutputs properties set to 1 (true).
An LSTM projected layer is an RNN layer that learns long-term dependencies between time steps in time-series and sequence data using projected learnable weights.
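A minimal sketch of exportable LSTM configurations: both layers keep their default HasStateInputs and HasStateOutputs values of 0 (false). The numbers of hidden units and projector sizes are arbitrary.

```matlab
% LSTM layer with the state ports disabled (the default configuration).
layer = lstmLayer(128);

% LSTM projected layer follows the same rule; the projector sizes are arbitrary.
projLayer = lstmProjectedLayer(128, 32, 16);
```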
At training time, a dropout layer randomly sets input elements to zero with a given probability. At prediction time, the output of a dropout layer is equal to its input. Because deep learning layer blocks can be used only for prediction, this block has no effect and serves only to represent dropoutLayer objects in the output of the exportNetworkToSimulink function.
Neural ODE Layers
Subsystem: Integrator block as ODE solver and ODE network represented as layer blocks
Corresponding Layer Object: neuralODELayer
Description: A neural ODE layer learns to represent dynamic behavior as a system of ODEs.
Limitations: The subsystem supports continuous-time integration only. For discrete-time integration (for example, for fixed-point conversion applications), replace the Integrator block in the subsystem with a Discrete-Time Integrator block.
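A hedged sketch of a network that this subsystem could represent. It assumes a neuralODELayer whose dynamics are defined by an inner dlnetwork of fully connected and tanh layers; the layer sizes and time span are arbitrary.

```matlab
% Inner ODE network: defines the dynamics that the ODE solver integrates.
odeNet = dlnetwork([
    fullyConnectedLayer(16)
    tanhLayer
    fullyConnectedLayer(16)], Initialize=false);

% Outer network with a neural ODE layer integrating over the time span [0, 1].
layers = [
    featureInputLayer(16)
    neuralODELayer(odeNet, linspace(0, 1, 10))
    fullyConnectedLayer(4)];

net = dlnetwork(layers);

% Exporting generates layer blocks plus the neural ODE subsystem described above.
exportNetworkToSimulink(net);
```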