ProjectedLayer
Compressed neural network layer using projection
Since R2023b
Description
A projected layer is a compressed neural network layer resulting from projection.
Properties
OriginalClass
This property is read-only.
Class of the original layer, returned as a character vector.
Example: 'nnet.cnn.layer.LSTMLayer'
Data Types: char
LearnablesReduction
This property is read-only.
Proportion of learnables removed in the layer, returned as a scalar in the interval [0,1].
Data Types: double
Network
This property is read-only.
Neural network that represents the projection, returned as a dlnetwork object.
The neural network that represents the projection depends on the type of layer.
For more information, see Projected Layer. To replace the ProjectedLayer
objects in a neural network with the equivalent network that represents the projection, use the unpackProjectedLayers function.
InputSize
This property is read-only.
Number of input channels, returned as a positive integer.
Data Types: double
OutputSize
This property is read-only.
Number of output channels, returned as a positive integer.
Data Types: double
InputProjectorSize
This property is read-only.
Number of columns of the input projector, returned as a positive integer.
The input projector is the matrix Q used to project the layer input. For more information, see Projected Layer.
Data Types: double
OutputProjectorSize
This property is read-only.
Number of columns of the output projector, returned as a positive integer.
The output projector is the matrix Q used to project the layer output. For more information, see Projected Layer.
Data Types: double
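As a sketch of how these properties fit together, you can query them directly on a compressed network. This assumes netProjected is a dlnetwork returned by compressNetworkUsingProjection and that its second layer is a ProjectedLayer, as in the example on this page:

% Sketch: inspect projection details of a compressed layer. Assumes
% netProjected is a dlnetwork returned by compressNetworkUsingProjection
% and that its second layer is a ProjectedLayer.
layer = netProjected.Layers(2);

layer.LearnablesReduction    % proportion of learnables removed, in [0,1]
layer.InputProjectorSize     % number of columns of the input projector
layer.OutputProjectorSize    % number of columns of the output projector
layer.Network                % dlnetwork that represents the projection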
Name
Layer name, specified as a character vector or a string scalar. For Layer array input, the trainnet and dlnetwork functions automatically assign new unique names to layers that have the name "".
When you compress a neural network using the compressNetworkUsingProjection function, the function replaces projectable layers with ProjectedLayer objects with the same name.
The ProjectedLayer object stores this property as a character vector.
Data Types: char | string
NumInputs
This property is read-only.
Number of inputs to the layer, returned as a positive integer.
Data Types: double
InputNames
This property is read-only.
Input names, returned as a cell array of character vectors.
Data Types: cell
NumOutputs
This property is read-only.
Number of outputs from the layer, returned as a positive integer.
Data Types: double
OutputNames
This property is read-only.
Output names, returned as a cell array of character vectors.
Data Types: cell
Examples
Load the pretrained network in dlnetJapaneseVowels and the training data in JapaneseVowelsTrainData.
load dlnetJapaneseVowels
load JapaneseVowelsTrainData
Create a mini-batch queue containing the training data. To create a mini-batch queue from in-memory data, convert the sequences to an array datastore.
adsXTrain = arrayDatastore(XTrain,OutputType="same");
Create the minibatchqueue object.
- Specify a mini-batch size of 16.
- Preprocess the mini-batches using the preprocessMiniBatchPredictors function, listed in the Mini-Batch Predictors Preprocessing Function section of the example.
- Specify that the output data has the format "CTB" (channel, time, batch).
mbq = minibatchqueue(adsXTrain, ...
    MiniBatchSize=16, ...
    MiniBatchFcn=@preprocessMiniBatchPredictors, ...
    MiniBatchFormat="CTB");
Compress the network.
[netProjected,info] = compressNetworkUsingProjection(net,mbq);
Compressed network has 83.4% fewer learnable parameters. Projection compressed 2 layers: "lstm","fc"
View the network layers.
netProjected.Layers
ans = 4×1 Layer array with layers:
1 'sequenceinput' Sequence Input Sequence input with 12 dimensions
2 'lstm' Projected Layer Projected LSTM with 100 hidden units
3 'fc' Projected Layer Projected fully connected layer with output size 9
4 'softmax' Softmax softmax
View the projected LSTM layer. The LearnablesReduction property shows the proportion of learnables removed in the layer. The Network property contains the neural network that represents the projection.
netProjected.Layers(2)
ans = 
  ProjectedLayer with properties:

                   Name: 'lstm'
          OriginalClass: 'nnet.cnn.layer.LSTMLayer'
    LearnablesReduction: 0.8408
              InputSize: 12
             OutputSize: 100

   Hyperparameters
     InputProjectorSize: 8
    OutputProjectorSize: 7

   Learnable Parameters
                Network: [1×1 dlnetwork]

   State Parameters
                Network: [1×1 dlnetwork]

   Network Learnable Parameters
    Network/lstm/InputWeights        400×8  dlarray
    Network/lstm/RecurrentWeights    400×7  dlarray
    Network/lstm/Bias                400×1  dlarray
    Network/lstm/InputProjector       12×8  dlarray
    Network/lstm/OutputProjector     100×7  dlarray

   Network State Parameters
    Network/lstm/HiddenState    100×1 dlarray
    Network/lstm/CellState      100×1 dlarray

  Show all properties
Mini-Batch Predictors Preprocessing Function
The preprocessMiniBatchPredictors
function preprocesses a mini-batch of predictors by extracting the sequence data from the input cell array and truncating them along the second dimension so that they have the same length.
Note: Do not pad sequence data when doing the PCA step for projection as this can negatively impact the analysis. Instead, truncate mini-batches of data to have the same length or use mini-batches of size 1.
function X = preprocessMiniBatchPredictors(dataX)
X = padsequences(dataX,2,Length="shortest");
end
Load the pretrained network in dlnetProjectedJapaneseVowels
.
load dlnetProjectedJapaneseVowels
View the network properties.
net
net = dlnetwork with properties:
Layers: [4×1 nnet.cnn.layer.Layer]
Connections: [3×2 table]
Learnables: [9×3 table]
State: [2×3 table]
InputNames: {'sequenceinput'}
OutputNames: {'softmax'}
Initialized: 1
View summary with summary.
View the network layers. The network has two projected layers.
net.Layers
ans = 4×1 Layer array with layers:
1 'sequenceinput' Sequence Input Sequence input with 12 dimensions
2 'lstm' Projected Layer Projected LSTM with 100 hidden units
3 'fc' Projected Layer Projected fully connected layer with output size 9
4 'softmax' Softmax softmax
Unpack the projected layers.
netUnpacked = unpackProjectedLayers(net)
netUnpacked = dlnetwork with properties:
Layers: [5×1 nnet.cnn.layer.Layer]
Connections: [4×2 table]
Learnables: [9×3 table]
State: [2×3 table]
InputNames: {'sequenceinput'}
OutputNames: {'softmax'}
Initialized: 1
View summary with summary.
View the unpacked network layers. The unpacked network has a projected LSTM layer and two fully connected layers in place of the projected layers.
netUnpacked.Layers
ans = 5×1 Layer array with layers:
1 'sequenceinput' Sequence Input Sequence input with 12 dimensions
2 'lstm' Projected LSTM Projected LSTM layer with 100 hidden units, an output projector size of 7, and an input projector size of 8
3 'fc_proj_in' Fully Connected 4 fully connected layer
4 'fc_proj_out' Fully Connected 9 fully connected layer
5 'softmax' Softmax softmax
Tips
- Code generation does not support ProjectedLayer objects. To replace ProjectedLayer objects in a neural network with the equivalent neural network that represents the projection, use the unpackProjectedLayers function or set the UnpackProjectedLayers option of the compressNetworkUsingProjection function to 1 (true).
Algorithms
To compress a deep learning network, you can use projected layers. A projected layer is a type of deep learning layer that enables compression by reducing the number of stored learnable parameters. The layer introduces learnable projector matrices Q, replaces multiplications of the form Wx, where W is a learnable matrix, with the multiplication WQQ⊤x, and stores Q and W′ = WQ instead of storing W. Projecting x into a lower-dimensional space using Q typically requires less memory to store the learnable parameters and can yield similarly strong prediction accuracy.
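The operations above can be sketched numerically. This is an illustrative example with hypothetical sizes, not the internal implementation of the software:

% Sketch: replace y = W*x with y ≈ (W*Q)*(Q'*x), storing W*Q and Q
% instead of W. Hypothetical sizes: W is 100-by-100, Q has 8 columns.
W = randn(100,100);        % original learnable matrix
Q = orth(randn(100,8));    % projector with orthonormal columns

Wp = W*Q;                  % stored instead of W (100-by-8)

x = randn(100,1);
yFull = W*x;               % original multiplication
yProj = Wp*(Q'*x);         % projected multiplication, W*Q*Q'*x

numel(W)                   % 10000 learnables without projection
numel(Wp) + numel(Q)       % 1600 learnables with projection

The approximation is exact when x lies in the column space of Q; in practice, the software chooses Q so that the projection preserves most of the variance of the layer activations.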
For some types of layers, you can represent a projected layer as a neural network containing two or more layers with fewer learnable parameters. For example, you can represent a projected convolution layer as three convolution layers that perform the input projection, convolution, and output projection operations independently. When you compress a network using the compressNetworkUsingProjection function, the software replaces layers that support projection with ProjectedLayer objects that contain the equivalent neural network. To replace ProjectedLayer objects in a neural network with the equivalent neural network that represents the projection, use the unpackProjectedLayers function or set the UnpackProjectedLayers option of the compressNetworkUsingProjection function to 1 (true).
The compressNetworkUsingProjection function supports projecting these layers:
- convolution1dLayer (since R2024b)
- convolution2dLayer
- fullyConnectedLayer
- lstmLayer
- gruLayer
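As a minimal sketch, you can build a network from these projectable layers and compress it. This assumes mbq is a minibatchqueue of representative training data, as in the example on this page:

% Sketch: build a network from layers that support projection, then
% compress it. Assumes mbq is a minibatchqueue of representative data.
layers = [
    sequenceInputLayer(12)
    lstmLayer(100,OutputMode="last")
    fullyConnectedLayer(9)
    softmaxLayer];
net = dlnetwork(layers);

netCompressed = compressNetworkUsingProjection(net,mbq);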
The compressNetworkUsingProjection function replaces projectable layers with ProjectedLayer objects. A ProjectedLayer object contains information about the projection operation and contains the neural network that represents the projection.
The neural network that represents the projection depends on the type of layer.
Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray objects. The format of a dlarray object is a string of characters in which each character describes the corresponding dimension of the data. The format consists of one or more of these characters:
- "S" — Spatial
- "C" — Channel
- "B" — Batch
- "T" — Time
- "U" — Unspecified
For example, you can describe 2-D image data that is represented as a 4-D array, where the first two dimensions correspond to the spatial dimensions of the images, the third dimension corresponds to the channels of the images, and the fourth dimension corresponds to the batch dimension, as having the format "SSCB" (spatial, spatial, channel, batch).
You can interact with these dlarray objects in automatic differentiation workflows, such as those for developing a custom layer, using a functionLayer object, or using the forward and predict functions with dlnetwork objects.
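For example, a minimal sketch of labeling sequence data with the "CTB" format before passing it to a network (the sizes are hypothetical):

% Sketch: label a 12-channel, 20-time-step, batch-of-16 array as "CTB"
% (channel, time, batch).
X = rand(12,20,16);
dlX = dlarray(X,"CTB");
dims(dlX)                 % returns 'CTB'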
To learn more about the supported input and output formats of a ProjectedLayer object, see the documentation for the layer given by the OriginalClass property.
Version History
Introduced in R2023b