SelfAttentionLayer - Self-attention layer - MATLAB
Self-attention layer
Since R2023a
Description
A self-attention layer computes single-head or multihead self-attention of its input.
The layer:
- Computes the queries, keys, and values from the input
- Computes the scaled dot-product attention across heads using the queries, keys, and values
- Merges the results from the heads
- Performs a linear transformation on the merged result
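For example, this minimal sketch (the layer sizes are illustrative choices, not requirements) creates a self-attention layer, wraps it in a dlnetwork with a sequence input, and applies it to random data:
% Create a 4-head self-attention layer with 12 key and query channels
% and apply it to random sequence data.
net = dlnetwork([
    sequenceInputLayer(12)
    selfAttentionLayer(4,12)]);
X = dlarray(rand(12,2,10),"CBT");   % channel-by-batch-by-time data
Y = predict(net,X);                 % output keeps the "CBT" format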
Creation
Syntax
Description
layer = selfAttentionLayer(numHeads,numKeyChannels) creates a self-attention layer and sets the NumHeads and NumKeyChannels properties.
layer = selfAttentionLayer(numHeads,numKeyChannels,Name=Value) also sets the optional NumValueChannels, OutputSize, HasPaddingMaskInput, AttentionMask, DropoutProbability, HasScoresOutput, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value arguments.
Properties
Self-Attention
NumHeads — Number of attention heads
This property is read-only.
Number of attention heads, specified as a positive integer that evenly divides NumKeyChannels.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
NumKeyChannels — Number of key and query channels
This property is read-only.
Number of channels for the keys and queries, specified as a positive integer that is divisible by NumHeads.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
NumValueChannels — Number of value channels
Number of channels for the values, specified as one of these values:
- "auto" — Use NumKeyChannels.
- Positive integer — Use the specified number of channels. This value must be divisible by NumHeads.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
OutputSize — Number of output channels
Number of channels of the layer output, specified as one of these values:
- "auto" — Use the number of channels in the layer input.
- Positive integer — Use the specified number of channels.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
HasPaddingMaskInput — Flag indicating whether layer has padding mask input
Flag indicating whether the layer has an input that represents the padding mask, specified as 0 (false) or 1 (true).
If the HasPaddingMaskInput property is 0 (false), then the layer has one input with the name "in", which corresponds to the input data. In this case, the layer treats all elements as data.
If the HasPaddingMaskInput property is 1 (true), then the layer has two inputs with the names "in" and "mask", which correspond to the input data and the padding mask, respectively. In this case, the padding mask is an array of ones and zeros. The layer uses elements of the input when the corresponding mask element is one, and ignores them when the corresponding mask element is zero.
The format of the padding mask must match that of the input. The sizes of the "S" (spatial), "T" (time), and "B" (batch) dimensions of the padding mask must match the sizes of the corresponding dimensions in the input.
The padding mask can have any number of channels. The software uses only the values in the first channel to indicate padding values.
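A hedged sketch of forwarding data and a padding mask through a network follows; the sizes are illustrative, and it assumes the unconnected "mask" input of the layer becomes a second network input whose order matches net.InputNames:
layers = [
    sequenceInputLayer(12)
    selfAttentionLayer(4,12,HasPaddingMaskInput=true)];
net = dlnetwork(layers,Initialize=false);
X = dlarray(rand(12,2,10),"CBT");   % 12 channels, 2 observations, 10 steps
mask = ones(1,2,10);
mask(1,2,6:10) = 0;                 % last five steps of observation 2 are padding
mask = dlarray(mask,"CBT");
net = initialize(net,X,mask);       % example inputs in net.InputNames order
Y = predict(net,X,mask);            % attention ignores the masked steps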
AttentionMask — Attention mask
Mask preventing attention to elements in key-value pairs, specified as one of these values:
- "none" — Do not prevent attention to elements based on their positions. If HasPaddingMaskInput is 1 (true), then the layer prevents attention to padding elements only.
- "causal" — Prevent elements in position M from attending to elements in position N, where N is greater than M. Use this option for auto-regressive models.
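For example, a minimal sketch of a causal layer for auto-regressive modeling (sizes illustrative):
% Each position can attend only to itself and earlier positions.
layer = selfAttentionLayer(4,12,AttentionMask="causal");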
DropoutProbability — Probability of dropping out attention scores
Probability of dropping out attention scores, specified as a scalar in the range [0, 1).
During training, the software randomly sets values in the attention scores to zero using the specified probability. This dropout can encourage the model to learn more robust and generalizable representations by preventing it from relying too heavily on specific dependencies.
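For example, a minimal sketch (the probability 0.1 is an illustrative choice, not a recommendation):
% Randomly zero 10% of the attention scores during training only.
layer = selfAttentionLayer(4,12,DropoutProbability=0.1);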
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
HasScoresOutput — Flag indicating whether layer has scores output
Flag indicating whether the layer has an output that represents the scores (also known as the attention weights), specified as 0 (false) or 1 (true).
If the HasScoresOutput property is 0 (false), then the layer has one output with the name "out", which corresponds to the output data.
If the HasScoresOutput property is 1 (true), then the layer has two outputs with the names "out" and "scores", which correspond to the output data and the attention scores, respectively.
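A hedged sketch of retrieving the scores through a dlnetwork (sizes illustrative; outputs are returned in the order given by net.OutputNames):
layers = [
    sequenceInputLayer(12)
    selfAttentionLayer(4,12,HasScoresOutput=true)];
net = dlnetwork(layers);
X = dlarray(rand(12,1,8),"CBT");
[Y,scores] = predict(net,X);   % scores are the attention weights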
InputSize — Number of input channels
This property is read-only.
Number of input channels, specified as one of these values:
- "auto" — Automatically determine the number of input channels when you initialize the network.
- Positive integer — Configure the layer for the specified number of input channels. InputSize and the number of channels in the layer input data must match.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
Parameters and Initialization
WeightsInitializer — Function to initialize weights
Function to initialize the query, key, value, and output weights, specified as one of these values:
- "glorot" — Initialize the weights with the Glorot initializer (also known as the Xavier initializer) [2]. The Glorot initializer independently samples from a uniform distribution with zero mean and a variance of 2/(numIn + numOut). The values of numIn and numOut depend on the weight matrix:

Weight | numIn | numOut
---|---|---
Query | InputSize | NumKeyChannels
Key | InputSize | NumKeyChannels
Value | InputSize | NumValueChannels
Output | NumValueChannels | OutputSize

- "he" — Initialize the weights with the He initializer [3]. The He initializer samples from a normal distribution with zero mean and a variance of 2/numIn, where numIn depends on the weight matrix as given in the previous table.
- "narrow-normal" — Initialize the weights by independently sampling from a normal distribution with zero mean and a standard deviation of 0.01.
- "zeros" — Initialize the weights with zeros.
- "ones" — Initialize the weights with ones.
- Function handle — Initialize the weights with a custom function. If you specify a function handle, then the function must have the form weights = func(sz), where sz is the size of the weights. For an example, see Specify Custom Weight Initialization Function.
The layer only initializes the weights when the corresponding weights property is empty.
Data Types: char | string | function_handle
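For example, a minimal sketch of a custom initializer passed as a function handle (the 0.02 scale is an illustrative assumption):
initFcn = @(sz) 0.02*randn(sz);   % sample weights from N(0, 0.02^2)
layer = selfAttentionLayer(4,12,WeightsInitializer=initFcn);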
BiasInitializer — Function to initialize biases
Function to initialize the query, key, value, and output biases, specified as one of these values:
- "zeros" — Initialize the biases with zeros.
- "ones" — Initialize the biases with ones.
- "narrow-normal" — Initialize the biases by independently sampling from a normal distribution with zero mean and a standard deviation of 0.01.
- Function handle — Initialize the biases with a custom function. If you specify a function handle, then the function must have the form bias = func(sz), where sz is the size of the biases.
The layer only initializes the biases when the corresponding bias property is empty.
Data Types: char | string | function_handle
QueryWeights — Query weights
Query weights, specified as a NumKeyChannels-by-numInputChannels matrix or [], where numInputChannels is the number of channels in the layer input.
Data Types: single | double
KeyWeights — Key weights
Key weights, specified as a NumKeyChannels-by-numInputChannels matrix or [], where numInputChannels is the number of channels in the layer input.
Data Types: single | double
ValueWeights — Value weights
Value weights, specified as a NumValueChannels-by-numInputChannels matrix or [], where numInputChannels is the number of channels in the layer input.
Data Types: single | double
OutputWeights — Output weights
Output weights, specified as an OutputSize-by-NumValueChannels matrix or [].
Data Types: single | double
QueryBias — Query biases
Query biases, specified as a NumKeyChannels-by-1 vector or [].
Data Types: single | double
KeyBias — Key biases
Key biases, specified as a NumKeyChannels-by-1 vector or [].
Data Types: single | double
ValueBias — Value biases
Value biases, specified as a NumValueChannels-by-1 vector or [].
Data Types: single | double
OutputBias — Output biases
Output biases, specified as an OutputSize-by-1 vector or [].
Data Types: single | double
Learning Rate and Regularization
WeightLearnRateFactor — Learning rate factor for weights
Learning rate factor for the query, key, value, and output weights, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine the learning rate for the weights in this layer. For example, if WeightLearnRateFactor is 2, then the learning rate for the weights in this layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
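For example, a minimal sketch that doubles the learning rate of this layer's weights relative to the global rate from trainingOptions:
layer = selfAttentionLayer(4,12);
layer.WeightLearnRateFactor = 2;   % weights train at twice the global rate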
BiasLearnRateFactor — Learning rate factor for biases
Learning rate factor for the query, key, value, and output biases, specified as a nonnegative scalar.
The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in this layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
WeightL2Factor — L2 regularization factor for weights
L2 regularization factor for the query, key, value, and output weights, specified as a nonnegative scalar.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the weights in this layer. For example, if WeightL2Factor is 2, then the L2 regularization for the weights in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
BiasL2Factor — L2 regularization factor for biases
L2 regularization factor for the query, key, value, and output biases, specified as a nonnegative scalar.
The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
Layer
Name — Layer name
Layer name, specified as a character vector or a string scalar.
Data Types: char | string
NumInputs — Number of inputs
Number of inputs to the layer, returned as 1 or 2.
If the HasPaddingMaskInput property is 0 (false), then the layer has one input with the name "in", which corresponds to the input data. In this case, the layer treats all elements as data.
If the HasPaddingMaskInput property is 1 (true), then the layer has two inputs with the names "in" and "mask", which correspond to the input data and the padding mask, respectively. In this case, the padding mask is an array of ones and zeros. The layer uses elements of the input when the corresponding mask element is one, and ignores them when the corresponding mask element is zero.
The format of the padding mask must match that of the input. The sizes of the "S" (spatial), "T" (time), and "B" (batch) dimensions of the padding mask must match the sizes of the corresponding dimensions in the input.
The padding mask can have any number of channels. The software uses only the values in the first channel to indicate padding values.
Data Types: double
InputNames — Input names
Input names of the layer, returned as a cell array of character vectors.
If the HasPaddingMaskInput property is 0 (false), then the layer has one input with the name "in", which corresponds to the input data. In this case, the layer treats all elements as data.
If the HasPaddingMaskInput property is 1 (true), then the layer has two inputs with the names "in" and "mask", which correspond to the input data and the padding mask, respectively. In this case, the padding mask is an array of ones and zeros. The layer uses elements of the input when the corresponding mask element is one, and ignores them when the corresponding mask element is zero.
The format of the padding mask must match that of the input. The sizes of the "S" (spatial), "T" (time), and "B" (batch) dimensions of the padding mask must match the sizes of the corresponding dimensions in the input.
The padding mask can have any number of channels. The software uses only the values in the first channel to indicate padding values.
The SelfAttentionLayer
object stores this property as a cell array of character vectors.
NumOutputs — Number of outputs
This property is read-only.
Number of outputs of the layer, returned as 1 or 2.
If the HasScoresOutput property is 0 (false), then the layer has one output with the name "out", which corresponds to the output data.
If the HasScoresOutput property is 1 (true), then the layer has two outputs with the names "out" and "scores", which correspond to the output data and the attention scores, respectively.
Data Types: double
OutputNames — Output names
This property is read-only.
Output names of the layer.
If the HasScoresOutput property is 0 (false), then the layer has one output with the name "out", which corresponds to the output data.
If the HasScoresOutput property is 1 (true), then the layer has two outputs with the names "out" and "scores", which correspond to the output data and the attention scores, respectively.
The SelfAttentionLayer
object stores this property as a cell array of character vectors.
Examples
Create a self-attention layer with eight heads and 256 key and query channels.
layer = selfAttentionLayer(8,256)
layer =
  SelfAttentionLayer with properties:

                   Name: ''
          AttentionMask: 'none'
    HasPaddingMaskInput: 0
        HasScoresOutput: 0

   Hyperparameters
              InputSize: 'auto'
               NumHeads: 8
         NumKeyChannels: 256
       NumValueChannels: 'auto'
             OutputSize: 'auto'
     DropoutProbability: 0

   Learnable Parameters
           QueryWeights: []
             KeyWeights: []
           ValueWeights: []
          OutputWeights: []
              QueryBias: []
                KeyBias: []
              ValueBias: []
             OutputBias: []
Include a self-attention layer in a layer array.
layers = [
    sequenceInputLayer(12)
    selfAttentionLayer(4,12)
    layerNormalizationLayer
    fullyConnectedLayer(9)
    softmaxLayer];
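As a hedged follow-up sketch, you can assemble this array into an initialized network and inspect it (the summary display is assumed to be available in your release):
net = dlnetwork(layers);   % weight sizes inferred from the input layer
summary(net)               % show learnable parameters and layer information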
Algorithms
The attention operation focuses on parts of the input using weighted multiplication operations.
The single-head dot-product attention operation is given by
attention(Q,K,V) = dropout(softmax(mask(λQKᵀ, M)), p) V
where:
- Q denotes the queries.
- K denotes the keys.
- V denotes the values.
- λ denotes the scaling factor.
- M is a mask array of ones and zeros.
- p is the dropout probability.
The mask operation includes or excludes the values of the matrix multiplication by setting values of the input to −∞ for zero-valued mask elements. The mask is the union of the padding and attention masks. The softmax function normalizes the value of the input data across the channel dimension such that it sums to one. The dropout operation sets elements to zero with probability p.
The multihead self-attention operation for the input X is given by
multiheadSelfAttention(X) = concatenate(head1, …, headh) WO
where:
- h is the number of heads.
- WQ is a learnable projection matrix for the queries.
- WK is a learnable projection matrix for the keys.
- WV is a learnable projection matrix for the values.
- WO is a learnable projection matrix for the output.
Each weight matrix is composed of concatenated weight matrices Wi for each head. Each headi denotes the output of the head operation, given by
headi = attention(XWQi, XWKi, XWVi)
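As a minimal numeric sketch of the single-head formula above, using plain matrices with illustrative sizes (this illustrates the math, not the layer implementation; 1/sqrt(d) is a common choice of scaling factor):
% Single-head scaled dot-product attention on time-by-channel matrices.
T = 8; d = 12;
Q = randn(T,d); K = randn(T,d); V = randn(T,d);
lambda = 1/sqrt(d);              % scaling factor
A = lambda*(Q*K');               % T-by-T attention logits
A = exp(A - max(A,[],2));        % numerically stable softmax over keys
A = A./sum(A,2);                 % each row of attention scores sums to one
Y = A*V;                         % T-by-d attended output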
Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray objects. The format of a dlarray object is a string of characters in which each character describes the corresponding dimension of the data. The format consists of one or more of these characters:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, you can describe 2-D image data that is represented as a 4-D array, where the first two dimensions correspond to the spatial dimensions of the images, the third dimension corresponds to the channels of the images, and the fourth dimension corresponds to the batch dimension, as having the format "SSCB"
(spatial, spatial, channel, batch).
You can interact with these dlarray objects in automatic differentiation workflows, such as those for developing a custom layer, using a functionLayer object, or using the forward and predict functions with dlnetwork objects.
This table shows the supported input formats of SelfAttentionLayer objects and the corresponding output format. If the software passes the output of the layer to a custom layer that does not inherit from the nnet.layer.Formattable class, or to a FunctionLayer object with the Formattable property set to 0 (false), then the layer receives an unformatted dlarray object with dimensions ordered according to the formats in this table. The formats listed here are only a subset. The layer may support additional formats, such as formats with additional "S" (spatial) or "U" (unspecified) dimensions.
Input Format | Output Format
---|---
"CB" (channel, batch) | "CB" (channel, batch)
"SCB" (spatial, channel, batch) | "SCB" (spatial, channel, batch)
"CBT" (channel, batch, time) | "CBT" (channel, batch, time)
"SC" (spatial, channel) | "SC" (spatial, channel)
"CT" (channel, time) | "CT" (channel, time)
"SB" (spatial, batch) | "SCB" (spatial, channel, batch)
"BT" (batch, time) | "CBT" (channel, batch, time)
References
[1] Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. "Attention is all you need." In Advances in Neural Information Processing Systems, Vol. 30. Curran Associates, Inc., 2017. https://papers.nips.cc/paper/7181-attention-is-all-you-need.
[2] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010. https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf.
[3] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In 2015 IEEE International Conference on Computer Vision (ICCV), 1026–1034. Santiago, Chile: IEEE, 2015. https://doi.org/10.1109/ICCV.2015.123.
Extended Capabilities
Usage notes and limitations:
- Code generation is not supported when HasScoresOutput is set to true.
- Code generation does not support passing dlarray objects with unspecified (U) dimensions to this layer.
- For code generation, you must pass a dlarray object with a channel (C) dimension as the input to this layer. For example, code generation supports data formats such as "SSC" or "SSCBT".
Refer to the usage notes and limitations in the C/C++ Code Generation section. The same limitations apply to GPU code generation.
Version History
Introduced in R2023a