GroupNormalizationLayer - Group normalization layer - MATLAB

Group normalization layer

Since R2020b

Description

A group normalization layer normalizes a mini-batch of data across grouped subsets of channels for each observation independently. To speed up training of the convolutional neural network and reduce the sensitivity to network initialization, use group normalization layers between convolutional layers and nonlinearities, such as ReLU layers.

After normalization, the layer scales the input with a learnable scale factor γ and shifts it by a learnable offset β.

Creation

Syntax

Description

layer = groupNormalizationLayer(numGroups) creates a group normalization layer.

layer = groupNormalizationLayer(numGroups,Name=Value) creates a group normalization layer and sets the optional Epsilon, Parameters and Initialization, Learning Rate and Regularization, and Name properties using one or more name-value arguments.


Input Arguments


numGroups — Number of groups

positive integer | 'all-channels' | 'channel-wise'

Number of groups into which to divide the channels of the input data, specified as one of the following:

- Positive integer – Divide the incoming channels into the specified number of groups. The number of groups must divide the number of channels of the input data exactly.
- 'all-channels' – Group all channels into a single group. This operation is also known as layer normalization.
- 'channel-wise' – Treat all channels as separate groups. This operation is also known as instance normalization.
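For example, the following sketches create layers that divide the channels into four groups, a single group (layer normalization), and one group per channel (instance normalization):

layer = groupNormalizationLayer(4)               % four groups of channels
layer = groupNormalizationLayer("all-channels")  % one group: layer normalization
layer = groupNormalizationLayer("channel-wise")  % one group per channel: instance normalization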

Properties


Group Normalization

Epsilon — Constant to add to mini-batch variances

1e-5 (default) | positive scalar

Constant to add to the mini-batch variances, specified as a positive scalar.

The software adds this constant to the mini-batch variances before normalization to ensure numerical stability and avoid division by zero.

Before R2023a: Epsilon must be greater than or equal to 1e-5.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
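For example, assuming name-value arguments at creation (as used in the Examples section), this sketch creates a layer with a smaller stability constant; values below 1e-5 require R2023a or later:

layer = groupNormalizationLayer(4,Epsilon=1e-6)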

NumChannels — Number of input channels

"auto" (default) | positive integer

This property is read-only.

Number of input channels, specified as one of the following:

- "auto" – Automatically determine the number of input channels at training time.
- Positive integer – Configure the layer for the specified number of input channels. NumChannels and the number of channels in the layer input data must match.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string

Parameters and Initialization

ScaleInitializer — Function to initialize channel scale factors

'ones' (default) | 'narrow-normal' | function handle

Function to initialize the channel scale factors, specified as one of the following:

- 'ones' – Initialize the channel scale factors with ones.
- 'narrow-normal' – Initialize the channel scale factors by independently sampling from a normal distribution with a mean of zero and a standard deviation of 0.01.
- Function handle – Initialize the channel scale factors with a custom function. If you specify a function handle, then the function must be of the form scale = func(sz), where sz is the size of the scale.

The layer only initializes the channel scale factors when the Scale property is empty.

Data Types: char | string | function_handle

OffsetInitializer — Function to initialize channel offsets

'zeros' (default) | 'ones' | 'narrow-normal' | function handle

Function to initialize the channel offsets, specified as one of the following:

- 'zeros' – Initialize the channel offsets with zeros.
- 'ones' – Initialize the channel offsets with ones.
- 'narrow-normal' – Initialize the channel offsets by independently sampling from a normal distribution with a mean of zero and a standard deviation of 0.01.
- Function handle – Initialize the channel offsets with a custom function. If you specify a function handle, then the function must be of the form offset = func(sz), where sz is the size of the offset.

The layer only initializes the channel offsets when the Offset property is empty.

Data Types: char | string | function_handle
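As an illustrative sketch, you can set both initializers at creation; the custom function handle here is hypothetical, not a toolbox function:

% Sample initial scale factors from a narrow normal distribution and
% initialize offsets with a hypothetical custom function of the form f(sz).
smallUniform = @(sz) 0.01*rand(sz);
layer = groupNormalizationLayer(4, ...
    ScaleInitializer="narrow-normal", ...
    OffsetInitializer=smallUniform)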

Scale — Channel scale factors

[] (default) | numeric array

Channel scale factors γ, specified as a numeric array.

The channel scale factors are learnable parameters. When you train a network using the trainnet function or initialize a dlnetwork object, if Scale is nonempty, then the software uses the Scale property as the initial value. If Scale is empty, then the software uses the initializer specified by ScaleInitializer.

Depending on the type of layer input, the trainnet and dlnetwork functions automatically reshape this property to have one of the following sizes:

Layer Input                                Property Size
feature input                              NumChannels-by-1
vector sequence input                      NumChannels-by-1
1-D image input (since R2023a)             1-by-NumChannels
1-D image sequence input (since R2023a)    1-by-NumChannels
2-D image input                            1-by-1-by-NumChannels
2-D image sequence input                   1-by-1-by-NumChannels
3-D image input                            1-by-1-by-1-by-NumChannels
3-D image sequence input                   1-by-1-by-1-by-NumChannels

Data Types: single | double

Offset — Channel offsets

[] (default) | numeric array

Channel offsets β, specified as a numeric array.

The channel offsets are learnable parameters. When you train a network using the trainnet function or initialize a dlnetwork object, if Offset is nonempty, then the software uses the Offset property as the initial value. If Offset is empty, then the software uses the initializer specified by OffsetInitializer.

Depending on the type of layer input, the trainnet and dlnetwork functions automatically reshape this property to have one of the following sizes:

Layer Input                                Property Size
feature input                              NumChannels-by-1
vector sequence input                      NumChannels-by-1
1-D image input (since R2023a)             1-by-NumChannels
1-D image sequence input (since R2023a)    1-by-NumChannels
2-D image input                            1-by-1-by-NumChannels
2-D image sequence input                   1-by-1-by-NumChannels
3-D image input                            1-by-1-by-1-by-NumChannels
3-D image sequence input                   1-by-1-by-1-by-NumChannels

Data Types: single | double
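For example, assuming the Scale and Offset properties accept initial values at creation like the other Parameters and Initialization properties, this sketch supplies explicit initial values for a layer that expects four input channels; the software reshapes them as in the tables above:

numChannels = 4;  % assumed channel count for illustration
layer = groupNormalizationLayer(2, ...
    Scale=ones(numChannels,1), ...
    Offset=zeros(numChannels,1))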

Learning Rate and Regularization

ScaleLearnRateFactor — Learning rate factor for scale factors

1 (default) | nonnegative scalar

Learning rate factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the scale factors in a layer. For example, if ScaleLearnRateFactor is 2, then the learning rate for the scale factors in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

OffsetLearnRateFactor — Learning rate factor for offsets

1 (default) | nonnegative scalar

Learning rate factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the offsets in a layer. For example, if OffsetLearnRateFactor is 2, then the learning rate for the offsets in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings specified with the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
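For example, to keep the learnable scale factors and offsets fixed during training, set both learning rate factors to zero (a sketch using name-value arguments at creation):

% Freeze the learnable parameters of the group normalization layer.
layer = groupNormalizationLayer(4, ...
    ScaleLearnRateFactor=0, ...
    OffsetLearnRateFactor=0)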

ScaleL2Factor — L2 regularization factor for scale factors

1 (default) | nonnegative scalar

L2 regularization factor for the scale factors, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the scale factors in a layer. For example, if ScaleL2Factor is 2, then the L2 regularization for the scale factors in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

OffsetL2Factor — L2 regularization factor for offsets

1 (default) | nonnegative scalar

L2 regularization factor for the offsets, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the offsets in a layer. For example, if OffsetL2Factor is 2, then the L2 regularization for the offsets in the layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
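Normalization scale factors and offsets are often excluded from weight decay in practice; as a sketch, you can turn off their L2 regularization by setting both factors to zero:

% Exclude the scale factors and offsets from L2 regularization.
layer = groupNormalizationLayer(4, ...
    ScaleL2Factor=0, ...
    OffsetL2Factor=0)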

Layer

Name — Layer name

"" (default) | character vector | string scalar

Layer name, specified as a character vector or string scalar. For Layer array input, the trainnet and dlnetwork functions automatically assign names to layers with the name "".

The GroupNormalizationLayer object stores this property as a character vector.

Data Types: char | string

NumInputs — Number of inputs

1 (default)

This property is read-only.

Number of inputs to the layer, returned as 1. This layer accepts a single input only.

Data Types: double

InputNames — Input names

{'in'} (default)

This property is read-only.

Input names, returned as {'in'}. This layer accepts a single input only.

Data Types: cell

NumOutputs — Number of outputs

1 (default)

This property is read-only.

Number of outputs from the layer, returned as 1. This layer has a single output only.

Data Types: double

OutputNames — Output names

{'out'} (default)

This property is read-only.

Output names, returned as {'out'}. This layer has a single output only.

Data Types: cell

Examples


Create Group Normalization Layer

Create a group normalization layer that normalizes incoming data across three groups of channels. Name the layer groupnorm.

layer = groupNormalizationLayer(3,Name="groupnorm")

layer = 
  GroupNormalizationLayer with properties:

           Name: 'groupnorm'
    NumChannels: 'auto'

   Hyperparameters
      NumGroups: 3
        Epsilon: 1.0000e-05

   Learnable Parameters
         Offset: []
          Scale: []

  Use properties method to see a list of all properties.

Include a group normalization layer in a Layer array. Normalize the incoming 20 channels in four groups.

layers = [
    imageInputLayer([28 28 3])
    convolution2dLayer(5,20)
    groupNormalizationLayer(4)
    reluLayer
    maxPooling2dLayer(2,Stride=2)
    fullyConnectedLayer(10)
    softmaxLayer]

layers = 7x1 Layer array with layers:

 1   ''   Image Input           28x28x3 images with 'zerocenter' normalization
 2   ''   2-D Convolution       20 5x5 convolutions with stride [1  1] and padding [0  0  0  0]
 3   ''   Group Normalization   Group normalization
 4   ''   ReLU                  ReLU
 5   ''   2-D Max Pooling       2x2 max pooling with stride [2  2] and padding [0  0  0  0]
 6   ''   Fully Connected       10 fully connected layer
 7   ''   Softmax               softmax

More About


Group Normalization Layer

A group normalization layer divides the channels of the input data into groups and normalizes the activations across each group. To speed up training of convolutional neural networks and reduce the sensitivity to network initialization, use group normalization layers between convolutional layers and nonlinearities, such as ReLU layers.

You can also use a group normalization layer to perform layer normalization or instance normalization. Layer normalization combines and normalizes activations across all channels in a single observation. Instance normalization normalizes the activations of each channel of the observation separately.
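As an illustrative sketch (assuming default initial learnable parameters in both layers), you can check numerically that a group normalization layer with one group per channel matches the dedicated instanceNormalizationLayer:

% Compare "channel-wise" group normalization to instance normalization.
X = dlarray(rand(8,8,4,2),"SSCB");

net1 = dlnetwork([imageInputLayer([8 8 4],Normalization="none")
                  groupNormalizationLayer("channel-wise")]);
net2 = dlnetwork([imageInputLayer([8 8 4],Normalization="none")
                  instanceNormalizationLayer]);

% Maximum elementwise difference between the two outputs (expected near zero).
d = max(abs(extractdata(predict(net1,X)) - extractdata(predict(net2,X))),[],"all")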

The layer first normalizes the activations of each group by subtracting the group mean and dividing by the group standard deviation. Then, the layer shifts the input by a learnable offset β and scales it by a learnable scale factor γ.

Group normalization layers normalize the activations and gradients propagating through a neural network, making network training an easier optimization problem. To take full advantage of this fact, you can try increasing the learning rate. Since the optimization problem is easier, the parameter updates can be larger and the network can learn faster. You can also try reducing the L2 and dropout regularization.

You can use a group normalization layer in place of a batch normalization layer. Doing so is particularly useful when training with small batch sizes, as it can increase the stability of training.
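As an illustrative sketch (the values here are arbitrary, not recommendations), a training setup with a small mini-batch size, where group normalization is typically preferable to batch normalization, might look like this:

% Small mini-batches favor group normalization over batch normalization.
options = trainingOptions("adam", ...
    MiniBatchSize=8, ...
    InitialLearnRate=1e-3, ...
    MaxEpochs=10);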

Algorithms


Group Normalization Layer

The group normalization operation normalizes the elements x_i of the input by first calculating the mean μ_G and variance σ_G² over spatial, time, and grouped subsets of the channel dimensions for each observation independently. Then, it calculates the normalized activations as

    x̂_i = (x_i − μ_G) / √(σ_G² + ϵ)

where ϵ is a constant that improves numerical stability when the variance is very small. To allow for the possibility that inputs with zero mean and unit variance are not optimal for the operations that follow group normalization, the group normalization operation further shifts and scales the activations using the transformation

    y_i = γ x̂_i + β

where the offset β and scale factor γ are learnable parameters that are updated during network training.
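The following sketch (illustrative only, with arbitrary sizes) applies this computation directly to a single observation of 5-by-5 images with four channels split into two groups:

% Manual group normalization: 5-by-5 spatial, 4 channels, 2 groups.
X = rand(5,5,4);
numGroups = 2;
epsilon = 1e-5;

Xg = reshape(X,5,5,[],numGroups);            % split channels into groups
mu = mean(Xg,[1 2 3]);                       % per-group mean
sigma2 = var(Xg,1,[1 2 3]);                  % per-group variance
Xhat = (Xg - mu)./sqrt(sigma2 + epsilon);    % normalize
Xhat = reshape(Xhat,5,5,4);

gamma = ones(1,1,4);                         % learnable scale, initial value
beta = zeros(1,1,4);                         % learnable offset, initial value
Y = gamma.*Xhat + beta;                      % shift and scale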

Layer Input and Output Formats

Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray objects. The format of a dlarray object is a string of characters in which each character describes the corresponding dimension of the data. The formats consist of one or more of these characters:

- "S" – Spatial
- "C" – Channel
- "B" – Batch
- "T" – Time
- "U" – Unspecified

For example, you can describe 2-D image data that is represented as a 4-D array, where the first two dimensions correspond to the spatial dimensions of the images, the third dimension corresponds to the channels of the images, and the fourth dimension corresponds to the batch dimension, as having the format "SSCB" (spatial, spatial, channel, batch).
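For example, this sketch creates such a formatted dlarray for a mini-batch of 16 images (sizes are arbitrary):

% A batch of 16 RGB images, 28-by-28 pixels, with format "SSCB".
X = dlarray(rand(28,28,3,16),"SSCB");
dims(X)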

You can interact with these dlarray objects in automatic differentiation workflows, such as those for developing a custom layer, using a functionLayer object, or using the forward and predict functions with dlnetwork objects.

This table shows the supported input formats of GroupNormalizationLayer objects and the corresponding output format. If the software passes the output of the layer to a custom layer that does not inherit from the nnet.layer.Formattable class, or a FunctionLayer object with the Formattable property set to 0 (false), then the layer receives an unformatted dlarray object with dimensions ordered according to the formats in this table. The formats listed here are only a subset. The layer may support additional formats such as formats with additional "S" (spatial) or "U" (unspecified) dimensions.

Input Format                                                  Output Format
"CB" (channel, batch)                                         "CB" (channel, batch)
"SCB" (spatial, channel, batch)                               "SCB" (spatial, channel, batch)
"SSCB" (spatial, spatial, channel, batch)                     "SSCB" (spatial, spatial, channel, batch)
"SSSCB" (spatial, spatial, spatial, channel, batch)           "SSSCB" (spatial, spatial, spatial, channel, batch)
"CBT" (channel, batch, time)                                  "CBT" (channel, batch, time)
"SCBT" (spatial, channel, batch, time)                        "SCBT" (spatial, channel, batch, time)
"SSCBT" (spatial, spatial, channel, batch, time)              "SSCBT" (spatial, spatial, channel, batch, time)
"SSSCBT" (spatial, spatial, spatial, channel, batch, time)    "SSSCBT" (spatial, spatial, spatial, channel, batch, time)
"CU" (channel, unspecified)                                   "CU" (channel, unspecified)
"SC" (spatial, channel)                                       "SC" (spatial, channel)
"SSC" (spatial, spatial, channel)                             "SSC" (spatial, spatial, channel)
"SSSC" (spatial, spatial, spatial, channel)                   "SSSC" (spatial, spatial, spatial, channel)

In dlnetwork objects, GroupNormalizationLayer objects also support these input and output format combinations.

Input Format                                                  Output Format
"CT" (channel, time)                                          "CT" (channel, time)
"SCT" (spatial, channel, time)                                "SCT" (spatial, channel, time)
"SSCT" (spatial, spatial, channel, time)                      "SSCT" (spatial, spatial, channel, time)
"SSSCT" (spatial, spatial, spatial, channel, time)            "SSSCT" (spatial, spatial, spatial, channel, time)

References

[1] Wu, Yuxin, and Kaiming He. “Group Normalization.” Preprint submitted June 11, 2018. https://arxiv.org/abs/1803.08494.

Extended Capabilities

C/C++ Code Generation

Generate C and C++ code using MATLAB® Coder™.

GPU Code Generation

Generate CUDA® code for NVIDIA® GPUs using GPU Coder™.

Version History

Introduced in R2020b


R2023a: Epsilon supports values less than 1e-5

The Epsilon option also supports positive values less than 1e-5.

R2023a: Layer supports 1-D image sequence data

GroupNormalizationLayer objects support normalizing 1-D image sequence data (data with one spatial and one time dimension).