Convolution1DLayer - 1-D convolutional layer - MATLAB

1-D convolutional layer

Since R2021b

Description

A 1-D convolutional layer applies sliding convolutional filters to 1-D input. The layer convolves the input by moving the filters along the input and computing the dot product of the weights and the input, then adding a bias term.

The dimension that the layer convolves over depends on the layer input:

For input that has a time dimension but no spatial dimension, such as data in "CBT" (channel, batch, time) format, the layer convolves over the time dimension.

For input that has a spatial dimension, such as data in "SCB" (spatial, channel, batch) format, the layer convolves over the spatial dimension.

Creation

Syntax

Description

layer = convolution1dLayer(filterSize,numFilters) creates a 1-D convolutional layer and sets the FilterSize and NumFilters properties.

layer = convolution1dLayer(filterSize,numFilters,Name=Value) sets optional properties using one or more name-value arguments.

Input Arguments


Width of the filters, specified as a positive integer.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Number of filters, specified as a positive integer. This number corresponds to the number of neurons in the layer that connect to the same region in the input. This parameter determines the number of channels (feature maps) in the layer output.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Name-Value Arguments


Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: convolution1dLayer(11,96,Padding=1) creates a 1-D convolutional layer with 96 filters of size 11, and specifies padding of size 1 on the left and right of the layer input.

Step size for traversing the input, specified as a positive integer.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Factor for dilated convolution (also known as atrous convolution), specified as a positive integer.

Use dilated convolutions to increase the receptive field (the area of the input that the layer can see) of the layer without increasing the number of parameters or computation.

The layer expands the filters by inserting zeros between each filter element. The dilation factor determines the step size for sampling the input, or equivalently, the upsampling factor of the filter. It corresponds to an effective filter size of (FilterSize – 1) .* DilationFactor + 1. For example, a 1-by-3 filter with a dilation factor of 2 is equivalent to a 1-by-5 filter with zeros between the elements.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
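For illustration, here is a minimal sketch of a dilated layer; the filter width of 3, 16 filters, and dilation factor of 2 are arbitrary example values.

layer = convolution1dLayer(3,16,DilationFactor=2);
effectiveFilterSize = (layer.FilterSize - 1).*layer.DilationFactor + 1   % (3 - 1)*2 + 1 = 5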

Padding to apply to the input, specified as one of the following:

Example: Padding=[2 1] adds padding of size 2 to the left and size 1 to the right of the input.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
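As an illustration, a minimal sketch of two ways to specify padding when creating the layer; the filter size and number of filters are arbitrary example values.

layerA = convolution1dLayer(5,32,Padding="same");   % pad so the output length matches the input length when Stride is 1
layerB = convolution1dLayer(5,32,Padding=[2 1]);    % pad 2 elements on the left and 1 on the right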

Value to pad data, specified as one of the following:

PaddingValue Description Example
Scalar Pad with the specified scalar value. [3 1 4] → [0 0 3 1 4 0 0]
"symmetric-include-edge" Pad using mirrored values of the input, including the edge values. [3 1 4] → [1 3 3 1 4 4 1]
"symmetric-exclude-edge" Pad using mirrored values of the input, excluding the edge values. [3 1 4] → [4 1 3 1 4 1 3]
"replicate" Pad using repeated border elements of the input. [3 1 4] → [3 3 3 1 4 4 4]

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string
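For example, a minimal sketch that combines a padding size with a nonzero padding value; the sizes are arbitrary example values.

layer = convolution1dLayer(3,8,Padding=2,PaddingValue="replicate");   % repeat border elements instead of padding with zeros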

Number of input channels, specified as one of the following:

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string

Function to initialize the weights, specified as one of the following:

The layer only initializes the weights when the Weights property is empty.

Data Types: char | string | function_handle

Function to initialize the biases, specified as one of these values:

The layer initializes the biases only when the Bias property is empty.

Data Types: char | string | function_handle

Layer weights for the convolution operation, specified as a FilterSize-by-NumChannels-by-NumFilters numeric array or [].

The layer weights are learnable parameters. You can specify the initial value of the weights directly using the Weights property of the layer. When you train a network, if the Weights property of the layer is nonempty, then the trainnet and trainNetwork functions use the Weights property as the initial value. If the Weights property is empty, then the software uses the initializer specified by the WeightsInitializer property of the layer.

Data Types: single | double

Layer biases for the convolution operation, specified as a 1-by-NumFilters numeric array or [].

The layer biases are learnable parameters. When you train a neural network, if Bias is nonempty, then the trainnet and trainNetwork functions use the Bias property as the initial value. If Bias is empty, then the software uses the initializer specified by BiasInitializer.

Data Types: single | double
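To bypass the initializer functions, a minimal sketch of setting the weights and biases directly; the sizes assume a filter width of 5, 3 input channels, and 16 filters.

layer = convolution1dLayer(5,16,NumChannels=3);
layer.Weights = 0.01*randn(5,3,16);   % FilterSize-by-NumChannels-by-NumFilters
layer.Bias = zeros(1,16);             % 1-by-NumFilters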

Learning rate factor for the weights, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the weights in this layer. For example, if WeightLearnRateFactor is 2, then the learning rate for the weights in this layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Learning rate factor for the biases, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

L2 regularization factor for the weights, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the weights in this layer. For example, if WeightL2Factor is 2, then the L2 regularization for the weights in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

L2 regularization factor for the biases, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
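As an illustration, a minimal sketch that sets the per-layer learning rate and L2 regularization factors at creation time; the factor values are arbitrary.

layer = convolution1dLayer(3,16, ...
    WeightLearnRateFactor=2, ...   % weights learn at twice the global rate
    BiasLearnRateFactor=0, ...     % freeze the biases
    BiasL2Factor=0);               % do not regularize the biases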

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainnet and dlnetwork functions automatically assign names to layers with the name "".

This argument sets the Name property.

Data Types: char | string

Properties


Convolution

This property is read-only.

Width of the filters, specified as a positive integer.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

This property is read-only.

Number of filters, specified as a positive integer. This number corresponds to the number of neurons in the layer that connect to the same region in the input. This parameter determines the number of channels (feature maps) in the layer output.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Step size for traversing the input, specified as a positive integer.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Factor for dilated convolution (also known as atrous convolution), specified as a positive integer.

Use dilated convolutions to increase the receptive field (the area of the input that the layer can see) of the layer without increasing the number of parameters or computation.

The layer expands the filters by inserting zeros between each filter element. The dilation factor determines the step size for sampling the input, or equivalently, the upsampling factor of the filter. It corresponds to an effective filter size of (FilterSize – 1) .* DilationFactor + 1. For example, a 1-by-3 filter with a dilation factor of 2 is equivalent to a 1-by-5 filter with zeros between the elements.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Size of padding to apply to each side of the input, specified as a vector [l r] of two nonnegative integers, where l is the padding applied to the left and r is the padding applied to the right.

When you create a layer, use the Padding name-value argument to specify the padding size.

Data Types: double
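For example, a minimal sketch showing how the Padding name-value argument populates this property:

layer = convolution1dLayer(4,8,Padding=[2 1]);
layer.PaddingSize   % returns [2 1]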

This property is read-only.

Method to determine padding size, specified as one of the following:

To specify the layer padding, use the Padding name-value argument.

Data Types: char

This property is read-only.

Value to pad data, specified as one of the following:

PaddingValue Description Example
Scalar Pad with the specified scalar value. [3 1 4] → [0 0 3 1 4 0 0]
"symmetric-include-edge" Pad using mirrored values of the input, including the edge values. [3 1 4] → [1 3 3 1 4 4 1]
"symmetric-exclude-edge" Pad using mirrored values of the input, excluding the edge values. [3 1 4] → [4 1 3 1 4 1 3]
"replicate" Pad using repeated border elements of the input. [3 1 4] → [3 3 3 1 4 4 4]

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string

This property is read-only.

Number of input channels, specified as one of the following:

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | char | string

Parameters and Initialization

Function to initialize the weights, specified as one of the following:

The layer only initializes the weights when the Weights property is empty.

Data Types: char | string | function_handle

Function to initialize the biases, specified as one of these values:

The layer initializes the biases only when the Bias property is empty.

The Convolution1DLayer object stores this property as a character vector or a function handle.

Data Types: char | string | function_handle

Layer weights for the convolution operation, specified as a FilterSize-by-NumChannels-by-NumFilters numeric array or [].

The layer weights are learnable parameters. You can specify the initial value of the weights directly using the Weights property of the layer. When you train a network, if the Weights property of the layer is nonempty, then the trainnet function uses the Weights property as the initial value. If the Weights property is empty, then the software uses the initializer specified by the WeightsInitializer property of the layer.

Data Types: single | double

Layer biases for the convolution operation, specified as a 1-by-NumFilters numeric array or [].

The layer biases are learnable parameters. When you train a neural network, if Bias is nonempty, then the trainnet function uses the Bias property as the initial value. If Bias is empty, then the software uses the initializer specified by BiasInitializer.

Data Types: single | double

Learning Rate and Regularization

Learning rate factor for the weights, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the weights in this layer. For example, if WeightLearnRateFactor is 2, then the learning rate for the weights in this layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Learning rate factor for the biases, specified as a nonnegative scalar.

The software multiplies this factor by the global learning rate to determine the learning rate for the biases in this layer. For example, if BiasLearnRateFactor is 2, then the learning rate for the biases in the layer is twice the current global learning rate. The software determines the global learning rate based on the settings you specify using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

L2 regularization factor for the weights, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the weights in this layer. For example, if WeightL2Factor is 2, then the L2 regularization for the weights in this layer is twice the global L2 regularization factor. You can specify the global L2 regularization factor using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

L2 regularization factor for the biases, specified as a nonnegative scalar.

The software multiplies this factor by the global L2 regularization factor to determine the L2 regularization for the biases in this layer. For example, if BiasL2Factor is 2, then the L2 regularization for the biases in this layer is twice the global L2 regularization factor. The software determines the global L2 regularization factor based on the settings you specify using the trainingOptions function.

Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64

Layer

Layer name, specified as a character vector or a string scalar. For Layer array input, the trainnet and dlnetwork functions automatically assign names to layers with the name "".

Data Types: char | string

This property is read-only.

Number of inputs to the layer, stored as 1. This layer accepts a single input only.

Data Types: double

This property is read-only.

Input names, stored as {'in'}. This layer accepts a single input only.

Data Types: cell

This property is read-only.

Number of outputs from the layer, stored as 1. This layer has a single output only.

Data Types: double

This property is read-only.

Output names, stored as {'out'}. This layer has a single output only.

Data Types: cell

Examples


Create a 1-D convolutional layer with 96 filters of width 11.

layer = convolution1dLayer(11,96);

Include a 1-D convolutional layer in a Layer array.

layers = [
    sequenceInputLayer(3,MinLength=20)
    layer
    reluLayer
    globalMaxPooling1dLayer
    fullyConnectedLayer(10)
    softmaxLayer]

layers = 6×1 Layer array with layers:

 1   ''   Sequence Input           Sequence input with 3 dimensions
 2   ''   1-D Convolution          96 11 convolutions with stride 1 and padding [0  0]
 3   ''   ReLU                     ReLU
 4   ''   1-D Global Max Pooling   1-D global max pooling
 5   ''   Fully Connected          10 fully connected layer
 6   ''   Softmax                  softmax
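To check the end-to-end behavior, a minimal sketch that assembles these layers into a dlnetwork and passes a random 20-time-step sequence through it; the input sizes follow the sequenceInputLayer above.

net = dlnetwork(layers);
X = dlarray(rand(3,1,20),"CBT");   % 3 channels, 1 observation, 20 time steps
Y = predict(net,X);
size(Y)                            % 10-by-1: softmax scores for one observation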

Algorithms


A 1-D convolutional layer applies sliding convolutional filters to 1-D input. The layer convolves the input by moving the filters along the input and computing the dot product of the weights and the input, then adding a bias term.

The dimension that the layer convolves over depends on the layer input:

For input that has a time dimension but no spatial dimension, such as data in "CBT" (channel, batch, time) format, the layer convolves over the time dimension.

For input that has a spatial dimension, such as data in "SCB" (spatial, channel, batch) format, the layer convolves over the spatial dimension.

Layers in a layer array or layer graph pass data to subsequent layers as formatted dlarray objects. The format of a dlarray object is a string of characters in which each character describes the corresponding dimension of the data. The format consists of one or more of these characters:

"S" — Spatial
"C" — Channel
"B" — Batch
"T" — Time
"U" — Unspecified

For example, you can represent vector sequence data as a 3-D array, in which the first dimension corresponds to the channel dimension, the second dimension corresponds to the batch dimension, and the third dimension corresponds to the time dimension. This representation is in the format "CBT" (channel, batch, time).

You can interact with these dlarray objects in automatic differentiation workflows, such as those for developing a custom layer, using a functionLayer object, or using the forward and predict functions with dlnetwork objects.
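As an illustration, a minimal sketch of passing a formatted dlarray in "CBT" format through a 1-D convolutional layer inside a dlnetwork; all sizes are arbitrary example values.

net = dlnetwork([sequenceInputLayer(3,MinLength=20) convolution1dLayer(11,96)]);
X = dlarray(rand(3,5,20),"CBT");   % 3 channels, 5 observations, 20 time steps
Y = predict(net,X);
dims(Y)                            % 'CBT'
size(Y)                            % 96 channels, 5 observations, and 20 - 11 + 1 = 10 time steps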

This table shows the supported input formats of Convolution1DLayer objects and the corresponding output format. If the software passes the output of the layer to a custom layer that does not inherit from the nnet.layer.Formattable class, or a FunctionLayer object with the Formattable property set to 0 (false), then the layer receives an unformatted dlarray object with dimensions ordered according to the formats in this table. The formats listed here are only a subset. The layer may support additional formats such as formats with additional "S" (spatial) or "U" (unspecified) dimensions.

Input Format Output Format
"SCB" (spatial, channel, batch) "SCB" (spatial, channel, batch)
"CBT" (channel, batch, time) "CBT" (channel, batch, time)
"SCBT" (spatial, channel, batch, time) "SCBT" (spatial, channel, batch, time)

In dlnetwork objects, Convolution1DLayer objects also support these input and output format combinations.

Input Format Output Format
"SC" (spatial, channel) "SC" (spatial, channel)
"CT" (channel, time) "CT" (channel, time)
"SCT" (spatial, channel, time) "SCT" (spatial, channel, time)

References

[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the Difficulty of Training Deep Feedforward Neural Networks." In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, 249–256. Sardinia, Italy: AISTATS, 2010. https://proceedings.mlr.press/v9/glorot10a/glorot10a.pdf

[2] He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. "Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification." In 2015 IEEE International Conference on Computer Vision (ICCV), 1026–34. Santiago, Chile: IEEE, 2015. https://doi.org/10.1109/ICCV.2015.123

Extended Capabilities



Version History

Introduced in R2021b