Considerations for Supported Layers for Quantization - MATLAB & Simulink

Layers that are supported for quantization have some limitations. When you encounter these limitations, first consider using the prepareNetwork function to automatically resolve some network architectures that cause errors in the quantization workflow. If you continue to encounter errors, refer to the limitations on this page for information about specific network architectures, layers, and execution environments. For a list of supported layers for each execution environment, see Supported Layers for Quantization.

Benefits of Network Preparation

The prepareNetwork function, or the Prepare network for quantization option in the Deep Network Quantizer app, automatically resolves some network architectures that cause issues in the quantization workflow. Key transformations that the prepareNetwork function performs include converting the network to a dlnetwork object and fusing or replacing batch normalization layers.
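For example, the following is a minimal sketch of preparing a pretrained network before quantization. It assumes you have the Deep Learning Toolbox Model Quantization Library and the support package that provides the pretrained network; the variable names are illustrative.

    % Load a pretrained network (SqueezeNet used here only as an illustration;
    % requires the corresponding support package).
    net = imagePretrainedNetwork("squeezenet");

    % Automatically apply the preparation transformations, such as batch
    % normalization fusion, and return a dlnetwork ready for quantization.
    netPrepared = prepareNetwork(net);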

Network Architecture and Properties

The network you quantize must meet specific criteria regarding its overall architecture and properties.

Conditionally Quantized Layers

For a layer without learnable parameters, quantizing the layer means that the software performs the layer computations using fixed-point data. In the MATLAB execution environment, many layers that can be quantized but do not have any learnable parameters are quantized on the condition that the input data to the layer is fixed-point.

These layers are conditionally quantized in the MATLAB execution environment:

For layers with multiple inputs, like additionLayer or multiplicationLayer, all inputs must be fixed-point for the layer to perform its computations in fixed point.

Execution Environment Specific Limitations

Some limitations apply only to specific execution environments.

MATLAB Execution Environment

To use the MATLAB execution environment for calibration, quantization, and validation, your network must be a dlnetwork object. Use the prepareNetwork function or the Network Preparation step in the Deep Network Quantizer app to convert your network to a dlnetwork object.
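The sketch below outlines calibrating and quantizing a network with the MATLAB execution environment. It assumes netPrepared is a prepared dlnetwork and calData is a datastore of representative calibration data; both names are illustrative.

    % Create a quantizer for the prepared network, targeting MATLAB execution.
    quantObj = dlquantizer(netPrepared, ExecutionEnvironment="MATLAB");

    % Exercise the network with representative data to collect dynamic ranges.
    calResults = calibrate(quantObj, calData);

    % Produce a quantized network that performs supported layers in fixed point.
    netQuantized = quantize(quantObj);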

This table outlines limitations in the MATLAB execution environment for specific layers in the quantization workflow. For additional limitations that may affect code generation, see List of Deep Learning Layer Blocks and Subsystems.

Layer Names | Limitations in MATLAB Execution Environment
averagePooling1dLayer and averagePooling2dLayer | The PaddingValue value must be 0.
batchNormalizationLayer | If the batchNormalizationLayer layer follows a convolution1dLayer, convolution2dLayer, groupedConvolution2dLayer, or fullyConnectedLayer layer, fuse the batchNormalizationLayer layer to the convolutional or fully connected layer using the prepareNetwork function to support quantization. For other network architectures, the prepareNetwork function replaces the batchNormalizationLayer layer with a convolutional or fully connected layer. Batch normalization layers that are not fused or replaced are not supported for quantization.
lstmLayer and lstmProjectedLayer | The StateActivationFunction value must be "tanh" or "softsign".
Rescale-Symmetric 1D, Rescale-Symmetric 2D, Rescale-Zero-One 1D, and Rescale-Zero-One 2D | The Output minimum and Output maximum values must not be equal.
Zerocenter 1D, Zerocenter 2D, Zscore 1D, and Zscore 2D | The Mean value of the corresponding imageInputLayer, featureInputLayer, or sequenceInputLayer must be nonzero.
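As an illustrative check against the LSTM limitation above, you can inspect the StateActivationFunction property of the layers in your network before quantizing. The loop below is only a sketch and assumes net is a dlnetwork variable in your workspace; it does not cover lstmProjectedLayer.

    % Warn about LSTM layers whose state activation function is not supported
    % for quantization in the MATLAB execution environment.
    for i = 1:numel(net.Layers)
        layer = net.Layers(i);
        if isa(layer, "nnet.cnn.layer.LSTMLayer") && ...
                ~ismember(layer.StateActivationFunction, ["tanh" "softsign"])
            warning("Layer %s uses StateActivationFunction ""%s"".", ...
                layer.Name, layer.StateActivationFunction);
        end
    end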

GPU Execution Environment

This table outlines limitations in the GPU execution environment for specific layers in the quantization workflow. For additional limitations that affect code generation, see Supported Networks, Layers, and Classes (GPU Coder).

FPGA Execution Environment

This table outlines limitations in the FPGA execution environment for specific layers in the quantization workflow. For additional limitations that affect code generation, see Supported Networks, Boards, and Tools (Deep Learning HDL Toolbox).

See Also

dlquantizer | Deep Network Quantizer
