quantizationDetails - Display quantization details for a neural network - MATLAB

This example shows how to display the quantization details for a neural network.

Load the pretrained network. net is a SqueezeNet convolutional neural network that has been retrained using transfer learning to classify images in the MerchData data set.

load squeezedlnetmerch
net

net = dlnetwork with properties:

     Layers: [67×1 nnet.cnn.layer.Layer]
Connections: [74×2 table]
 Learnables: [52×3 table]
      State: [0×3 table]
 InputNames: {'data'}
OutputNames: {'prob'}
Initialized: 1

View summary with summary.

Use the quantizationDetails function to see that the network is not quantized.

qDetails_original = quantizationDetails(net)

qDetails_original = struct with fields:
            IsQuantized: 0
          TargetLibrary: ""
    QuantizedLayerNames: [0×0 string]
    QuantizedLearnables: [0×3 table]

The IsQuantized field returns 0 (false) because the original network uses the single-precision floating-point data type.
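
If you are scripting a quantization workflow, you can branch on this field before calibrating and quantizing. A minimal sketch (the guard itself is illustrative and not part of the original example):

% Proceed only if the network has not already been quantized
if ~qDetails_original.IsQuantized
    disp("Network uses single precision - proceeding with calibration and quantization.")
end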

Unzip and load the MerchData images as an image datastore. Define an augmentedImageDatastore object to resize the data for the network, and split the data into calibration and validation data sets to use for quantization.

unzip('MerchData.zip');
imds = imageDatastore('MerchData', ...
    'IncludeSubfolders',true, ...
    'LabelSource','foldernames');
[calData, valData] = splitEachLabel(imds, 0.7, 'randomized');
aug_calData = augmentedImageDatastore([227 227], calData);
aug_valData = augmentedImageDatastore([227 227], valData);

Create a dlquantizer object and specify the network to quantize. Set the execution environment to MATLAB. When you use the MATLAB execution environment, quantization is performed using the fi fixed-point data type, which requires a Fixed-Point Designer™ license.

quantObj = dlquantizer(net,'ExecutionEnvironment','MATLAB');
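
The 'MATLAB' execution environment lets you simulate quantized behavior without target hardware. If you plan to deploy to a GPU or FPGA target instead, set 'ExecutionEnvironment' accordingly when you create the dlquantizer object. The following line is a sketch only and is not used in the rest of this example:

quantObjGPU = dlquantizer(net,'ExecutionEnvironment','GPU');  % GPU deployment target (support requirements differ from the MATLAB environment)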

Use the calibrate function to exercise the network with sample inputs and collect range information.

calResults = calibrate(quantObj,aug_calData);
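
You can optionally preview the collected statistics before quantizing. A minimal sketch, assuming calResults is the table of minimum and maximum values that calibrate returns by default:

% Display the first few rows of calibration ranges for learnables and activations
head(calResults)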

Use the quantize method to quantize the network object and return a simulatable quantized network.

qNet = quantize(quantObj)

qNet = Quantized dlnetwork with properties:

     Layers: [67×1 nnet.cnn.layer.Layer]
Connections: [74×2 table]
 Learnables: [52×3 table]
      State: [0×3 table]
 InputNames: {'data'}
OutputNames: {'prob'}
Initialized: 1

View summary with summary. Use the quantizationDetails function to extract quantization details.

Use the quantizationDetails method to extract the quantization details.

qDetails = quantizationDetails(qNet)

qDetails = struct with fields:
            IsQuantized: 1
          TargetLibrary: "none"
    QuantizedLayerNames: [53×1 string]
    QuantizedLearnables: [52×3 table]

Inspect the QuantizedLayerNames field to see a list of the quantized layers.

qDetails.QuantizedLayerNames

ans = 53×1 string
    "conv1"
    "relu_conv1"
    "fire2-squeeze1x1"
    "fire2-relu_squeeze1x1"
    "fire2-expand1x1"
    "fire2-relu_expand1x1"
    "fire2-expand3x3"
    "fire2-relu_expand3x3"
    "fire3-squeeze1x1"
    "fire3-relu_squeeze1x1"
    "fire3-expand1x1"
    "fire3-relu_expand1x1"
    "fire3-expand3x3"
    "fire3-relu_expand3x3"
    "fire4-squeeze1x1"
    "fire4-relu_squeeze1x1"
    "fire4-expand1x1"
    "fire4-relu_expand1x1"
    "fire4-expand3x3"
    "fire4-relu_expand3x3"
    "fire5-squeeze1x1"
    "fire5-relu_squeeze1x1"
    "fire5-expand1x1"
    "fire5-relu_expand1x1"
    "fire5-expand3x3"
    "fire5-relu_expand3x3"
    "fire6-squeeze1x1"
    "fire6-relu_squeeze1x1"
    "fire6-expand1x1"
    "fire6-relu_expand1x1"
      ⋮
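
Both the convolution layers and the ReLU layers that follow them appear in the list. To check programmatically whether a particular layer was quantized, you can test for membership in this string array; a minimal sketch:

% Returns true (1) if the conv1 layer was quantized
ismember("conv1",qDetails.QuantizedLayerNames)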

Inspect the QuantizedLearnables field to see the quantized values for learnable parameters in the network.

qDetails.QuantizedLearnables

ans=52×3 table
          Layer            Parameter          Value        
    __________________    _________    __________________

"conv1"               "Weights"    {3×3×3×64   int8 }
"conv1"               "Bias"       {1×1×64     int32}
"fire2-squeeze1x1"    "Weights"    {1×1×64×16  int8 }
"fire2-squeeze1x1"    "Bias"       {1×1×16     int32}
"fire2-expand1x1"     "Weights"    {1×1×16×64  int8 }
"fire2-expand1x1"     "Bias"       {1×1×64     int32}
"fire2-expand3x3"     "Weights"    {3×3×16×64  int8 }
"fire2-expand3x3"     "Bias"       {1×1×64     int32}
"fire3-squeeze1x1"    "Weights"    {1×1×128×16 int8 }
"fire3-squeeze1x1"    "Bias"       {1×1×16     int32}
"fire3-expand1x1"     "Weights"    {1×1×16×64  int8 }
"fire3-expand1x1"     "Bias"       {1×1×64     int32}
"fire3-expand3x3"     "Weights"    {3×3×16×64  int8 }
"fire3-expand3x3"     "Bias"       {1×1×64     int32}
"fire4-squeeze1x1"    "Weights"    {1×1×128×32 int8 }
"fire4-squeeze1x1"    "Bias"       {1×1×32     int32}
  ⋮
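
The weights are stored as int8 values and the biases as int32 values. To work with the raw quantized values, index into the table. A minimal sketch, assuming the Layer and Parameter variables are string arrays as shown above:

% Extract the quantized conv1 weights and confirm their integer type and range
idx = qDetails.QuantizedLearnables.Layer == "conv1" & ...
    qDetails.QuantizedLearnables.Parameter == "Weights";
w = qDetails.QuantizedLearnables.Value{idx};
class(w)
[min(w(:)) max(w(:))]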