Supported Networks, Layers, and Classes

GPU Coder™ supports code generation for series and directed acyclic graph (DAG) neural networks. This page lists the supported deep learning networks, layers, and classes.

Note that the code generator represents characters in an 8-bit ASCII codeset determined by the locale setting. Therefore, using non-ASCII characters in class names, layer names, layer descriptions, or network names might result in errors. For more information, see Encoding of Characters in Code Generation.

Supported Pretrained Networks

Using Deep Learning Toolbox™, you can train a neural network on a CPU, a GPU, or multiple GPUs, or you can use one of the pretrained networks listed in the table below and generate CUDA® code.

These pretrained networks, available in Deep Learning Toolbox, are supported for code generation. You can use the imagePretrainedNetwork (Deep Learning Toolbox) function to load them as dlnetwork (Deep Learning Toolbox) objects for code generation. Note that code generation does not support setting the Weights name-value argument of imagePretrainedNetwork to "none". For example, use this code to load a pretrained GoogLeNet neural network.

net = imagePretrainedNetwork("googlenet")
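
The code generation workflow that typically follows is sketched below: save the loaded network to a MAT-file, then load it inside an entry-point function by using coder.loadDeepLearningNetwork. The entry-point function name, MAT-file name, input size, and "SSC" data format here are illustrative assumptions for GoogLeNet; adapt them to your network.

% Save the pretrained dlnetwork to a MAT-file.
net = imagePretrainedNetwork("googlenet");
save("googlenet.mat", "net");

% googlenet_predict.m -- entry-point function for code generation.
function out = googlenet_predict(in) %#codegen
persistent dlnet;
if isempty(dlnet)
    % Load the network once and reuse it across calls.
    dlnet = coder.loadDeepLearningNetwork("googlenet.mat");
end
dlIn = dlarray(in, "SSC");      % spatial-spatial-channel input format
dlOut = predict(dlnet, dlIn);
out = extractdata(dlOut);       % return plain numeric data
end

You can then generate, for example, CUDA MEX code for the entry-point function:

cfg = coder.gpuConfig("mex");
cfg.DeepLearningConfig = coder.DeepLearningConfig("cudnn");
codegen -config cfg googlenet_predict -args {ones(224,224,3,"single")}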

The following imagePretrainedNetwork model name arguments are supported for CUDA code generation and for the TensorRT and cuDNN target libraries:
"alexnet"
"darknet19"
"darknet53"
"densenet201"
"efficientnetb0"
"googlenet"
"inceptionresnetv2"
"inceptionv3"
"mobilenetv2"
"nasnetlarge"
"nasnetmobile"
"resnet18"
"resnet50"
"resnet101"
"shufflenet"
"squeezenet"
"vgg16"
"vgg19"
"xception"

Additionally, you can use these functions to create neural networks for code generation.

Supported Layers

These layers are supported for code generation by GPU Coder for these target deep learning libraries.

Note

In the following tables, the information icon indicates that the network, layer, or class has limited code generation capabilities. You might see errors and unexpected behavior. For more information, see the Extended Capabilities section on the page for that network, layer, or class.

Input Layers

Convolution and Fully Connected Layers

Sequence Layers

Activation Layers

Utility Layers

Pooling and Unpooling Layers

Combination Layers

Transformer Layers

Object Detection Layers

Output Layers

Custom Keras Layers

The following Keras layers are supported for code generation with the CUDA, TensorRT, and cuDNN targets:
nnet.keras.layer.ClipLayer (Deep Learning Toolbox)
nnet.keras.layer.FlattenCStyleLayer (Deep Learning Toolbox)
nnet.keras.layer.GlobalAveragePooling2dLayer (Deep Learning Toolbox)
nnet.keras.layer.PreluLayer (Deep Learning Toolbox)
nnet.keras.layer.SigmoidLayer (Deep Learning Toolbox)
nnet.keras.layer.TanhLayer (Deep Learning Toolbox)
nnet.keras.layer.TimeDistributedFlattenCStyleLayer (Deep Learning Toolbox)
nnet.keras.layer.ZeroPadding2dLayer (Deep Learning Toolbox)

Custom ONNX Layers

The following ONNX layers are supported for code generation with the CUDA, TensorRT, and cuDNN targets:
nnet.onnx.layer.ClipLayer (Deep Learning Toolbox)
nnet.onnx.layer.ElementwiseAffineLayer (Deep Learning Toolbox)
nnet.onnx.layer.FlattenInto2dLayer (Deep Learning Toolbox)
nnet.onnx.layer.FlattenLayer (Deep Learning Toolbox)
nnet.onnx.layer.GlobalAveragePooling2dLayer (Deep Learning Toolbox)
nnet.onnx.layer.IdentityLayer (Deep Learning Toolbox)
nnet.onnx.layer.PreluLayer (Deep Learning Toolbox)
nnet.onnx.layer.SigmoidLayer (Deep Learning Toolbox)
nnet.onnx.layer.TanhLayer (Deep Learning Toolbox)
nnet.onnx.layer.VerifyBatchSizeLayer (Deep Learning Toolbox)

Custom Layers

Custom layers, with or without learnable parameters, that you define for your problem are supported for code generation with the CUDA, TensorRT, and cuDNN targets. To learn how to define custom deep learning layers, see Define Custom Deep Learning Layers (Deep Learning Toolbox) and Define Custom Deep Learning Layer for Code Generation (Deep Learning Toolbox). For an example that shows how to generate code for a network with custom layers, see Code Generation for Object Detection Using YOLO v3 Deep Learning Network.

Code generation limitations:

- The outputs of the custom layer must be fixed-size arrays.
- Setting the MallocMode property of coder.gpuConfig to "unified" requires extra memory copies, which leads to slower performance. Use "discrete" instead. For more information on GPU memory allocation, see Discrete and Managed Modes.
- cuDNN targets support both row-major and column-major code generation for custom layers. TensorRT targets support only column-major code generation.
- Custom layers must contain the %#codegen pragma.
- A sequence network that contains custom layers together with LSTM or GRU layers is not supported.
- You can pass dlarray objects to custom layers if either:
  - The custom layer is in a dlnetwork.
  - The custom layer is in a DAG or series network and either inherits from nnet.layer.Formattable or has no backward propagation.
- For unsupported dlarray methods, you must extract the underlying data from the dlarray, perform the computations, and reconstruct the data into the dlarray object. For example:

    function Z = predict(layer, X)
        if coder.target('MATLAB')
            Z = doPredict(X);
        else
            if isdlarray(X)
                X1 = extractdata(X);
                Z1 = doPredict(X1);
                Z = dlarray(Z1);
            else
                Z = doPredict(X);
            end
        end
    end
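
As a minimal illustration of these requirements, the following hypothetical layer class (not part of the toolbox) includes the %#codegen pragma and returns a fixed-size output:

% scaleLayer.m -- hypothetical custom layer that scales its input.
classdef scaleLayer < nnet.layer.Layer
    %#codegen
    properties
        Scale   % fixed scaling factor (not learnable)
    end
    methods
        function layer = scaleLayer(scale, name)
            layer.Name = name;
            layer.Scale = scale;
        end
        function Z = predict(layer, X)
            % The output has the same fixed size as the input, as code
            % generation requires.
            Z = layer.Scale .* X;
        end
    end
end

When you generate code for a network that contains custom layers, you can set the MallocMode property to "discrete", as recommended above, for example:

cfg = coder.gpuConfig("mex");
cfg.GpuConfig.MallocMode = "discrete";  % avoids the extra copies of "unified"
cfg.DeepLearningConfig = coder.DeepLearningConfig("cudnn");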

Supported Classes

These classes are supported for code generation by GPU Coder for these target deep learning libraries.
