yolov2Layers - (To be removed) Create YOLO v2 object detection network - MATLAB
(To be removed) Create YOLO v2 object detection network
Syntax
Description
lgraph = yolov2Layers(imageSize,numClasses,anchorBoxes,network,featureLayer) creates a YOLO v2 object detection network and returns it as a LayerGraph object.
lgraph = yolov2Layers(___,"ReorgLayerSource",reorgLayer) adds a reorganization layer to the YOLO v2 network architecture. The reorganization layer takes its input from the network layer specified by reorgLayer.
Examples
Specify the size of the input image for training the network.
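For example, an input size that matches the ResNet-50 base network used later in this example (the specific value is illustrative):
imageSize = [224 224 3]; % [height width channels]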
Specify the number of object classes the network has to detect.
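For example, to detect a single object class (an illustrative value):
numClasses = 1;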
Define the anchor boxes.
anchorBoxes = [1 1;4 6;5 3;9 6];
Specify the pretrained ResNet-50 network as the base network for YOLO v2. To use this pretrained network, you need to install the "Deep Learning Toolbox Model for ResNet-50 Network" support package.
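Assuming the support package is installed, you can load the pretrained network with the resnet50 function:
network = resnet50();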
Analyze the network architecture to view all the network layers.
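For example, using the analyzeNetwork function from Deep Learning Toolbox:
analyzeNetwork(network)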
Specify the network layer to be used for feature extraction. You can choose any layer except the fully connected layer as the feature layer.
featureLayer = 'activation_49_relu';
Create the YOLO v2 object detection network. The network is returned as a LayerGraph object.
lgraph = yolov2Layers(imageSize,numClasses,anchorBoxes,network,featureLayer);
Analyze the YOLO v2 network architecture. The layers succeeding the feature layer are removed. A series of convolution, ReLU, and batch normalization layers along with the YOLO v2 transform and YOLO v2 output layers are added to the feature layer of the base network.
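For example, using analyzeNetwork again on the returned layer graph:
analyzeNetwork(lgraph)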
Input Arguments
Size of an input image, specified as one of these values:
- Two-element vector of the form [H W] - For a grayscale image of size H-by-W
- Three-element vector of the form [H W 3] - For an RGB color image of size H-by-W-by-3
Number of object classes, specified as a positive integer.
Anchor boxes, specified as an M-by-2 matrix defining the size and the number of anchor boxes. Each row of the matrix denotes the size of one anchor box in the form [height width], and M denotes the number of anchor boxes. This input sets the AnchorBoxes property of the output layer.
The size of each anchor box is determined based on the scale and aspect ratio of different object classes present in input training data. Also, the size of each anchor box must be smaller than or equal to the size of the input image. You can use the clustering approach for estimating anchor boxes from the training data. For more information, see Estimate Anchor Boxes from Training Data.
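For example, a minimal sketch using the estimateAnchorBoxes function, assuming trainingData is a datastore (such as a boxLabelDatastore) of ground truth bounding boxes:
% Estimate four anchor boxes from the training data.
numAnchors = 4;
[anchorBoxes,meanIoU] = estimateAnchorBoxes(trainingData,numAnchors);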
Pretrained convolutional neural network, specified as a LayerGraph (Deep Learning Toolbox), DAGNetwork (Deep Learning Toolbox), or SeriesNetwork (Deep Learning Toolbox) object. This pretrained convolutional neural network is used as the base for the YOLO v2 object detection network. For details on pretrained networks in MATLAB®, see Pretrained Deep Neural Networks (Deep Learning Toolbox).
Name of the feature layer, specified as a character vector or a string scalar. Specify the name of one of the deeper layers in the network to use for feature extraction. The features extracted from this layer are given as input to the YOLO v2 object detection subnetwork. You can use the analyzeNetwork (Deep Learning Toolbox) function to view the names of the layers in the input network.
Note
You can specify any network layer except the fully connected layer as the feature layer.
Name of the reorganization layer source, specified as a character vector or a string scalar. Specify the name of one of the deeper layers in the network to use as input to the reorganization layer. You can use the analyzeNetwork (Deep Learning Toolbox) function to view the names of the layers in the input network. The reorganization layer is a pass-through layer that reorganizes the dimensions of the low-level features to facilitate their concatenation with the high-level features.
Note
The input to the reorganization layer must be from any one of the network layers that lie above the feature layer.
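For example, a sketch of this syntax; the layer name "activation_40_relu" is an illustrative ResNet-50 layer that lies above the feature layer:
lgraph = yolov2Layers(imageSize,numClasses,anchorBoxes,network,featureLayer, ...
    "ReorgLayerSource","activation_40_relu");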
Output Arguments
YOLO v2 object detection network, returned as a LayerGraph
object.
Note
The default value of the Normalization property of the image input layer in the returned lgraph object is set to the Normalization property of the base network specified in network.
Algorithms
The yolov2Layers function creates a YOLO v2 network, which represents the network architecture of the YOLO v2 object detector. Use the trainYOLOv2ObjectDetector function to train the YOLO v2 network for object detection. The function returns a LayerGraph object that represents the YOLO v2 object detection network architecture presented in [1] and [2].
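For example, a minimal training sketch, assuming trainingData is a datastore that returns images and box labels and options is a trainingOptions object defined elsewhere:
% Train the YOLO v2 network returned by yolov2Layers.
detector = trainYOLOv2ObjectDetector(trainingData,lgraph,options);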
yolov2Layers
uses a pretrained neural network as the base network to which it adds a detection subnetwork required for creating a YOLO v2 object detection network. Given a base network, yolov2Layers
removes all the layers succeeding the feature layer in the base network and adds the detection subnetwork. The detection subnetwork consists of groups of serially connected convolution, ReLU, and batch normalization layers. The YOLO v2 transform layer and the YOLO v2 output layer are added to the detection subnetwork. If you specify the name-value argument "ReorgLayerSource", the YOLO v2 network concatenates the output of the reorganization layer with the output of the feature layer.
For information on creating a custom YOLO v2 network layer-by-layer, see Create Custom YOLO v2 Object Detection Network.
References
[1] Redmon, Joseph, Santosh Divvala, Ross Girshick, and Ali Farhadi. "You Only Look Once: Unified, Real-Time Object Detection." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788. Las Vegas, NV, 2016.
[2] Redmon, Joseph, and Ali Farhadi. "YOLO9000: Better, Faster, Stronger." In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6517–6525. Honolulu, HI, 2017.
Extended Capabilities
Usage notes and limitations:
For code generation, you must first create a YOLO v2 network by using the yolov2Layers function. Then, use the trainYOLOv2ObjectDetector function on the resulting lgraph object to train the network for object detection. Once the network is trained and evaluated, you can generate code for the yolov2ObjectDetector object using GPU Coder™.
Version History
Introduced in R2019a
The yolov2Layers function will be removed in a future release. When you call the yolov2Layers function, it issues a warning stating that it will be removed. Create a YOLO v2 object detection network by using the yolov2ObjectDetector object instead.
The yolov2Layers function will be removed in a future release. Use the yolov2ObjectDetector function instead. The yolov2ObjectDetector function creates a pretrained or untrained YOLO v2 object detector, and stores the network as a dlnetwork (Deep Learning Toolbox) object.
To update your code:
- Define your network as a dlnetwork object. You can load a pretrained feature extraction network by using the imagePretrainedNetwork (Deep Learning Toolbox) function.
- Select a feature extraction layer.
- Create a yolov2ObjectDetector object, specifying the base network and the name of the feature extraction layer. Also, specify the names of the classes and the anchor boxes as inputs for training the network, as sketched after this list.
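A minimal sketch of the updated workflow; the class name, the feature layer name, and the DetectionNetworkSource argument shown here are assumptions for illustration:
% Load a pretrained backbone as a dlnetwork object.
net = imagePretrainedNetwork("resnet50");

% Classes and anchor boxes to use for training (illustrative values).
classes = "vehicle";
anchorBoxes = [1 1;4 6;5 3;9 6];

% Create the detector, specifying the base network and the name of the
% feature extraction layer (DetectionNetworkSource is assumed to accept
% the layer name).
detector = yolov2ObjectDetector(net,classes,anchorBoxes, ...
    DetectionNetworkSource="activation_49_relu");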
For more information, see Getting Started with YOLO v2.