detect - Detect objects using RTMDet object detector - MATLAB

Detect objects using RTMDet object detector

Since R2024b

Syntax

Description

bboxes = detect(detector,I) detects objects within a single image or a batch of images, I, using an RTMDet object detector, detector. The detect function returns the locations of detected objects in the input image as a set of bounding boxes.

Note

This functionality requires Deep Learning Toolbox™ and the Computer Vision Toolbox™ Model for RTMDet Object Detection. You can install the Computer Vision Toolbox Model for RTMDet Object Detection from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.


[bboxes,scores] = detect(detector,I) returns the class-specific confidence scores for the bounding boxes along with their locations.

[bboxes,scores,labels] = detect(detector,I) also returns a categorical array of labels assigned to the bounding boxes.

detectionResults = detect(detector,ds) returns a table of predicted objects for all the images in the input datastore ds.


[___] = detect(___,roi) detects objects within the rectangular region of interest roi, in addition to any combination of arguments from previous syntaxes.


[___] = detect(___,Name=Value) specifies options using one or more name-value arguments. For example, Threshold=0.5 specifies a detection threshold of 0.5.

Examples


Specify the name of a pretrained RTMDet deep learning network.

modelname = "medium-network-coco";

Create an RTMDet object detector by using the pretrained RTMDet network.

detector = rtmdetObjectDetector(modelname);

Read a test image into the workspace, and detect objects in it by using the pretrained RTMDet object detector with a Threshold value of 0.55.

img = imread("boats.png");
[bboxes,scores,labels] = detect(detector,img,Threshold=0.55);

Display the bounding boxes and predicted class labels of the detected objects.

detectedImg = insertObjectAnnotation(img,"rectangle",bboxes,labels);
figure
imshow(detectedImg)


Load a pretrained RTMDet object detector.

detector = rtmdetObjectDetector("tiny-network-coco");

Read the test datastore, and store it as an image datastore object.

location = fullfile(matlabroot,"toolbox","vision","visiondata","stopSignImages");
imds = imageDatastore(location);

Detect objects in the test datastore. Set the Threshold value to 0.6 and the MiniBatchSize value to 16.

detectionResults = detect(detector,imds,Threshold=0.6,MiniBatchSize=16);

Read an image from the test data set, and extract the corresponding detection results.

num = 4;
I = readimage(imds,num);
bboxes = detectionResults.Boxes{num};
labels = detectionResults.Labels{num};
scores = detectionResults.Scores{num};

Display the table of detection results.

results = table(bboxes,labels,scores)

results=7×3 table

                     bboxes                     labels        scores
    ____________________________________    _____________    _______

     189.3     171.7    40.215    90.083    traffic light    0.70336
         1    470.09    99.577    103.39    car              0.65168
    439.71    486.38    128.54    63.771    car              0.75719
    758.23    502.11    122.35    63.529    car              0.71745
    1476.4    498.05    100.52    63.998    car               0.6535
    395.76    207.24    30.013    76.882    traffic light    0.67125
    53.328    727.17    1550.7    180.05    car              0.61397

Display the bounding boxes and predicted class labels of the detected objects in the image selected from the test datastore.

detectedImg = insertObjectAnnotation(I,"rectangle",bboxes,labels);
figure
imshow(detectedImg)


Load a pretrained RTMDet object detector.

detector = rtmdetObjectDetector("large-network-coco");

Read a test image.

img = imread("parkinglot_left.png");

Specify a rectangular region of interest (ROI) within the test image.

roiBox = [525 120 50 75];

Detect objects within the specified ROI.

[bboxes,scores,labels] = detect(detector,img,roiBox);

Display the bounding boxes and corresponding class labels of the detected objects within the ROI.

img = insertObjectAnnotation(img,"rectangle",roiBox,"ROI",AnnotationColor="yellow");
detectedImg = insertObjectAnnotation(img,"rectangle",bboxes,labels);
figure
imshow(detectedImg)


Input Arguments


Test images, specified as a numeric array of size H-by-W-by-C or H-by-W-by-C-by-B, where H is the image height, W is the width, C is the number of channels, and B is the number of images in the batch. You must specify real and nonsparse grayscale or RGB images.

When the test image size does not match the network input size, the detector resizes the input image to the value of the InputSize property of detector, unless you specify AutoResize as false.

The detector is sensitive to the range of the test image. Therefore, ensure that the test image range is similar to the range of the images used to train the detector. For example, if the detector was trained on uint8 images, rescale this input image to the range [0, 255] by using the im2uint8 or rescale function.

Data Types: uint8 | uint16 | int16 | double | single
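For instance, if a test image is stored as double in the range [0, 1] but the detector was trained on uint8 data in the range [0, 255], convert the image before detection. A minimal sketch, reusing the detector and image file from the earlier example:

```matlab
% Assumes "detector" was trained on uint8 images in the range [0, 255].
img = im2double(imread("boats.png"));  % double image with values in [0, 1]
imgU8 = im2uint8(img);                 % rescale to the range [0, 255]
bboxes = detect(detector,imgU8);
```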

Region of interest (ROI) to search, specified as a vector of the form [x y width height]. The vector specifies the upper-left corner and size of a region, in pixels. If the input data is a datastore, the detect function applies the same ROI to every image.

Note

To specify the ROI to search, the AutoResize value must be true, enabling the function to automatically resize the input test images to the network input size.

Name-Value Arguments


Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Example: detect(detector,I,Threshold=0.25) specifies a detection threshold of 0.25.

Detection threshold, specified as a scalar in the range [0, 1]. The function removes detections that have scores less than this threshold value. To reduce false positives, at the possible expense of missing some detections, increase this value.

Strongest bounding box selection, specified as a numeric or logical 1 (true) or 0 (false).

Minimum region size containing an object, specified as a vector of the form [height width]. Units are in pixels. The minimum region size defines the size of the smallest object that the trained network can detect. When you know the minimum size, you can reduce computation time by setting MinSize to that value.

Maximum region size, specified as a vector of the form [height width]. Units are in pixels. The maximum region size defines the size of the largest object that the trained network can detect.

By default, MaxSize is set to the height and width of the input image I. To reduce computation time, set this value to the known maximum region size in which to detect objects in the input test image.
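When the approximate object size is known in advance, you can constrain the search range with both arguments. A brief sketch; the size bounds are illustrative assumptions, not recommendations:

```matlab
% Restrict detection to objects between 32-by-32 and 400-by-400 pixels.
% Choose bounds that match the object sizes expected in your data.
[bboxes,scores,labels] = detect(detector,img, ...
    MinSize=[32 32],MaxSize=[400 400]);
```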

Minimum batch size, specified as a positive integer. Adjust the MiniBatchSize value to help process a large collection of images. The detect function groups images into minibatches of the specified size and processes them as a batch, which can improve computational efficiency at the cost of increased memory demand. Decrease the minibatch size to use less memory.

Automatic resizing of input images to preserve the aspect ratio, specified as a numeric or logical 1 (true) or 0 (false). When AutoResize is set to true, the detect function resizes images to the nearest InputSize dimension while preserving the aspect ratio. Set AutoResize to logical false or 0 when performing tiling-based inference.
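For tiling-based inference, you can disable automatic resizing and run the detector on fixed-size crops yourself. A minimal sketch under simplifying assumptions: the tiles match the network input size exactly, the image dimensions are multiples of the tile size, and overlap handling and duplicate suppression across tile borders are omitted:

```matlab
tileSize = detector.InputSize(1:2);   % [height width] of the network input
allBoxes = [];
for r = 1:tileSize(1):size(img,1)-tileSize(1)+1
    for c = 1:tileSize(2):size(img,2)-tileSize(2)+1
        tile = img(r:r+tileSize(1)-1, c:c+tileSize(2)-1, :);
        b = detect(detector,tile,AutoResize=false);
        if ~isempty(b)
            % Shift tile-relative [x y w h] boxes back to image coordinates.
            b(:,1) = b(:,1) + c - 1;
            b(:,2) = b(:,2) + r - 1;
            allBoxes = [allBoxes; b]; %#ok<AGROW>
        end
    end
end
```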

Hardware resource on which to run the detector, specified as one of these values:

Performance optimization, specified as one of these options:

Using the Acceleration options "auto" and "mex" can offer performance benefits on subsequent calls with compatible parameters, at the expense of an increased initial run time. Use performance optimization when you plan to call the function multiple times using new input data.

The "mex" option generates and executes a MEX function based on the network and parameters used in the function call. You can have several MEX functions associated with a single network at one time. Clearing the network variable also clears any MEX functions associated with that network.

The "mex" option is available only for input data specified as a numeric array, cell array of numeric arrays, table, or image datastore. No other types of datastore support the "mex" option.

The "mex" option is available only when you are using a GPU. You must also have a C/C++ compiler installed. For setup instructions, see Set Up Compiler (GPU Coder).

"mex" acceleration does not support all layers. For a list of supported layers, see Supported Layers (GPU Coder).
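If you call detect repeatedly on the same network, MEX generation can be amortized across calls. A sketch, assuming a supported GPU and a configured C/C++ compiler:

```matlab
% The first call generates a MEX function; subsequent calls with
% compatible parameters reuse it and avoid the generation overhead.
detector = rtmdetObjectDetector("tiny-network-coco");
img = imread("boats.png");
bboxes = detect(detector,img,Acceleration="mex");
```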

Output Arguments


Locations of objects detected within the input image or images, returned as one of these options:

Detection confidence scores for each bounding box in the range [0, 1], returned as one of these options:

A higher score indicates higher confidence in the detection. The confidence score for each detection is a product of the corresponding objectness score and maximum class probability. The objectness score is the probability that the object in the bounding box belongs to a class in the image. The maximum class probability is the largest probability that a detected object in the bounding box belongs to a particular class.

Labels for bounding boxes, returned as one of these options:

Detection results when the input is a datastore of test images, ds, returned as a table with these columns:

bboxes — Predicted bounding boxes, defined in spatial coordinates as an M-by-4 numeric matrix with rows of the form [x y w h], where:

M is the number of axis-aligned rectangles.
x and y specify the upper-left corner coordinates of the rectangle, in pixels.
w specifies the width of the rectangle, which is its length along the x-axis, in pixels.
h specifies the height of the rectangle, which is its length along the y-axis, in pixels.

scores — Class-specific confidence scores for each bounding box, returned as an M-by-1 numeric vector with values in the range [0, 1].

labels — Predicted object labels assigned to the bounding boxes, returned as an M-by-1 categorical vector. All categorical data returned by the datastore must contain the same categories.
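The per-image results can be read from the table row by row, as in the earlier datastore example. A short sketch that counts detections per image (column names follow that example):

```matlab
% detectionResults is the table returned by detect on a datastore.
for k = 1:height(detectionResults)
    b = detectionResults.Boxes{k};
    fprintf("Image %d: %d detections\n",k,size(b,1));
end
```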

Extended Capabilities

Version History

Introduced in R2024b