detect - Detect objects using YOLOX object detector - MATLAB
Detect objects using YOLOX object detector
Since R2023b
Syntax
Description
bboxes = detect(detector,I) detects objects within a single image or a batch of images, I, using a YOLOX object detector, detector. The detect function returns the locations of objects detected in the input image as a set of bounding boxes.
Note
This functionality requires Deep Learning Toolbox™ and the Automated Visual Inspection Library for Computer Vision Toolbox™. You can install the Automated Visual Inspection Library for Computer Vision Toolbox from Add-On Explorer. For more information about installing add-ons, see Get and Manage Add-Ons.
[bboxes,scores] = detect(detector,I) also returns the class-specific confidence scores for each bounding box.
[bboxes,scores,labels] = detect(detector,I) also returns a categorical array of labels assigned to the bounding boxes. You must define the labels for the object classes during training.
[bboxes,scores,labels,info] = detect(detector,I) also returns information about the class probabilities and objectness scores for each detection.
detectionResults = detect(detector,ds) returns object detection predictions for all the images returned by the read function of the input datastore ds, as a table.
[___] = detect(___,roi) detects objects within the rectangular search region roi, in addition to any combination of arguments from previous syntaxes.
[___] = detect(___,Name=Value) specifies options using one or more name-value arguments, in addition to any combination of arguments from previous syntaxes. For example, Threshold=0.25 specifies a detection threshold of 0.25.
Examples
Specify the name of a pretrained YOLOX deep learning network.
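For example, this sketch uses "tiny-coco", one of the pretrained networks available for yoloxObjectDetector:

name = "tiny-coco";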
Create a YOLOX object detector by using the pretrained YOLOX network.
detector = yoloxObjectDetector(name);
Detect objects in a test image by using the pretrained YOLOX object detector.
img = imread("tima.png"); [bboxes,scores,labels] = detect(detector,img,Threshold=0.6)
bboxes = 1×4 single row vector
185.1392 255.8597 119.6875 217.3187
Display the detection results.
detectedImg = insertObjectAnnotation(img,"Rectangle",bboxes,labels);
figure
imshow(detectedImg)
Load a pretrained YOLOX object detector.
detector = yoloxObjectDetector("small-coco");
Specify the location of test data, then create an image datastore that reads the test data.
location = fullfile(matlabroot,"toolbox","vision","visiondata","vehicles");
imds = imageDatastore(location);
Detect objects in the datastore. Set the Threshold name-value argument to 0.4 and the MiniBatchSize name-value argument to 32.
detectionResults = detect(detector,imds,Threshold=0.4,MiniBatchSize=32);
Read an image from the test data set and extract the corresponding detection results.
num = 20;
I = readimage(imds,num);
bboxes = detectionResults.Boxes{num};
labels = detectionResults.Labels{num};
scores = detectionResults.Scores{num}
scores = 5×1 single column vector
0.6125
0.7789
0.7595
0.6787
0.4322
Perform non-maximal suppression to select the strongest bounding boxes from the overlapping clusters. Set the OverlapThreshold name-value argument to 0.5.
[bboxes,scores,labels] = selectStrongestBboxMulticlass(bboxes, ...
    scores,labels,OverlapThreshold=0.5);
Display the detection results.
results = table(bboxes,labels,scores)
results=5×3 table
              bboxes                labels    scores 
    ______________________________  ______  _______
2.0755 69.251 16.852 9.0757 car 0.61246
19.219 70.205 21.257 10.847 car 0.77888
75.165 65.773 25.769 23.227 car 0.75951
96.479 54.215 16.175 24.654 bus 0.67867
1 104.91 225.57 22.663 car 0.43216
detectedImg = insertObjectAnnotation(I,"Rectangle",bboxes,labels);
figure
imshow(detectedImg)
Load a pretrained YOLOX object detector.
detector = yoloxObjectDetector("small-coco");
Read a test image.
img = imread("aruba.png");
Specify a region of interest (ROI) within the test image.
roiBox = [250 180 300 250];
Detect objects within the specified ROI.
[bboxes,scores,labels] = detect(detector,img,roiBox,Threshold=0.55);
Display the ROI and the detection results.
img = insertObjectAnnotation(img,"Rectangle",roiBox,"ROI",AnnotationColor="yellow");
detectedImg = insertObjectAnnotation(img,"Rectangle",bboxes,labels);
figure
imshow(detectedImg)
Input Arguments
Test images, specified as a numeric array of size _H_-by-_W_-by-_C_ or _H_-by-_W_-by-_C_-by-_B_. You must specify real and nonsparse grayscale or RGB images.
- _H_ — Height of the input images.
- _W_ — Width of the input images.
- _C_ — Number of channels. The channel size of each image must be equal to the input channel size of the network. For example, for grayscale images, _C_ must be 1. For RGB color images, it must be 3.
- _B_ — Number of test images in the batch. The detect function computes the object detection results for each test image in the batch, as shown in the batch example below.
When the test image size does not match the network input size, the detector resizes the input image to the value of the InputSize property of detector, unless you specify AutoResize as false.
Data Types: uint8 | uint16 | int16 | double | single
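For example, a minimal sketch of batch detection, assuming two same-size RGB test images (the file names here are hypothetical):

% Stack two same-size RGB images along the fourth dimension to form an
% H-by-W-by-3-by-2 batch.
imgA = imread("imageA.png");   % hypothetical file name
imgB = imread("imageB.png");   % hypothetical file name
batch = cat(4,imgA,imgB);

% For batch input, each output is a B-by-1 cell array.
[bboxes,scores,labels] = detect(detector,batch);
bboxesFirst = bboxes{1};       % M-by-4 matrix for the first image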
Region of interest (ROI) to search, specified as a vector of the form [_x_ _y_ _width_ _height_]. The vector specifies the upper-left corner and size of a region, in pixels. If the input data is a datastore, the detect function applies the same ROI to every image.
Note
You can specify the ROI to search only when the detect function automatically resizes the input test images to the network input size. To use roi, leave AutoResize set to its default value of true.
Name-Value Arguments
Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.
Example: detect(detector,I,Threshold=0.25) specifies a detection threshold of 0.25.
Detection threshold, specified as a scalar in the range [0, 1]. The function removes detections that have scores less than this threshold value. To reduce false positives, increase this value, at the possible expense of missing some objects.
Strongest bounding box selection for each detected object, specified as a numeric or logical 1 (true) or 0 (false).

- true — Return the strongest bounding box for each object. The detect function calls the selectStrongestBboxMulticlass function, which uses nonmaximal suppression to eliminate overlapping bounding boxes based on their confidence scores. By default, the detect function uses this call to selectStrongestBboxMulticlass:

  selectStrongestBboxMulticlass(bboxes,scores, ...
      RatioType="Union", ...
      OverlapThreshold=0.45);

- false — Return all the detected bounding boxes. You can write a custom function to eliminate overlapping bounding boxes, as shown in the sketch after this list.
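For example, a minimal sketch of applying custom suppression settings instead of the default call, assuming detector and img from the earlier examples:

% Return all raw detections, then apply a stricter overlap threshold.
[bboxes,scores,labels] = detect(detector,img,SelectStrongest=false);
[bboxes,scores,labels] = selectStrongestBboxMulticlass(bboxes,scores,labels, ...
    RatioType="Min",OverlapThreshold=0.3);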
Minimum region size containing an object, specified as a vector of the form [_height_ _width_]. Units are in pixels. The minimum region size defines the size of the smallest object that the trained network can detect. When the minimum size is known, you can reduce computation time by setting MinSize to that value.
Maximum region size, specified as a vector of the form [_height_ _width_]. Units are in pixels. The maximum region size defines the size of the largest object that the trained network can detect.

By default, MaxSize is set to the height and width of the input image I. To reduce computation time, set this value to the known maximum region size for the objects that can be detected in the input test image.
Mini-batch size, specified as a positive integer. Adjust the MiniBatchSize value to help process a large collection of images. The detect function groups images into mini-batches of the specified size and processes them as a batch, which can improve computational efficiency at the cost of increased memory demand. Decrease the mini-batch size to use less memory.
Automatic resizing of the input images to preserve the aspect ratio, specified as a numeric or logical 1 (true) or 0 (false). When AutoResize is set to 1 (true), the detect function resizes images to the nearest InputSize while preserving the aspect ratio. Set AutoResize to 0 (false) when performing tiling-based training or inference at the full test image size.
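For example, a minimal sketch of running inference at the full test image size, assuming detector and img from the earlier examples:

% Disable automatic resizing; the image is processed at its original size.
[bboxes,scores,labels] = detect(detector,img,AutoResize=false);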
Hardware resource on which to run the detector, specified as one of these values:

- "auto" — Use a GPU if Parallel Computing Toolbox™ is installed and a supported GPU device is available. Otherwise, use the CPU.
- "gpu" — Use the GPU. To use a GPU, you must have Parallel Computing Toolbox and a CUDA®-enabled NVIDIA® GPU. If a suitable GPU is not available, the function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).
- "cpu" — Use the CPU.
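For example, to force CPU execution, assuming detector and img from the earlier examples:

[bboxes,scores,labels] = detect(detector,img,ExecutionEnvironment="cpu");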
Performance optimization, specified as one of these options:

- "auto" — Automatically apply a number of compatible optimizations suitable for the input network and hardware resource.
- "mex" — Compile and execute a MEX function. This option is available only when using a GPU. Using a GPU requires Parallel Computing Toolbox and a CUDA-enabled NVIDIA GPU. If Parallel Computing Toolbox or a suitable GPU is not available, the detect function returns an error. For information about the supported compute capabilities, see GPU Computing Requirements (Parallel Computing Toolbox).
- "none" — Disable all acceleration.
Using the Acceleration options "auto" and "mex" can offer performance benefits on subsequent calls with compatible parameters, at the expense of an increased initial run time. Use performance optimization when you plan to call the function multiple times using new input data.
The "mex"
option generates and executes a MEX function based on the network and parameters used in the function call. You can have several MEX functions associated with a single network at one time. Clearing the network variable also clears any MEX functions associated with that network.
The "mex"
option is available only for input data specified as a numeric array, cell array of numeric arrays, table, or image datastore. No other types of datastore support the"mex"
option.
The "mex"
option is available only when you are using a GPU. You must also have a C/C++ compiler installed. For setup instructions, see Set Up Compiler (GPU Coder).
"mex"
acceleration does not support all layers. For a list of supported layers, see Supported Layers (GPU Coder).
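For example, a minimal sketch of reusing a compiled MEX function across calls, assuming a supported GPU, a configured C/C++ compiler, and detector and img from the earlier examples:

% The first call compiles a MEX function for this network and input.
[bboxes,scores] = detect(detector,img,Acceleration="mex");

% Subsequent calls with compatible inputs reuse the compiled MEX
% function and typically run faster.
[bboxes2,scores2] = detect(detector,img,Acceleration="mex");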
Output Arguments
Locations of objects detected within the input image or images, returned as one of these options:
- _M_-by-4 matrix — The input is a single test image. M is the number of bounding boxes detected in an image.
- _B_-by-1 cell array — The input is a batch of images, where B is the number of test images in the batch. Each cell in the array contains an _M_-by-4 matrix specifying the detected bounding boxes.
Detection confidence scores for each bounding box, returned as one of these options:
- _M_-by-1 numeric vector — The input is a single test image. M is the number of bounding boxes detected in the image.
- _B_-by-1 cell array — The input is a batch of test images, where _B_ is the number of test images in the batch. Each cell in the array contains an _M_-by-1 numeric vector, where each element indicates the detection score for a bounding box in the corresponding image.
A higher score indicates higher confidence in the detection. The confidence score for each detection is a product of the corresponding objectness score and maximum class probability. The objectness score is the probability that the object in the bounding box belongs to a class in the image. The maximum class probability is the largest probability that a detected object in the bounding box belongs to a particular class.
Labels for bounding boxes, returned as one of these options:
- _M_-by-1 categorical vector — The input is a single test image. _M_ is the number of bounding boxes detected in the image.
- _B_-by-1 cell array — The input is a batch of test images, where _B_ is the number of test images in the batch. Each cell in the array contains an _M_-by-1 categorical vector containing the names of the object classes.
Detection results when the input is a datastore of test images, ds, returned as a table with these columns:
- bboxes — Predicted bounding boxes, defined in spatial coordinates as an _M_-by-4 numeric matrix with rows of the form [_x_ _y_ _w_ _h_], where:
  - _M_ is the number of axis-aligned rectangles.
  - _x_ and _y_ specify the upper-left corner of the rectangle.
  - _w_ specifies the width of the rectangle, which is its length along the _x_-axis.
  - _h_ specifies the height of the rectangle, which is its length along the _y_-axis.
- scores — Class-specific confidence scores in the range [0, 1] for each bounding box, returned as an _M_-by-1 numeric vector.
- labels — Predicted object labels assigned to the bounding boxes, returned as an _M_-by-1 categorical vector. All categorical data returned by the datastore must contain the same categories.
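For example, a common follow-up is to score this table against ground truth by using the evaluateObjectDetection function; a minimal sketch, where groundTruthData is an assumed datastore of ground-truth boxes and labels in the same image order as the test datastore:

% Run the detector over the datastore, then evaluate the predictions.
detectionResults = detect(detector,imds,Threshold=0.4);
metrics = evaluateObjectDetection(detectionResults,groundTruthData);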
Class probabilities and objectness scores of the detections, returned as a structure array with these fields:

- ClassProbabilities — Class probabilities for each of the detections, returned as a _B_-by-1 cell array. _B_ is the number of images in the input batch of images, I. Each cell in the array contains the class probabilities as an _M_-by-_N_ numeric matrix, where _M_ is the number of bounding boxes and _N_ is the number of classes. Each class probability is a numeric scalar, indicating the probability that the detected object in the bounding box belongs to a class in the image.
- ObjectnessScores — Objectness scores for each of the detections, returned as a _B_-by-1 cell array. _B_ is the number of images in the input batch of images, I. Each cell in the array contains the objectness score for each bounding box as an _M_-by-1 numeric vector, where _M_ is the number of bounding boxes. Each objectness score is a numeric scalar, indicating the probability that the bounding box contains an object belonging to one of the classes in the image.
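For example, a minimal sketch of inspecting these fields and relating them to the scores output, assuming detector and img from the earlier examples and that the cell layout above also applies when B is 1:

[bboxes,scores,labels,info] = detect(detector,img);

% For a single test image, B is 1, so index the first cell of each field.
classProbs = info.ClassProbabilities{1};   % M-by-N matrix of class probabilities
objScores  = info.ObjectnessScores{1};     % M-by-1 vector of objectness scores

% Each confidence score is the objectness score multiplied by the
% maximum class probability for that bounding box.
recomputed = objScores .* max(classProbs,[],2);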
Extended Capabilities
C/C++ Code Generation
Usage notes and limitations:

- To prepare a yoloxObjectDetector object for CPU code generation, use vision.loadYOLOXObjectDetector.
- The roi argument to the detect method must be a code generation constant (coder.const()) and a 1-by-4 vector.
- The AutoResize argument to the detect method must be a code generation constant (coder.const()).
- Only the Threshold, SelectStrongest, MinSize, MaxSize, MiniBatchSize, and AutoResize name-value arguments for detect are supported.
GPU Code Generation
Usage notes and limitations:

- To prepare a yoloxObjectDetector object for GPU code generation, use vision.loadYOLOXObjectDetector.
- The roi argument to the detect method must be a code generation constant (coder.const()) and a 1-by-4 vector.
- The AutoResize argument to the detect method must be a code generation constant (coder.const()).
- Only the Threshold, SelectStrongest, MinSize, MaxSize, MiniBatchSize, and AutoResize name-value arguments for detect are supported.
Version History
Introduced in R2023b
Specify the info output argument to return information about the class probability and objectness score for each detection.