DETR (original)

PyTorch

Overview

The DETR model was proposed in End-to-End Object Detection with Transformers by Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov and Sergey Zagoruyko. DETR consists of a convolutional backbone followed by an encoder-decoder Transformer which can be trained end-to-end for object detection. It greatly simplifies models like Faster R-CNN and Mask R-CNN, which rely on hand-designed components such as region proposals, non-maximum suppression and anchor generation. Moreover, DETR can be naturally extended to perform panoptic segmentation, simply by adding a mask head on top of the decoder outputs.

The abstract from the paper is the following:

We present a new method that views object detection as a direct set prediction problem. Our approach streamlines the detection pipeline, effectively removing the need for many hand-designed components like a non-maximum suppression procedure or anchor generation that explicitly encode our prior knowledge about the task. The main ingredients of the new framework, called DEtection TRansformer or DETR, are a set-based global loss that forces unique predictions via bipartite matching, and a transformer encoder-decoder architecture. Given a fixed small set of learned object queries, DETR reasons about the relations of the objects and the global image context to directly output the final set of predictions in parallel. The new model is conceptually simple and does not require a specialized library, unlike many other modern detectors. DETR demonstrates accuracy and run-time performance on par with the well-established and highly-optimized Faster RCNN baseline on the challenging COCO object detection dataset. Moreover, DETR can be easily generalized to produce panoptic segmentation in a unified manner. We show that it significantly outperforms competitive baselines.

This model was contributed by nielsr. The original code can be found here.

How DETR works

Here’s a TLDR explaining how DetrForObjectDetection works:

First, an image is sent through a pre-trained convolutional backbone (in the paper, the authors use ResNet-50/ResNet-101). Let’s assume we also add a batch dimension. This means that the input to the backbone is a tensor of shape (batch_size, 3, height, width), assuming the image has 3 color channels (RGB). The CNN backbone outputs a new lower-resolution feature map, typically of shape (batch_size, 2048, height/32, width/32). This is then projected to match the hidden dimension of the Transformer of DETR, which is 256 by default, using a nn.Conv2d layer. So now, we have a tensor of shape (batch_size, 256, height/32, width/32). Next, the feature map is flattened and transposed to obtain a tensor of shape (batch_size, seq_len, d_model) = (batch_size, width/32*height/32, 256). So a difference with NLP models is that the sequence length is actually longer than usual, but with a smaller d_model (which in NLP is typically 768 or higher).
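As a rough illustration of this shape bookkeeping (a minimal sketch with an arbitrary image size, not the library's actual code):

import torch
from torch import nn

batch_size, height, width = 1, 800, 1067
features = torch.randn(batch_size, 2048, height // 32, width // 32)  # what the CNN backbone would output

projection = nn.Conv2d(2048, 256, kernel_size=1)  # project to the Transformer's d_model
projected = projection(features)                  # (1, 256, 25, 33)
sequence = projected.flatten(2).transpose(1, 2)   # (batch_size, seq_len, d_model)
print(sequence.shape)                             # torch.Size([1, 825, 256])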

Next, this is sent through the encoder, outputting encoder_hidden_states of the same shape (you can consider these as image features). Next, so-called object queries are sent through the decoder. This is a tensor of shape (batch_size, num_queries, d_model), with num_queries typically set to 100 and initialized with zeros. These input embeddings are learnt positional encodings that the authors refer to as object queries, and similarly to the encoder, they are added to the input of each attention layer. Each object query will look for a particular object in the image. The decoder updates these embeddings through multiple self-attention and encoder-decoder attention layers to output decoder_hidden_states of the same shape: (batch_size, num_queries, d_model). Next, two heads are added on top for object detection: a linear layer for classifying each object query into one of the objects or “no object”, and an MLP to predict bounding boxes for each query.
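To make the two heads concrete, here is a minimal sketch following the paper's description (the layer sizes are assumptions, not the library implementation):

import torch
from torch import nn

num_queries, d_model, num_classes = 100, 256, 91

class_head = nn.Linear(d_model, num_classes + 1)  # the extra class is "no object"
box_head = nn.Sequential(                         # 3-layer MLP predicting (center_x, center_y, width, height) in [0, 1]
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, d_model), nn.ReLU(),
    nn.Linear(d_model, 4),
)

decoder_hidden_states = torch.randn(1, num_queries, d_model)
logits = class_head(decoder_hidden_states)               # (1, 100, 92)
pred_boxes = box_head(decoder_hidden_states).sigmoid()   # (1, 100, 4), normalized box coordinates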

The model is trained using a bipartite matching loss: we compare the predicted classes + bounding boxes of each of the N = 100 object queries to the ground truth annotations, padded up to the same length N (so if an image only contains 4 objects, the remaining 96 annotations will just have “no object” as class and “no bounding box” as bounding box). The Hungarian matching algorithm is used to find an optimal one-to-one mapping of each of the N queries to each of the N annotations. Next, standard cross-entropy (for the classes) and a linear combination of the L1 and generalized IoU loss (for the bounding boxes) are used to optimize the parameters of the model.
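Here is a toy illustration of the matching step using SciPy's implementation of the Hungarian algorithm; the cost values are made up, whereas the real matching cost combines the class probability with the L1 and generalized IoU box costs:

import numpy as np
from scipy.optimize import linear_sum_assignment

# rows = object queries, columns = ground-truth objects; lower cost = better match
cost_matrix = np.array([
    [0.2, 0.9, 0.7],
    [0.8, 0.1, 0.6],
    [0.5, 0.7, 0.3],
    [0.9, 0.8, 0.4],  # 4 queries for 3 ground-truth objects: one query ends up unmatched ("no object")
])
query_indices, target_indices = linear_sum_assignment(cost_matrix)
print(list(zip(query_indices.tolist(), target_indices.tolist())))  # [(0, 0), (1, 1), (2, 2)]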

DETR can be naturally extended to perform panoptic segmentation (which unifies semantic segmentation and instance segmentation). DetrForSegmentation adds a segmentation mask head on top of DetrForObjectDetection. The mask head can be trained either jointly, or in a two-step process, where one first trains a DetrForObjectDetection model to detect bounding boxes around both “things” (instances) and “stuff” (background things like trees, roads, sky), then freezes all the weights and trains only the mask head for 25 epochs. Experimentally, these two approaches give similar results. Note that predicting boxes is required for the training to be possible, since the Hungarian matching is computed using distances between boxes.
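A very rough sketch of the second recipe could look as follows; the submodule names used to select the mask head are assumptions about the implementation, so check model.named_parameters() in your version:

from transformers import DetrForSegmentation

# start from detection weights; the segmentation-specific layers are randomly initialized
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50")

for name, param in model.named_parameters():
    # assumed names: "mask_head" and "bbox_attention" are the segmentation-specific modules
    param.requires_grad = ("mask_head" in name) or ("bbox_attention" in name)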

Usage tips

There are three ways to instantiate a DETR model (depending on what you prefer):

Option 1: Instantiate DETR with pre-trained weights for entire model

from transformers import DetrForObjectDetection

model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

Option 2: Instantiate DETR with randomly initialized weights for Transformer, but pre-trained weights for backbone

from transformers import DetrConfig, DetrForObjectDetection

config = DetrConfig()
model = DetrForObjectDetection(config)

Option 3: Instantiate DETR with randomly initialized weights for backbone + Transformer

config = DetrConfig(use_pretrained_backbone=False)
model = DetrForObjectDetection(config)

As a summary, consider the following table:

| Task | Object detection | Instance segmentation | Panoptic segmentation |
|---|---|---|---|
| Description | Predicting bounding boxes and class labels around objects in an image | Predicting masks around objects (i.e. instances) in an image | Predicting masks around both objects (i.e. instances) as well as "stuff" (i.e. background things like trees and roads) in an image |
| Model | DetrForObjectDetection | DetrForSegmentation | DetrForSegmentation |
| Example dataset | COCO detection | COCO detection, COCO panoptic | COCO panoptic |
| Format of annotations to provide to DetrImageProcessor | {'image_id': int, 'annotations': List[Dict]}, each Dict being a COCO object annotation | {'image_id': int, 'annotations': List[Dict]} (in case of COCO detection) or {'file_name': str, 'image_id': int, 'segments_info': List[Dict]} (in case of COCO panoptic) | {'file_name': str, 'image_id': int, 'segments_info': List[Dict]} and masks_path (path to the directory containing the PNG files of the masks) |
| Postprocessing (i.e. converting the output of the model to Pascal VOC format) | post_process() | post_process_segmentation() | post_process_segmentation(), post_process_panoptic() |
| Evaluators | CocoEvaluator with iou_types="bbox" | CocoEvaluator with iou_types="bbox" or "segm" | CocoEvaluator with iou_types="bbox" or "segm", PanopticEvaluator |

In short, one should prepare the data either in COCO detection or COCO panoptic format, then use DetrImageProcessor to create pixel_values, pixel_mask and optional labels, which can then be used to train (or fine-tune) a model. For evaluation, one should first convert the outputs of the model using one of the postprocessing methods of DetrImageProcessor. These can be provided to either CocoEvaluator or PanopticEvaluator, which allow you to calculate metrics like mean Average Precision (mAP) and Panoptic Quality (PQ). The latter objects are implemented in the original repository. See the example notebooks for more info regarding evaluation.
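For example, a minimal sketch of preparing a single training example in COCO detection format (the annotation values below are placeholders):

from transformers import DetrImageProcessor
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# one placeholder COCO object annotation; bbox is (x, y, width, height) in absolute pixels
annotations = {
    "image_id": 39769,
    "annotations": [
        {"image_id": 39769, "category_id": 17, "bbox": [13.0, 52.0, 301.0, 419.0], "area": 126119.0, "iscrowd": 0},
    ],
}

image_processor = DetrImageProcessor.from_pretrained("facebook/detr-resnet-50")
encoding = image_processor(images=image, annotations=annotations, return_tensors="pt")
print(encoding.keys())  # pixel_values, pixel_mask and labels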

Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DETR.

If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

DetrConfig

class transformers.DetrConfig

< source >

( use_timm_backbone = True backbone_config = None num_channels = 3 num_queries = 100 encoder_layers = 6 encoder_ffn_dim = 2048 encoder_attention_heads = 8 decoder_layers = 6 decoder_ffn_dim = 2048 decoder_attention_heads = 8 encoder_layerdrop = 0.0 decoder_layerdrop = 0.0 is_encoder_decoder = True activation_function = 'relu' d_model = 256 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 init_std = 0.02 init_xavier_std = 1.0 auxiliary_loss = False position_embedding_type = 'sine' backbone = 'resnet50' use_pretrained_backbone = True backbone_kwargs = None dilation = False class_cost = 1 bbox_cost = 5 giou_cost = 2 mask_loss_coefficient = 1 dice_loss_coefficient = 1 bbox_loss_coefficient = 5 giou_loss_coefficient = 2 eos_coefficient = 0.1 **kwargs )

Parameters

This is the configuration class to store the configuration of a DetrModel. It is used to instantiate a DETR model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the DETR facebook/detr-resnet-50 architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Examples:

from transformers import DetrConfig, DetrModel

# Initializing a DETR facebook/detr-resnet-50 style configuration
configuration = DetrConfig()

# Initializing a model (with random weights) from the configuration
model = DetrModel(configuration)

# Accessing the model configuration
configuration = model.config

from_backbone_config

< source >

( backbone_config: PretrainedConfig **kwargs ) → DetrConfig

Parameters

An instance of a configuration object

Instantiate a DetrConfig (or a derived class) from a pre-trained backbone model configuration.

DetrImageProcessor

class transformers.DetrImageProcessor

< source >

( format: typing.Union[str, transformers.image_utils.AnnotationFormat] = <AnnotationFormat.COCO_DETECTION: 'coco_detection'> do_resize: bool = True size: typing.Dict[str, int] = None resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = True rescale_factor: typing.Union[int, float] = 0.00392156862745098 do_normalize: bool = True image_mean: typing.Union[float, typing.List[float]] = None image_std: typing.Union[float, typing.List[float]] = None do_convert_annotations: typing.Optional[bool] = None do_pad: bool = True pad_size: typing.Optional[typing.Dict[str, int]] = None **kwargs )

Parameters

Constructs a Detr image processor.
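For instance, a short sketch of loading the processor from a checkpoint while overriding the resize behaviour (the size keys shown are assumptions matching the usual DETR resizing convention):

from transformers import DetrImageProcessor

image_processor = DetrImageProcessor.from_pretrained(
    "facebook/detr-resnet-50",
    size={"shortest_edge": 800, "longest_edge": 1333},  # resize the shortest edge to 800 px, cap the longest at 1333 px
)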

preprocess

< source >

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] annotations: typing.Union[dict[str, typing.Union[int, str, list[dict]]], typing.List[dict[str, typing.Union[int, str, list[dict]]]], NoneType] = None return_segmentation_masks: typing.Optional[bool] = None masks_path: typing.Union[str, pathlib.Path, NoneType] = None do_resize: typing.Optional[bool] = None size: typing.Optional[typing.Dict[str, int]] = None resample = None do_rescale: typing.Optional[bool] = None rescale_factor: typing.Union[int, float, NoneType] = None do_normalize: typing.Optional[bool] = None do_convert_annotations: typing.Optional[bool] = None image_mean: typing.Union[float, typing.List[float], NoneType] = None image_std: typing.Union[float, typing.List[float], NoneType] = None do_pad: typing.Optional[bool] = None format: typing.Union[str, transformers.image_utils.AnnotationFormat, NoneType] = None return_tensors: typing.Union[str, transformers.utils.generic.TensorType, NoneType] = None data_format: typing.Union[str, transformers.image_utils.ChannelDimension] = <ChannelDimension.FIRST: 'channels_first'> input_data_format: typing.Union[str, transformers.image_utils.ChannelDimension, NoneType] = None pad_size: typing.Optional[typing.Dict[str, int]] = None **kwargs )

Parameters

Preprocess an image or a batch of images so that it can be used by the model.

post_process_object_detection

< source >

( outputs threshold: float = 0.5 target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None ) → List[Dict]

Parameters

A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.

Converts the raw output of DetrForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.

post_process_semantic_segmentation

< source >

( outputs target_sizes: typing.List[typing.Tuple[int, int]] = None ) → List[torch.Tensor]

Parameters

Returns

List[torch.Tensor]

A list of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor corresponds to a semantic class id.

Converts the output of DetrForSegmentation into semantic segmentation maps. Only supports PyTorch.
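A hedged usage sketch, reusing the panoptic checkpoint from the example further down this page:

import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, DetrForSegmentation

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

with torch.no_grad():
    outputs = model(**image_processor(images=image, return_tensors="pt"))

# one (height, width) map per image, with a semantic class id per pixel
semantic_maps = image_processor.post_process_semantic_segmentation(outputs, target_sizes=[image.size[::-1]])
print(semantic_maps[0].shape)  # torch.Size([480, 640])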

post_process_instance_segmentation

< source >

( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 target_sizes: typing.Optional[typing.List[typing.Tuple[int, int]]] = None return_coco_annotation: typing.Optional[bool] = False ) → List[Dict]

Parameters

A list of dictionaries, one per image, each dictionary containing two keys:

Converts the output of DetrForSegmentation into instance segmentation predictions. Only supports PyTorch.

post_process_panoptic_segmentation

< source >

( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 label_ids_to_fuse: typing.Optional[typing.Set[int]] = None target_sizes: typing.Optional[typing.List[typing.Tuple[int, int]]] = None ) → List[Dict]

Parameters

A list of dictionaries, one per image, each dictionary containing two keys:

Converts the output of DetrForSegmentation into image panoptic segmentation predictions. Only supports PyTorch.

DetrImageProcessorFast

class transformers.DetrImageProcessorFast

< source >

( **kwargs: typing_extensions.Unpack[transformers.models.detr.image_processing_detr_fast.DetrFastImageProcessorKwargs] )

Parameters

Constructs a fast Detr image processor.
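A short sketch of requesting the fast variant (assuming a transformers version in which the fast, torchvision-based processor is available):

from transformers import AutoImageProcessor

image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50", use_fast=True)
print(type(image_processor).__name__)  # DetrImageProcessorFast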

preprocess

< source >

( images: typing.Union[ForwardRef('PIL.Image.Image'), numpy.ndarray, ForwardRef('torch.Tensor'), list['PIL.Image.Image'], list[numpy.ndarray], list['torch.Tensor']] annotations: typing.Union[dict[str, typing.Union[int, str, list[dict]]], typing.List[dict[str, typing.Union[int, str, list[dict]]]], NoneType] = None masks_path: typing.Union[str, pathlib.Path, NoneType] = None **kwargs: typing_extensions.Unpack[transformers.models.detr.image_processing_detr_fast.DetrFastImageProcessorKwargs] )

Parameters

Preprocess an image or batch of images.

post_process_object_detection

< source >

( outputs threshold: float = 0.5 target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None ) → List[Dict]

Parameters

A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.

Converts the raw output of DetrForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.

post_process_semantic_segmentation

< source >

( outputs target_sizes: typing.List[typing.Tuple[int, int]] = None ) → List[torch.Tensor]

Parameters

Returns

List[torch.Tensor]

A list of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor corresponds to a semantic class id.

Converts the output of DetrForSegmentation into semantic segmentation maps. Only supports PyTorch.

post_process_instance_segmentation

< source >

( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 target_sizes: typing.Optional[typing.List[typing.Tuple[int, int]]] = None return_coco_annotation: typing.Optional[bool] = False ) → List[Dict]

Parameters

A list of dictionaries, one per image, each dictionary containing two keys:

Converts the output of DetrForSegmentation into instance segmentation predictions. Only supports PyTorch.

post_process_panoptic_segmentation

< source >

( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 label_ids_to_fuse: typing.Optional[typing.Set[int]] = None target_sizes: typing.Optional[typing.List[typing.Tuple[int, int]]] = None ) → List[Dict]

Parameters

A list of dictionaries, one per image, each dictionary containing two keys:

Converts the output of DetrForSegmentation into image panoptic segmentation predictions. Only supports PyTorch.

DetrFeatureExtractor

class transformers.DetrFeatureExtractor

__call__

Preprocess an image or a batch of images.

post_process_object_detection

( outputs threshold: float = 0.5 target_sizes: typing.Union[transformers.utils.generic.TensorType, typing.List[typing.Tuple]] = None ) → List[Dict]

Parameters

A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.

Converts the raw output of DetrForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format. Only supports PyTorch.

post_process_semantic_segmentation

( outputs target_sizes: typing.List[typing.Tuple[int, int]] = None ) → List[torch.Tensor]

Parameters

A list of length batch_size, where each item is a semantic segmentation map of shape (height, width) corresponding to the target_sizes entry (if target_sizes is specified). Each entry of each torch.Tensor corresponds to a semantic class id.

Converts the output of DetrForSegmentation into semantic segmentation maps. Only supports PyTorch.

post_process_instance_segmentation

( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 target_sizes: typing.Optional[typing.List[typing.Tuple[int, int]]] = None return_coco_annotation: typing.Optional[bool] = False ) → List[Dict]

Parameters

A list of dictionaries, one per image, each dictionary containing two keys:

Converts the output of DetrForSegmentation into instance segmentation predictions. Only supports PyTorch.

post_process_panoptic_segmentation

( outputs threshold: float = 0.5 mask_threshold: float = 0.5 overlap_mask_area_threshold: float = 0.8 label_ids_to_fuse: typing.Optional[typing.Set[int]] = None target_sizes: typing.Optional[typing.List[typing.Tuple[int, int]]] = None ) → List[Dict]

Parameters

A list of dictionaries, one per image, each dictionary containing two keys:

Converts the output of DetrForSegmentation into image panoptic segmentation predictions. Only supports PyTorch.

DETR specific outputs

class transformers.models.detr.modeling_detr.DetrModelOutput

< source >

( last_hidden_state: typing.Optional[torch.FloatTensor] = None past_key_values: typing.Optional[typing.Tuple[typing.Tuple[torch.FloatTensor]]] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor, ...]] = None intermediate_hidden_states: typing.Optional[torch.FloatTensor] = None )

Parameters

Base class for outputs of the DETR encoder-decoder model. This class adds one attribute to Seq2SeqModelOutput, namely an optional stack of intermediate decoder activations, i.e. the output of each decoder layer, each of them having gone through a layernorm. This is useful when training the model with auxiliary decoding losses.
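For instance, a small sketch of obtaining these intermediate activations, assuming that enabling auxiliary_loss in the configuration (see DetrConfig above) makes the model return them:

import torch
from transformers import DetrConfig, DetrModel

# randomly initialized Transformer with a pre-trained backbone (as in Option 2 of the usage tips),
# with auxiliary decoding losses enabled
config = DetrConfig(auxiliary_loss=True)
model = DetrModel(config)

pixel_values = torch.randn(1, 3, 800, 800)
with torch.no_grad():
    outputs = model(pixel_values=pixel_values)

print(outputs.intermediate_hidden_states.shape)  # one stack of (num_queries, d_model) activations per decoder layer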

class transformers.models.detr.modeling_detr.DetrObjectDetectionOutput

< source >

( loss: typing.Optional[torch.FloatTensor] = None loss_dict: typing.Optional[typing.Dict] = None logits: typing.Optional[torch.FloatTensor] = None pred_boxes: typing.Optional[torch.FloatTensor] = None auxiliary_outputs: typing.Optional[typing.List[typing.Dict]] = None last_hidden_state: typing.Optional[torch.FloatTensor] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )

Parameters

Output type of DetrForObjectDetection.

class transformers.models.detr.modeling_detr.DetrSegmentationOutput

< source >

( loss: typing.Optional[torch.FloatTensor] = None loss_dict: typing.Optional[typing.Dict] = None logits: typing.Optional[torch.FloatTensor] = None pred_boxes: typing.Optional[torch.FloatTensor] = None pred_masks: typing.Optional[torch.FloatTensor] = None auxiliary_outputs: typing.Optional[typing.List[typing.Dict]] = None last_hidden_state: typing.Optional[torch.FloatTensor] = None decoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None decoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None cross_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_last_hidden_state: typing.Optional[torch.FloatTensor] = None encoder_hidden_states: typing.Optional[typing.Tuple[torch.FloatTensor]] = None encoder_attentions: typing.Optional[typing.Tuple[torch.FloatTensor]] = None )

Parameters

Output type of DetrForSegmentation.

DetrModel

class transformers.DetrModel

< source >

( config: DetrConfig )

Parameters

The bare DETR Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

< source >

( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.FloatTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.detr.modeling_detr.DetrModelOutput or tuple(torch.FloatTensor)

Parameters

A transformers.models.detr.modeling_detr.DetrModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DetrConfig) and inputs.

The DetrModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

from transformers import AutoImageProcessor, DetrModel
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrModel.from_pretrained("facebook/detr-resnet-50")

# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")

# forward pass
outputs = model(**inputs)

# the last hidden states are the final query embeddings of the Transformer decoder,
# of shape (batch_size, num_queries, hidden_size)
last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
[1, 100, 256]

DetrForObjectDetection

class transformers.DetrForObjectDetection

< source >

( config: DetrConfig )

Parameters

DETR Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks such as COCO detection.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

< source >

( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.FloatTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[typing.List[dict]] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.detr.modeling_detr.DetrObjectDetectionOutput or tuple(torch.FloatTensor)

Parameters

A transformers.models.detr.modeling_detr.DetrObjectDetectionOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DetrConfig) and inputs.

The DetrForObjectDetection forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

from transformers import AutoImageProcessor, DetrForObjectDetection
import torch
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")
model = DetrForObjectDetection.from_pretrained("facebook/detr-resnet-50")

inputs = image_processor(images=image, return_tensors="pt")
outputs = model(**inputs)

# convert outputs (bounding boxes and class logits) to (xmin, ymin, xmax, ymax) in the original image size,
# keeping only detections with score > 0.9
target_sizes = torch.tensor([image.size[::-1]])
results = image_processor.post_process_object_detection(outputs, threshold=0.9, target_sizes=target_sizes)[0]

for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 2) for i in box.tolist()]
    print(
        f"Detected {model.config.id2label[label.item()]} with confidence "
        f"{round(score.item(), 3)} at location {box}"
    )
Detected remote with confidence 0.998 at location [40.16, 70.81, 175.55, 117.98]
Detected remote with confidence 0.996 at location [333.24, 72.55, 368.33, 187.66]
Detected couch with confidence 0.995 at location [-0.02, 1.15, 639.73, 473.76]
Detected cat with confidence 0.999 at location [13.24, 52.05, 314.02, 470.93]
Detected cat with confidence 0.999 at location [345.4, 23.85, 640.37, 368.72]

DetrForSegmentation

class transformers.DetrForSegmentation

< source >

( config: DetrConfig )

Parameters

DETR Model (consisting of a backbone and encoder-decoder Transformer) with a segmentation head on top, for tasks such as COCO panoptic.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads etc.)

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

< source >

( pixel_values: FloatTensor pixel_mask: typing.Optional[torch.LongTensor] = None decoder_attention_mask: typing.Optional[torch.FloatTensor] = None encoder_outputs: typing.Optional[torch.FloatTensor] = None inputs_embeds: typing.Optional[torch.FloatTensor] = None decoder_inputs_embeds: typing.Optional[torch.FloatTensor] = None labels: typing.Optional[typing.List[dict]] = None output_attentions: typing.Optional[bool] = None output_hidden_states: typing.Optional[bool] = None return_dict: typing.Optional[bool] = None ) → transformers.models.detr.modeling_detr.DetrSegmentationOutput or tuple(torch.FloatTensor)

Parameters

A transformers.models.detr.modeling_detr.DetrSegmentationOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DetrConfig) and inputs.

The DetrForSegmentation forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre and post processing steps while the latter silently ignores them.

Examples:

import io
import requests
from PIL import Image
import torch
import numpy

from transformers import AutoImageProcessor, DetrForSegmentation
from transformers.image_transforms import rgb_to_id

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50-panoptic")
model = DetrForSegmentation.from_pretrained("facebook/detr-resnet-50-panoptic")

# prepare image for the model
inputs = image_processor(images=image, return_tensors="pt")

# forward pass
outputs = model(**inputs)

# use the `post_process_panoptic_segmentation` method of the image processor to retrieve post-processed panoptic segmentation maps
result = image_processor.post_process_panoptic_segmentation(outputs, target_sizes=[(300, 500)])

# a tensor of shape (height, width) in which each value denotes a segment id
panoptic_seg = result[0]["segmentation"]

# information about each segment, such as its label id and score
panoptic_segments_info = result[0]["segments_info"]
