Grounding DINO

Overview

The Grounding DINO model was proposed in Grounding DINO: Marrying DINO with Grounded Pre-Training for Open-Set Object Detection by Shilong Liu, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Chunyuan Li, Jianwei Yang, Hang Su, Jun Zhu, Lei Zhang. Grounding DINO extends a closed-set object detection model with a text encoder, enabling open-set object detection. The model achieves remarkable results, such as 52.5 AP on COCO zero-shot.

The abstract from the paper is the following:

In this paper, we present an open-set object detector, called Grounding DINO, by marrying Transformer-based detector DINO with grounded pre-training, which can detect arbitrary objects with human inputs such as category names or referring expressions. The key solution of open-set object detection is introducing language to a closed-set detector for open-set concept generalization. To effectively fuse language and vision modalities, we conceptually divide a closed-set detector into three phases and propose a tight fusion solution, which includes a feature enhancer, a language-guided query selection, and a cross-modality decoder for cross-modality fusion. While previous works mainly evaluate open-set object detection on novel categories, we propose to also perform evaluations on referring expression comprehension for objects specified with attributes. Grounding DINO performs remarkably well on all three settings, including benchmarks on COCO, LVIS, ODinW, and RefCOCO/+/g. Grounding DINO achieves a 52.5 AP on the COCO detection zero-shot transfer benchmark, i.e., without any training data from COCO. It sets a new record on the ODinW zero-shot benchmark with a mean 26.1 AP.

Grounding DINO overview. Taken from the original paper.

This model was contributed by EduardoPacheco and nielsr. The original code can be found here.

Usage tips

Here’s how to use the model for zero-shot object detection:

```python
import requests

import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

model_id = "IDEA-Research/grounding-dino-tiny"
device = "cuda"

processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForZeroShotObjectDetection.from_pretrained(model_id).to(device)

image_url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

# Important: text queries need to be lowercased, with each phrase ending in a dot
text = "a cat. a remote control."

inputs = processor(images=image, text=text, return_tensors="pt").to(device)
with torch.no_grad():
    outputs = model(**inputs)

results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    box_threshold=0.4,
    text_threshold=0.3,
    target_sizes=[image.size[::-1]],
)
print(results)
# [{'boxes': tensor([[344.6959,  23.1090, 637.1833, 374.2751],
#                    [ 12.2666,  51.9145, 316.8582, 472.4392],
#                    [ 38.5742,  70.0015, 176.7838, 118.1806]], device='cuda:0'),
#   'labels': ['a cat', 'a cat', 'a remote control'],
#   'scores': tensor([0.4785, 0.4381, 0.4776], device='cuda:0')}]
```

Grounded SAM

One can combine Grounding DINO with the Segment Anything model for text-based mask generation as introduced in Grounded SAM: Assembling Open-World Models for Diverse Visual Tasks. You can refer to this demo notebook 🌍 for details.
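Below is a minimal sketch of this idea using the SAM integration in Transformers: Grounding DINO produces boxes from a text prompt, and those boxes are passed to SAM as box prompts to obtain masks. The facebook/sam-vit-base checkpoint and the thresholds are assumptions chosen for illustration; the demo notebook linked above covers the full Grounded SAM pipeline.

```python
import requests

import torch
from PIL import Image
from transformers import (
    AutoModelForZeroShotObjectDetection,
    AutoProcessor,
    SamModel,
    SamProcessor,
)

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1. Detect boxes with Grounding DINO from a text prompt
dino_processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
dino_model = AutoModelForZeroShotObjectDetection.from_pretrained("IDEA-Research/grounding-dino-tiny").to(device)

image = Image.open(requests.get("http://images.cocodataset.org/val2017/000000039769.jpg", stream=True).raw)
inputs = dino_processor(images=image, text="a cat.", return_tensors="pt").to(device)
with torch.no_grad():
    outputs = dino_model(**inputs)
boxes = dino_processor.post_process_grounded_object_detection(
    outputs, inputs.input_ids, box_threshold=0.4, text_threshold=0.3, target_sizes=[image.size[::-1]]
)[0]["boxes"]

# 2. Prompt SAM with the detected boxes to get segmentation masks
sam_processor = SamProcessor.from_pretrained("facebook/sam-vit-base")
sam_model = SamModel.from_pretrained("facebook/sam-vit-base").to(device)

sam_inputs = sam_processor(image, input_boxes=[boxes.tolist()], return_tensors="pt").to(device)
with torch.no_grad():
    sam_outputs = sam_model(**sam_inputs)
masks = sam_processor.image_processor.post_process_masks(
    sam_outputs.pred_masks.cpu(), sam_inputs["original_sizes"].cpu(), sam_inputs["reshaped_input_sizes"].cpu()
)
```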

Grounded SAM overview. Taken from the original repository.

Resources

A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with Grounding DINO. If you’re interested in submitting a resource to be included here, please feel free to open a Pull Request and we’ll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource.

GroundingDinoImageProcessor

class transformers.GroundingDinoImageProcessor

< source >

( format: Union = <AnnotationFormat.COCO_DETECTION: 'coco_detection'> do_resize: bool = True size: Dict = None resample: Resampling = <Resampling.BILINEAR: 2> do_rescale: bool = True rescale_factor: Union = 0.00392156862745098 do_normalize: bool = True image_mean: Union = None image_std: Union = None do_convert_annotations: Optional = None do_pad: bool = True pad_size: Optional = None **kwargs )

Parameters

Constructs a Grounding DINO image processor.
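As a quick illustration of what the image processor does on its own (in practice you would usually load it from a checkpoint via AutoProcessor), here is a minimal sketch with the default arguments; the printed keys are what padding-enabled defaults are expected to produce:

```python
import requests
from PIL import Image
from transformers import GroundingDinoImageProcessor

# Defaults: resize + rescale + normalize + pad, COCO-style annotation format
image_processor = GroundingDinoImageProcessor()

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

encoding = image_processor(images=image, return_tensors="pt")
print(list(encoding.keys()))           # expected: ['pixel_values', 'pixel_mask']
print(encoding["pixel_values"].shape)  # (batch_size, 3, height, width)
```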

preprocess

< source >

( images: Union annotations: Union = None return_segmentation_masks: bool = None masks_path: Union = None do_resize: Optional = None size: Optional = None resample = None do_rescale: Optional = None rescale_factor: Union = None do_normalize: Optional = None do_convert_annotations: Optional = None image_mean: Union = None image_std: Union = None do_pad: Optional = None format: Union = None return_tensors: Union = None data_format: Union = <ChannelDimension.FIRST: 'channels_first'> input_data_format: Union = None pad_size: Optional = None **kwargs )

Parameters

Preprocess an image or a batch of images so that it can be used by the model.

post_process_object_detection

< source >

( outputs threshold: float = 0.1 target_sizes: Union = None ) → List[Dict]

Parameters

A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.

Converts the raw output of GroundingDinoForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format.

GroundingDinoProcessor

class transformers.GroundingDinoProcessor

< source >

( image_processor tokenizer )

Parameters

Constructs a Grounding DINO processor which wraps a Grounding DINO image processor (adapted from Deformable DETR) and a BERT tokenizer into a single processor.

GroundingDinoProcessor offers all the functionalities of GroundingDinoImageProcessor and AutoTokenizer. See the docstring of __call__() and decode() for more information.
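For example, a single call prepares both modalities for the model; the set of returned keys below is what the wrapped BERT tokenizer and image processor are typically expected to produce:

```python
import requests
from PIL import Image
from transformers import GroundingDinoProcessor

processor = GroundingDinoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "a cat. a remote control."

# Tokenizes the text prompt and preprocesses the image in one call
inputs = processor(images=image, text=text, return_tensors="pt")
print(sorted(inputs.keys()))
# typically: ['attention_mask', 'input_ids', 'pixel_mask', 'pixel_values', 'token_type_ids']
```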

post_process_grounded_object_detection

< source >

( outputs input_ids box_threshold: float = 0.25 text_threshold: float = 0.25 target_sizes: Union = None ) → List[Dict]

Parameters

A list of dictionaries, each dictionary containing the scores, labels and boxes for an image in the batch as predicted by the model.

Converts the raw output of GroundingDinoForObjectDetection into final bounding boxes in (top_left_x, top_left_y, bottom_right_x, bottom_right_y) format and gets the associated text labels.

GroundingDinoConfig

class transformers.GroundingDinoConfig

< source >

( backbone_config = None backbone = None use_pretrained_backbone = False use_timm_backbone = False backbone_kwargs = None text_config = None num_queries = 900 encoder_layers = 6 encoder_ffn_dim = 2048 encoder_attention_heads = 8 decoder_layers = 6 decoder_ffn_dim = 2048 decoder_attention_heads = 8 is_encoder_decoder = True activation_function = 'relu' d_model = 256 dropout = 0.1 attention_dropout = 0.0 activation_dropout = 0.0 auxiliary_loss = False position_embedding_type = 'sine' num_feature_levels = 4 encoder_n_points = 4 decoder_n_points = 4 two_stage = True class_cost = 1.0 bbox_cost = 5.0 giou_cost = 2.0 bbox_loss_coefficient = 5.0 giou_loss_coefficient = 2.0 focal_alpha = 0.25 disable_custom_kernels = False max_text_len = 256 text_enhancer_dropout = 0.0 fusion_droppath = 0.1 fusion_dropout = 0.0 embedding_init_target = True query_dim = 4 decoder_bbox_embed_share = True two_stage_bbox_embed_share = False positional_embedding_temperature = 20 init_std = 0.02 layer_norm_eps = 1e-05 **kwargs )

Parameters

This is the configuration class to store the configuration of a GroundingDinoModel. It is used to instantiate a Grounding DINO model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Grounding DINO IDEA-Research/grounding-dino-tiny architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Examples:

```python
from transformers import GroundingDinoConfig, GroundingDinoModel

# Initializing a Grounding DINO IDEA-Research/grounding-dino-tiny style configuration
configuration = GroundingDinoConfig()

# Initializing a model (with random weights) from that configuration
model = GroundingDinoModel(configuration)

# Accessing the model configuration
configuration = model.config
```
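The configuration arguments listed in the signature above can also be overridden to define a custom, randomly initialized variant; a minimal sketch:

```python
from transformers import GroundingDinoConfig, GroundingDinoModel

# Fewer object queries and decoder layers than the defaults shown above
custom_config = GroundingDinoConfig(num_queries=300, decoder_layers=3)
custom_model = GroundingDinoModel(custom_config)
```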

GroundingDinoModel

class transformers.GroundingDinoModel

< source >

( config: GroundingDinoConfig )

Parameters

The bare Grounding DINO Model (consisting of a backbone and encoder-decoder Transformer) outputting raw hidden-states without any specific head on top.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

< source >

( pixel_values: Tensor input_ids: Tensor token_type_ids: Optional = None attention_mask: Optional = None pixel_mask: Optional = None encoder_outputs = None output_attentions = None output_hidden_states = None return_dict = None ) → transformers.models.grounding_dino.modeling_grounding_dino.GroundingDinoModelOutput or tuple(torch.FloatTensor)

Parameters

Returns

transformers.models.grounding_dino.modeling_grounding_dino.GroundingDinoModelOutput or tuple(torch.FloatTensor)

A transformers.models.grounding_dino.modeling_grounding_dino.GroundingDinoModelOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GroundingDinoConfig) and inputs.

The GroundingDinoModel forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
from transformers import AutoProcessor, AutoModel
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "a cat."

processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
model = AutoModel.from_pretrained("IDEA-Research/grounding-dino-tiny")

inputs = processor(images=image, text=text, return_tensors="pt")
outputs = model(**inputs)

last_hidden_states = outputs.last_hidden_state
list(last_hidden_states.shape)
# [1, 900, 256]
```

GroundingDinoForObjectDetection

class transformers.GroundingDinoForObjectDetection

< source >

( config: GroundingDinoConfig )

Parameters

Grounding DINO Model (consisting of a backbone and encoder-decoder Transformer) with object detection heads on top, for tasks such as COCO detection.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.

forward

< source >

( pixel_values: FloatTensor input_ids: LongTensor token_type_ids: LongTensor = None attention_mask: LongTensor = None pixel_mask: Optional = None encoder_outputs: Union = None output_attentions: Optional = None output_hidden_states: Optional = None return_dict: Optional = None labels: List = None ) → transformers.models.grounding_dino.modeling_grounding_dino.GroundingDinoObjectDetectionOutput or tuple(torch.FloatTensor)

Parameters

Returns

transformers.models.grounding_dino.modeling_grounding_dino.GroundingDinoObjectDetectionOutput or tuple(torch.FloatTensor)

A transformers.models.grounding_dino.modeling_grounding_dino.GroundingDinoObjectDetectionOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (GroundingDinoConfig) and inputs.

The GroundingDinoForObjectDetection forward method overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
from transformers import AutoProcessor, GroundingDinoForObjectDetection
from PIL import Image
import requests
import torch

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "a cat."

processor = AutoProcessor.from_pretrained("IDEA-Research/grounding-dino-tiny")
model = GroundingDinoForObjectDetection.from_pretrained("IDEA-Research/grounding-dino-tiny")

inputs = processor(images=image, text=text, return_tensors="pt")
outputs = model(**inputs)

# Post-process the raw outputs into boxes in (x_min, y_min, x_max, y_max) format
target_sizes = torch.tensor([image.size[::-1]])
results = processor.image_processor.post_process_object_detection(
    outputs, threshold=0.35, target_sizes=target_sizes
)[0]
for score, label, box in zip(results["scores"], results["labels"], results["boxes"]):
    box = [round(i, 1) for i in box.tolist()]
    print(f"Detected {label.item()} with confidence {round(score.item(), 2)} at location {box}")
# Detected 1 with confidence 0.45 at location [344.8, 23.2, 637.4, 373.8]
# Detected 1 with confidence 0.41 at location [11.9, 51.6, 316.6, 472.9]
```
