Depth Anything (original)

PyTorch

Depth Anything is designed to be a foundation model for monocular depth estimation (MDE). It is jointly trained on labeled images and ~62M unlabeled images to enlarge the effective dataset. It uses a pretrained DINOv2 model as the image encoder to inherit its rich semantic priors, and DPT as the decoder. A teacher model trained on the labeled images annotates the unlabeled images to produce pseudo-labels, and the student model is then trained on a combination of the pseudo-labels and the labeled images. To improve the student model's performance, strong perturbations are applied to the unlabeled images, challenging the student to learn additional visual knowledge from them.
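The teacher-student scheme can be summarized with a minimal, hypothetical PyTorch sketch. The Conv2d stand-ins, the strong_perturbation helper, and the L1 loss below are illustrative assumptions, not the actual training code, which pairs a DINOv2 encoder with a DPT decoder and uses the paper's own augmentations and losses:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-ins for the teacher and student depth networks.
teacher = nn.Conv2d(3, 1, kernel_size=3, padding=1).eval()
student = nn.Conv2d(3, 1, kernel_size=3, padding=1)

def strong_perturbation(images):
    # Placeholder for the strong color/spatial augmentations applied to unlabeled images.
    noise = 0.1 * torch.randn_like(images)
    return (images + noise).clamp(0.0, 1.0)

unlabeled = torch.rand(4, 3, 224, 224)        # a batch of unlabeled images
with torch.no_grad():
    pseudo_depth = teacher(unlabeled)         # teacher annotates them with pseudo-labels

# The student sees a strongly perturbed view but must match the teacher's pseudo-labels.
student_pred = student(strong_perturbation(unlabeled))
loss = F.l1_loss(student_pred, pseudo_depth)
loss.backward()
```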

You can find all the original Depth Anything checkpoints under the Depth Anything collection.


The example below demonstrates how to obtain a depth map with Pipeline or the AutoModel class.

```python
import torch
from transformers import pipeline

pipe = pipeline(
    task="depth-estimation",
    model="LiheYoung/depth-anything-base-hf",
    torch_dtype=torch.bfloat16,
    device=0,
)
pipe("http://images.cocodataset.org/val2017/000000039769.jpg")["depth"]
```
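For the AutoModel path, here is a minimal sketch reusing the same base checkpoint; the post-processing call mirrors the forward example further down the page:

```python
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForDepthEstimation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained("LiheYoung/depth-anything-base-hf")
model = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-base-hf")

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Resize the raw prediction back to the input image resolution
depth = processor.post_process_depth_estimation(
    outputs, target_sizes=[(image.height, image.width)]
)[0]["predicted_depth"]
```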

Notes

DepthAnythingConfig

class transformers.DepthAnythingConfig


( backbone_config = None, backbone = None, use_pretrained_backbone = False, use_timm_backbone = False, backbone_kwargs = None, patch_size = 14, initializer_range = 0.02, reassemble_hidden_size = 384, reassemble_factors = [4, 2, 1, 0.5], neck_hidden_sizes = [48, 96, 192, 384], fusion_hidden_size = 64, head_in_index = -1, head_hidden_size = 32, depth_estimation_type = 'relative', max_depth = None, **kwargs )


This is the configuration class to store the configuration of a DepthAnythingModel. It is used to instantiate a Depth Anything model according to the specified arguments, defining the model architecture. Instantiating a configuration with the defaults will yield a similar configuration to that of the Depth Anything LiheYoung/depth-anything-small-hf architecture.

Configuration objects inherit from PretrainedConfig and can be used to control the model outputs. Read the documentation from PretrainedConfig for more information.

Example:

```python
from transformers import DepthAnythingConfig, DepthAnythingForDepthEstimation

# Initializing a DepthAnything small style configuration
configuration = DepthAnythingConfig()

# Instantiating a model (with random weights) from the configuration
model = DepthAnythingForDepthEstimation(configuration)

# Accessing the model configuration
configuration = model.config
```
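The defaults above configure a relative-depth head. A minimal variation, assuming you want metric depth instead; the max_depth of 80.0 is only an illustrative cap (e.g. for outdoor scenes):

```python
from transformers import DepthAnythingConfig, DepthAnythingForDepthEstimation

# Metric depth head with a hypothetical maximum depth in meters
metric_config = DepthAnythingConfig(depth_estimation_type="metric", max_depth=80.0)
metric_model = DepthAnythingForDepthEstimation(metric_config)
```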

to_dict

( ) → Dict[str, any]

Serializes this instance to a Python dictionary. Overrides the default to_dict() from PretrainedConfig and returns a dictionary of all the attributes that make up this configuration instance.
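A short sketch of what to_dict() returns, assuming the default values listed in the signature above:

```python
from transformers import DepthAnythingConfig

config_dict = DepthAnythingConfig().to_dict()
print(config_dict["patch_size"])             # 14 with the defaults above
print(config_dict["depth_estimation_type"])  # 'relative'
```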

DepthAnythingForDepthEstimation

class transformers.DepthAnythingForDepthEstimation


( config )


Depth Anything Model with a depth estimation head on top (consisting of 3 convolutional layers) e.g. for KITTI, NYUv2.

This model inherits from PreTrainedModel. Check the superclass documentation for the generic methods the library implements for all its models (such as downloading or saving, resizing the input embeddings, pruning heads, etc.).

This model is also a PyTorch torch.nn.Module subclass. Use it as a regular PyTorch Module and refer to the PyTorch documentation for all matters related to general usage and behavior.
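Because it inherits from PreTrainedModel, the usual save/load round trip applies; the local path below is just an example:

```python
from transformers import DepthAnythingForDepthEstimation

model = DepthAnythingForDepthEstimation.from_pretrained("LiheYoung/depth-anything-small-hf")
model.save_pretrained("./depth-anything-small")   # writes the weights and config locally
model = DepthAnythingForDepthEstimation.from_pretrained("./depth-anything-small")
```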

forward


( pixel_values: FloatTensor, labels: typing.Optional[torch.LongTensor] = None, output_attentions: typing.Optional[bool] = None, output_hidden_states: typing.Optional[bool] = None, return_dict: typing.Optional[bool] = None ) → transformers.modeling_outputs.DepthEstimatorOutput or tuple(torch.FloatTensor)

Returns

A transformers.modeling_outputs.DepthEstimatorOutput or a tuple of torch.FloatTensor (if return_dict=False is passed or when config.return_dict=False) comprising various elements depending on the configuration (DepthAnythingConfig) and inputs.

The DepthAnythingForDepthEstimation forward method, overrides the __call__ special method.

Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the pre- and post-processing steps while the latter silently ignores them.

Examples:

```python
import torch
import numpy as np
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForDepthEstimation

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

image_processor = AutoImageProcessor.from_pretrained("LiheYoung/depth-anything-small-hf")
model = AutoModelForDepthEstimation.from_pretrained("LiheYoung/depth-anything-small-hf")

# prepare the image for the model
inputs = image_processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# interpolate the prediction back to the original image size
post_processed_output = image_processor.post_process_depth_estimation(
    outputs,
    target_sizes=[(image.height, image.width)],
)

# visualize the prediction as an 8-bit grayscale image
predicted_depth = post_processed_output[0]["predicted_depth"]
depth = predicted_depth * 255 / predicted_depth.max()
depth = depth.detach().cpu().numpy()
depth = Image.fromarray(depth.astype("uint8"))
```
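To inspect the result, the PIL image built above can simply be saved to disk; the filename is arbitrary:

```python
depth.save("depth_anything_prediction.png")
```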
