Depth Anything: Unleashing the Power of Large-Scale Unlabeled Data

Abstract

Depth Anything is a robust monocular depth estimation model trained with data augmentation and auxiliary supervision strategies on a large corpus of automatically annotated unlabeled images (~62M), achieving state-of-the-art results on various datasets and enhancing depth-conditioned ControlNet.

This work presents Depth Anything, a highly practical solution for robust monocular depth estimation. Without pursuing novel technical modules, we aim to build a simple yet powerful foundation model dealing with any images under any circumstances. To this end, we scale up the dataset by designing a data engine to collect and automatically annotate large-scale unlabeled data (~62M), which significantly enlarges the data coverage and thus reduces the generalization error. We investigate two simple yet effective strategies that make data scaling-up promising. First, a more challenging optimization target is created by leveraging data augmentation tools. It compels the model to actively seek extra visual knowledge and acquire robust representations. Second, an auxiliary supervision is developed to enforce the model to inherit rich semantic priors from pre-trained encoders. We evaluate its zero-shot capabilities extensively, including six public datasets and randomly captured photos. It demonstrates impressive generalization ability. Further, through fine-tuning it with metric depth information from NYUv2 and KITTI, new SOTAs are set. Our better depth model also results in a better depth-conditioned ControlNet. Our models are released at https://github.com/LiheYoung/Depth-Anything.
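The two strategies in the abstract (a harder optimization target on strongly perturbed unlabeled images, plus auxiliary supervision that transfers semantic priors from a frozen pre-trained encoder) can be illustrated with a minimal, hypothetical PyTorch sketch of an unlabeled-data training objective. The `student`, `teacher`, `frozen_encoder`, `perturb` interfaces and the tolerance margin `alpha` are illustrative assumptions, not the released training code, and the simple L1 term stands in for the paper's relative-depth loss.

```python
import torch
import torch.nn.functional as F

def unlabeled_loss(student, teacher, frozen_encoder, image, perturb, alpha=0.85):
    """Hypothetical sketch of the abstract's two strategies on an unlabeled batch:
    (1) the student learns from teacher pseudo labels while seeing a strongly
        perturbed view of the image,
    (2) an auxiliary loss aligns student features with a frozen pre-trained
        encoder to inherit semantic priors.
    All module interfaces here are assumptions for illustration."""
    with torch.no_grad():
        pseudo_depth = teacher(image)        # pseudo label predicted on the clean image
        sem_feat = frozen_encoder(image)     # semantic features from a frozen encoder,
                                             # assumed spatially aligned with the student's

    strong = perturb(image)                  # e.g. color distortion / Gaussian blur
    pred_depth, feat = student(strong, return_features=True)  # assumed interface

    # (1) pseudo-label supervision computed on the strongly perturbed view
    depth_loss = F.l1_loss(pred_depth, pseudo_depth)

    # (2) feature alignment: pull student features toward the frozen encoder,
    # but skip pixels that are already well aligned (tolerance margin alpha),
    # so depth features are not forced to be identical to semantic ones
    cos = F.cosine_similarity(feat, sem_feat, dim=1)
    mask = cos < alpha
    align_loss = (1 - cos)[mask].mean() if mask.any() else cos.new_zeros(())

    return depth_loss + align_loss
```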


Models citing this paper (22)

apple/coreml-depth-anything-v2-small • Depth Estimation • Updated Jun 24, 2024 • 396 • 63

LiheYoung/depth-anything-large-hf • Depth Estimation • Updated Jan 25, 2024 • 188k • 54

LiheYoung/depth_anything_vitl14 • Depth Estimation • Updated Jan 25, 2024 • 17.8k • 42

apple/coreml-depth-anything-small • Depth Estimation • Updated Jun 13, 2024 • 74 • 36

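The Transformers-compatible checkpoints above can be loaded through the Hugging Face depth-estimation pipeline. A minimal usage sketch (the image path is illustrative):

```python
from transformers import pipeline
from PIL import Image

# Load one of the checkpoints listed above via the depth-estimation pipeline.
pipe = pipeline("depth-estimation", model="LiheYoung/depth-anything-large-hf")

image = Image.open("example.jpg")   # any RGB photo; path is illustrative
result = pipe(image)

# The pipeline returns a dict with a raw "predicted_depth" tensor and a
# "depth" PIL image suitable for visualization.
result["depth"].save("depth.png")
```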

Datasets citing this paper (2)

roomtour3d/roomtour3d • Preview • Updated Dec 13, 2024 • 16.4k • 1

anonymous-submission-usage/RoomTour3D • Updated Nov 22, 2024 • 20

Spaces citing this paper (151)

Collections including this paper (25)
