PhysGen3D: Crafting a Miniature Interactive World from a Single Image
1Tsinghua University, 2University of Illinois Urbana-Champaign, 3Columbia University
PhysGen3D takes a single image as input and synthesizes videos of a miniature interactive world.
Abstract
Envisioning physically plausible outcomes from a single image requires a deep understanding of the world's dynamics. To address this, we introduce PhysGen3D, a novel framework that transforms a single image into an amodal, camera-centric, interactive 3D scene.
By combining advanced image-based geometric and semantic understanding with physics-based simulation, PhysGen3D creates an interactive 3D world from a static image, enabling us to "imagine" and simulate future scenarios based on user input. At its core, PhysGen3D estimates the 3D shape, pose, and physical and lighting properties of each object, capturing the essential physical attributes that drive realistic object interactions. This framework allows users to specify precise initial conditions, such as object speed or material properties, for enhanced control over the generated video outcomes.
We evaluate PhysGen3D's performance against closed-source state-of-the-art (SOTA) image-to-video models, including Pika, Kling, and Gen-3, showing PhysGen3D's capacity to generate videos with realistic physics while offering greater flexibility and fine-grained control. Our results show that PhysGen3D achieves a unique balance of photorealism, physical plausibility, and user-driven interactivity, opening new possibilities for generating dynamic, physics-grounded video from an image.
Pipeline
Figure 1. PhysGen3D's framework pipeline. The system reconstructs 3D scenes from single images and enables interactive physics-based simulation.
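The stages in Figure 1 can be sketched schematically in code. This is a minimal illustrative sketch, not the actual PhysGen3D implementation or API: all function names, fields, and values here are hypothetical placeholders standing in for the perception, initial-condition, and simulation stages described above.

```python
# Schematic sketch of the pipeline stages; every name here is
# illustrative, not the actual PhysGen3D API.
from dataclasses import dataclass

@dataclass
class Object3D:
    name: str
    pose: tuple                      # (x, y, z) position, illustrative
    material: str                    # e.g. "rigid", "elastic"
    velocity: tuple = (0.0, 0.0, 0.0)

def perceive(image):
    """Stage 1 (stand-in): estimate per-object shape, pose, material."""
    return [Object3D("teddy_bear", (0.0, 0.1, 0.5), "elastic")]

def set_initial_conditions(scene, name, velocity):
    """Stage 2: user-specified initial conditions (e.g. object speed)."""
    for obj in scene:
        if obj.name == name:
            obj.velocity = velocity
    return scene

def simulate(scene, steps=3, dt=1 / 30):
    """Stage 3 (stand-in): advance object poses with assigned velocities."""
    frames = []
    for _ in range(steps):
        for obj in scene:
            obj.pose = tuple(p + v * dt for p, v in zip(obj.pose, obj.velocity))
        frames.append([(o.name, o.pose) for o in scene])
    return frames

scene = perceive(image=None)
scene = set_initial_conditions(scene, "teddy_bear", (1.0, 0.0, 0.0))
frames = simulate(scene)
```

The key design point is the explicit scene representation: because each object carries its own pose, material, and velocity, the later stages (simulation, editing, tracking) can all operate on the same structured state.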
Comparison
In this section, we compare the videos generated by our framework with three state-of-the-art (SOTA) image-to-video (I2V) models: Gen-3, Pika, and Kling. We carefully designed the prompts to describe the intended motion outcome and used Kling's motion brush for additional control, while our framework employs initial-velocity control. The results show that our method follows the text instructions while maintaining plausible physics.
"The dog is deflated and collapses."
Input Image
"The book falls and the orange rolls to the front."
Input Image
Dynamics Effects
In this section, we showcase the dynamic effects generated by our framework. From the same input image, we can produce varied dynamics by changing the initial velocity or editing the material properties. These results demonstrate our framework's ability to generate consistent and realistic physical behavior.
Material
We change the materials of the two objects.
Input Image
Motion
We change the initial velocity of the teddy bear.
Input Image
Applications
Our video generation framework, PhysGen3D, enables a range of applications through its explicit 3D representation. Below are a few of the use cases our system supports:
Dense 3D Tracking
Input Image
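Dense tracking falls out of the explicit representation: since the simulator produces the 3D trajectory of every surface point, dense 2D tracks are obtained simply by projecting those points into the camera at each frame. A minimal sketch with a pinhole model follows; the intrinsics here are illustrative placeholders, not our calibrated values.

```python
import numpy as np

def project_tracks(points_3d, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Project per-frame 3D points (T, N, 3), given in camera
    coordinates, to 2D pixel tracks (T, N, 2) with a pinhole model.
    Intrinsics are illustrative placeholders."""
    pts = np.asarray(points_3d, float)
    x, y, z = pts[..., 0], pts[..., 1], pts[..., 2]
    u = fx * x / z + cx             # perspective divide, then shift
    v = fy * y / z + cy             # to the principal point
    return np.stack([u, v], axis=-1)

# Two frames, one point moving right and away from the camera:
tracks = project_tracks([[[0.0, 0.0, 1.0]],
                         [[0.1, 0.0, 2.0]]])
```

Because the correspondence between 3D points and pixels is known exactly at every frame, there is no matching or optical-flow step: the tracks are dense, occlusion-aware, and temporally consistent by construction.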
Video Editing
We exchange one object between two scenes.
Input Image 1
Input Image 2
We remove the chair while keeping the toy at its initial position.
Input Image
BibTeX
@inproceedings{chen2025physgen3d,
author = {Chen, Boyuan and Jiang, Hanxiao and Liu, Shaowei and Gupta, Saurabh and Li, Yunzhu and Zhao, Hao and Wang, Shenlong},
title = {PhysGen3D: Crafting a Miniature Interactive World from a Single Image},
booktitle = {CVPR},
year = {2025},
}