
CameraCtrl

This repository is the official implementation of CameraCtrl.

This main branch contains the code and models for CameraCtrl implemented on AnimateDiff V3. For the code and models of CameraCtrl with Stable Video Diffusion, please refer to the svd branch.

CameraCtrl: Enabling Camera Control for Video Diffusion Models
Hao He, Yinghao Xu, Yuwei Guo, Gordon Wetzstein, Bo Dai, Hongsheng Li, Ceyuan Yang

[Paper] [Project Page] [Weights] [HF Demo]

Todo List

Configurations

Environment

conda env create -f environment.yaml
conda activate cameractrl

Dataset

- RealEstate10k
  - annotations
    - test.json
    - train.json
    - validation.json
  - pose_files
    - 0000cc6d8b108390.txt
    - 00028da87cc5a4c4.txt
    - 0002b126b0a8a685.txt
    - 0003a9bce989e532.txt
    - 000465ebe46a98d2.txt
    - ...
  - video_clips
    - 00ccbtp2aSQ
    - 00rMZpGSeOI
    - 01bTY_glskw
    - 01PJ3skCZPo
    - 01uaDoluhzo
    - ...
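Each file under pose_files stores one clip's camera trajectory. In the standard RealEstate10K format, the first line is the source video URL and each subsequent line holds 19 whitespace-separated values: a timestamp, four intrinsics (fx, fy, cx, cy), two zeros, and a row-major 3x4 world-to-camera extrinsic matrix. A minimal parser sketch under that assumption (the repository's own data loaders may differ):

```python
# Hypothetical parser for RealEstate10K-style pose files. Format assumed:
# line 1 = video URL; each later line = timestamp, fx, fy, cx, cy, 0, 0,
# then a row-major 3x4 world-to-camera matrix (19 values total).
def parse_pose_file(lines):
    url = lines[0].strip()
    frames = []
    for line in lines[1:]:
        vals = line.split()
        if len(vals) != 19:
            continue  # skip blank or malformed lines
        timestamp = int(vals[0])
        fx, fy, cx, cy = map(float, vals[1:5])
        # rows of the 3x4 world-to-camera matrix start at index 7
        w2c = [list(map(float, vals[7 + 4 * r:7 + 4 * (r + 1)])) for r in range(3)]
        frames.append({"timestamp": timestamp,
                       "intrinsics": (fx, fy, cx, cy),
                       "w2c": w2c})
    return url, frames
```

The returned intrinsics are left in the file's own normalized convention; scale them by the frame width/height if pixel units are needed.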

Inference

Prepare Models

Prepare camera trajectory & prompts

Inference

python -m torch.distributed.launch --nproc_per_node=8 --master_port=25000 inference.py \
      --out_root ${OUTPUT_PATH} \
      --ori_model_path ${SD1.5_PATH} \
      --unet_subfolder ${SUBFOLDER_NAME} \
      --motion_module_ckpt ${ADV3_MM_CKPT} \
      --pose_adaptor_ckpt ${CAMERACTRL_CKPT} \
      --model_config configs/train_cameractrl/adv3_256_384_cameractrl_relora.yaml \
      --visualization_captions assets/cameractrl_prompts.json \
      --use_specific_seeds \
      --trajectory_file assets/pose_files/0f47577ab3441480.txt \
      --n_procs 8

where the placeholder variables (${OUTPUT_PATH}, ${SD1.5_PATH}, etc.) should be replaced with your local paths.

The above inference example generates videos in the original T2V model's domain. The inference.py script also supports generating videos in other domains using image LoRAs (args.image_lora_rank and args.image_lora_ckpt), such as the RealEstate10K LoRA, or personalized base models (args.personalized_base_model), such as Realistic Vision. Please refer to the code for details.
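As a hedged illustration of the LoRA path, the command below reuses the flag names mentioned above; the checkpoint path and rank value are placeholders, not tested settings:

```shell
# Hypothetical example: --image_lora_rank and --image_lora_ckpt are named in
# the text above, but the rank value and ${REALESTATE_LORA_CKPT} path here
# are placeholders.
python -m torch.distributed.launch --nproc_per_node=8 --master_port=25000 inference.py \
      --out_root ${OUTPUT_PATH} \
      --ori_model_path ${SD1.5_PATH} \
      --motion_module_ckpt ${ADV3_MM_CKPT} \
      --pose_adaptor_ckpt ${CAMERACTRL_CKPT} \
      --model_config configs/train_cameractrl/adv3_256_384_cameractrl_relora.yaml \
      --visualization_captions assets/cameractrl_prompts.json \
      --trajectory_file assets/pose_files/0f47577ab3441480.txt \
      --image_lora_rank 2 \
      --image_lora_ckpt ${REALESTATE_LORA_CKPT} \
      --n_procs 8
```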

Results

| Camera Trajectory | Video | Camera Trajectory | Video | Camera Trajectory | Video |
| --- | --- | --- | --- | --- | --- |
| horse1_traj | horse1_vid | horse2_traj | horse2_vid | horse3_traj | horse3_vid |
| horse4_traj | horse4_vid | horse5_traj | horse5_vid | horse6_traj | horse6_vid |

| Generator | Camera Trajectory | Video | Camera Trajectory | Video | Camera Trajectory | Video |
| --- | --- | --- | --- | --- | --- | --- |
| SD1.5 | dd1_traj | [dd1_vid](/hehao13/CameraCtrl/blob/main/assets/gifs/0aa284f8166e19e4%5FA fish is swimming in the aquarium tank.gif) | dd2_traj | dd2_vid | dd3_traj | [dd3_vid](/hehao13/CameraCtrl/blob/main/assets/gifs/massive, multi-tiered elven palace adorned with flowing waterfalls, its cascades forming staircases between ethereal realms%5F2f25826f0d0ef09a.gif) |
| SD1.5 + RealEstate LoRA | dd4_traj | dd4_vid | dd5_traj | dd5_vid | dd6_traj | dd6_vid |
| Realistic Vision | dd7_traj | [dd7_vid](/hehao13/CameraCtrl/blob/main/assets/gifs/photo of coastline, rocks, storm weather, wind, waves, lightning, soft lighting%5F 9d022c4ec370112a.gif) | dd8_traj | [dd8_vid](/hehao13/CameraCtrl/blob/main/assets/gifs/close up photo of a rabbit, forest, haze, halation, bloom, dramatic atmosphere, centred%5F3f79dc32d575bcdc.gif) | dd9_traj | dd9_vid |
| ToonYou | dd10_traj | [dd10_vid](/hehao13/CameraCtrl/blob/main/assets/gifs/toonyou%5F 62feb0ed164ebcbe.gif) | dd11_traj | [dd11_vid](/hehao13/CameraCtrl/blob/main/assets/gifs/0f47577ab3441480%5Fmkvd, 1girl, turtleneck sweater, sweater yellow, happy, looking at viewer%5F.gif) | dd12_traj | [dd12_vid](/hehao13/CameraCtrl/blob/main/assets/gifs/closeup face photo of man in black clothes, night city street, bokeh, fireworks in background%5F4e012c05fdf8f9b3.gif) |

Note that each image paired with a video shows the camera trajectory. Each small tetrahedron on the image represents the position and orientation of the camera for one video frame: its apex marks the camera location, while its base represents the camera's imaging plane. The red arrows indicate the movement of the camera position; the camera rotation can be read from the orientation of the tetrahedrons.
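To recover the camera positions that such trajectory plots depict, each world-to-camera extrinsic [R | t] can be inverted: the camera center in world coordinates is -Rᵀt. A small sketch of that standard geometry (pure Python, not the repository's visualization code):

```python
# Sketch: camera center from a 3x4 world-to-camera extrinsic [R | t].
# A world point X maps to camera coordinates as R @ X + t, so the camera
# center C satisfies R @ C + t = 0, i.e. C = -R^T @ t. This is textbook
# geometry, not code taken from the CameraCtrl repository.
def camera_center(w2c):
    R = [row[:3] for row in w2c]   # 3x3 rotation
    t = [row[3] for row in w2c]    # translation
    # -R^T t, written out with explicit loops to avoid dependencies
    return [-sum(R[r][c] * t[r] for r in range(3)) for c in range(3)]
```

Stacking the centers of all frames in a pose file yields the red trajectory curve; the per-frame rotation R gives each tetrahedron's orientation.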

Training

Step1 (RealEstate10K image LoRA)

Update the data and pretrained-model paths in the config configs/train_image_lora/realestate_lora.yaml:

pretrained_model_path: "[replace with SD1.5 root path]"
train_data:
  root_path: "[replace with RealEstate10K root path]"

Other training parameters (lr, epochs, validation settings, etc.) are also included in the config files.

Then, launch the image LoRA training using slurm

./slurm_run.sh ${PARTITION} image_lora 8 configs/train_image_lora/realestate_lora.yaml train_image_lora.py

or PyTorch

./dist_run.sh configs/train_image_lora/realestate_lora.yaml 8 train_image_lora.py

We provide our pretrained checkpoint of the RealEstate10K LoRA model in HuggingFace.

Step2 (Camera control model)

Update the data and pretrained-model paths in the config configs/train_cameractrl/adv3_256_384_cameractrl_relora.yaml:

pretrained_model_path: "[replace with SD1.5 root path]"
train_data:
  root_path: "[replace with RealEstate10K root path]"
validation_data:
  root_path: "[replace with RealEstate10K root path]"
lora_ckpt: "[replace with RealEstate10K image LoRA ckpt]"
motion_module_ckpt: "[replace with ADV3 motion module ckpt]"

Other training parameters (lr, epochs, validation settings, etc.) are also included in the config files.

Then, launch the camera control model training using slurm

./slurm_run.sh ${PARTITION} cameractrl 8 configs/train_cameractrl/adv3_256_384_cameractrl_relora.yaml train_camera_control.py

or PyTorch

./dist_run.sh configs/train_cameractrl/adv3_256_384_cameractrl_relora.yaml 8 train_camera_control.py
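Launcher scripts like dist_run.sh typically wrap torch.distributed with the arguments shown above. The sketch below is a guess at that shape, inferred from the inference command earlier in this README; the argument order follows the usage shown (config, number of GPUs, training script), but the real script and its --config flag may differ:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of a dist_run.sh-style launcher; not the repository's
# actual script. The positional arguments mirror the usage shown above.
CONFIG=$1   # e.g. configs/train_cameractrl/adv3_256_384_cameractrl_relora.yaml
GPUS=$2     # e.g. 8
SCRIPT=$3   # e.g. train_camera_control.py
python -m torch.distributed.launch --nproc_per_node=${GPUS} --master_port=25000 \
      ${SCRIPT} --config ${CONFIG}
```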

Disclaimer

This project is released for academic use. We disclaim responsibility for user-generated content. Users are solely liable for their actions. The project contributors are not legally affiliated with, nor accountable for, users' behaviors. Use the generative model responsibly, adhering to ethical and legal standards.

Acknowledgement

We thank AnimateDiff for their amazing code and models.

BibTeX

@article{he2024cameractrl,
  title={CameraCtrl: Enabling Camera Control for Text-to-Video Generation},
  author={Hao He and Yinghao Xu and Yuwei Guo and Gordon Wetzstein and Bo Dai and Hongsheng Li and Ceyuan Yang},
  journal={arXiv preprint arXiv:2404.02101},
  year={2024}
}