
VideoX-Fun

😊 Welcome!

CogVideoX-Fun: Hugging Face Spaces

Wan-Fun: Hugging Face Spaces

English | 简体中文 | 日本語

Table of Contents

Introduction
Quick Start
Video Result
How to Use
Model zoo
Reference
License

Introduction

VideoX-Fun is a video generation pipeline that can be used to generate AI images and videos, as well as to train baseline and LoRA models for Diffusion Transformers. We support direct prediction with pre-trained baseline models to generate videos at different resolutions, durations, and FPS. Additionally, we support users in training their own baseline and LoRA models for specific style transformations.

We support quick startup from different platforms; refer to Quick Start.

What's New:

Function:

Our UI is as follows (ui screenshot):

Quick Start

1. Cloud usage: AliyunDSW/Docker

a. From AliyunDSW

DSW offers free GPU time, which each user can apply for once; the quota is valid for 3 months after applying.

Aliyun provides free GPU time in Freetier; claim it and use it in Aliyun PAI-DSW to start CogVideoX-Fun within 5 minutes!

DSW Notebook

b. From ComfyUI

Our ComfyUI workflow is shown in the workflow graph; please refer to the ComfyUI README for details.

c. From docker

If you are using Docker, please make sure that the graphics card driver and CUDA environment have been installed correctly on your machine.

Then execute the following commands:

# pull image
docker pull mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:cogvideox_fun

# enter image
docker run -it -p 7860:7860 --network host --gpus all --security-opt seccomp:unconfined --shm-size 200g mybigpai-public-registry.cn-beijing.cr.aliyuncs.com/easycv/torch_cuda:cogvideox_fun

# clone code
git clone https://github.com/aigc-apps/VideoX-Fun.git

# enter VideoX-Fun's dir
cd VideoX-Fun

# download weights
mkdir -p models/Diffusion_Transformer
mkdir -p models/Personalized_Model

# Please use the huggingface link or modelscope link to download the model.
# CogVideoX-Fun
# https://huggingface.co/alibaba-pai/CogVideoX-Fun-V1.1-5b-InP
# https://modelscope.cn/models/PAI/CogVideoX-Fun-V1.1-5b-InP

# Wan
# https://huggingface.co/alibaba-pai/Wan2.1-Fun-V1.1-14B-InP
# https://modelscope.cn/models/PAI/Wan2.1-Fun-V1.1-14B-InP
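# Example (hedged): any download method that reproduces the folder layout above works.
# One option is the Hugging Face CLI, e.g.:
# huggingface-cli download alibaba-pai/Wan2.1-Fun-V1.1-14B-InP --local-dir models/Diffusion_Transformer/Wan2.1-Fun-V1.1-14B-InP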

2. Local install: Environment Check/Downloading/Installation

a. Environment Check

We have verified that this repo runs in the following environments:

The details for Windows:

The details for Linux:

You need about 60 GB of free disk space (for saving weights); please check before installing.

b. Weights

We recommend placing the weights in the specified paths:

Via ComfyUI: Put the models into the ComfyUI weights folder ComfyUI/models/Fun_Models/:

📦 ComfyUI/
├── 📂 models/
│   └── 📂 Fun_Models/
│       ├── 📂 CogVideoX-Fun-V1.1-2b-InP/
│       ├── 📂 CogVideoX-Fun-V1.1-5b-InP/
│       ├── 📂 Wan2.1-Fun-14B-InP/
│       └── 📂 Wan2.1-Fun-1.3B-InP/

When running the repo's own Python files or the web UI, place the weights as follows:

📦 models/
├── 📂 Diffusion_Transformer/
│   ├── 📂 CogVideoX-Fun-V1.1-2b-InP/
│   ├── 📂 CogVideoX-Fun-V1.1-5b-InP/
│   ├── 📂 Wan2.1-Fun-14B-InP/
│   └── 📂 Wan2.1-Fun-1.3B-InP/
├── 📂 Personalized_Model/
│   └── your trained transformer model / your trained LoRA model (for UI load)

Video Result

Wan2.1-Fun-V1.1-14B-InP && Wan2.1-Fun-V1.1-1.3B-InP

inp_1.mp4 inp_2.mp4 inp_3.mp4 inp_4.mp4
inp_5.mp4 inp_6.mp4 inp_7.mp4 inp_8.mp4

Wan2.1-Fun-V1.1-14B-Control && Wan2.1-Fun-V1.1-1.3B-Control

Generic Control Video + Reference Image:

Reference image (shown in repo) + control video pose_control.mp4 | Wan2.1-Fun-V1.1-14B-Control output: 14b_ref.mp4 | Wan2.1-Fun-V1.1-1.3B-Control output: 1.3b_ref.mp4

Generic Control Video (Canny, Pose, Depth, etc.) and Trajectory Control:

Trajectory control demos: Fun-Trajectory_00003.mp4, Fun-Trajectory-Merge_00003.mp4, Fun_00006.mp4
Control inputs: pose.mp4, canny.mp4, depth.mp4
Outputs: pose_out.mp4, canny_out.mp4, depth_out.mp4

Wan2.1-Fun-V1.1-14B-Control-Camera && Wan2.1-Fun-V1.1-1.3B-Control-Camera

Pan Up: Pan_Up.mp4 | Pan Left: Pan_Left.mp4 | Pan Right: Pan_Right.mp4
Pan Down: Pan_Down.mp4 | Pan Up + Pan Left: Pan_Left_Up.mp4 | Pan Up + Pan Right: Pan_Right_Up.mp4

CogVideoX-Fun-V1.1-5B

Resolution-1024

00000005.mp4 00000006.mp4 00000009.mp4 00000010.mp4

Resolution-768

00000001.mp4 00000002.mp4 00000005.mp4 00000006.mp4

Resolution-512

00000036.mp4 00000035.mp4 00000034.mp4 00000033.mp4

CogVideoX-Fun-V1.1-5B-Control

Pose control: demo_pose.mp4 | Scribble control: demo_scribble.mp4 | Depth control: demo_depth.mp4
Prompt (pose, scribble): A young woman with beautiful clear eyes and blonde hair, wearing white clothes and twisting her body, with the camera focused on her face. High quality, masterpiece, best quality, high resolution, ultra-fine, dreamlike. Prompt (depth): A young bear.
Outputs: 00000010.mp4 | 00000011.mp4 | 00000012.mp4

How to Use

1. Generation

a. GPU Memory Optimization

Since Wan2.1 has a very large number of parameters, we need memory optimization strategies to fit consumer-grade GPUs. We provide a GPU_memory_mode option for each prediction file, allowing you to choose among model_cpu_offload, model_cpu_offload_and_qfloat8, and sequential_cpu_offload. This solution is also applicable to CogVideoX-Fun generation.

qfloat8 may slightly reduce model performance but saves more GPU memory. If you have sufficient GPU memory, it is recommended to use model_cpu_offload.
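As a hedged sketch (this README does not spell out how GPU_memory_mode is selected; the assumption here is that it is a plain variable near the top of the prediction script rather than a command-line flag), switching modes before a run could look like this:

# Pick one of: model_cpu_offload, model_cpu_offload_and_qfloat8, sequential_cpu_offload.
# The in-file assignment "GPU_memory_mode = ..." is an assumption, not confirmed by this README.
sed -i 's/GPU_memory_mode = ".*"/GPU_memory_mode = "sequential_cpu_offload"/' examples/wan2.1_fun/predict_t2v.py
python examples/wan2.1_fun/predict_t2v.py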

b. Using ComfyUI

For details, refer to ComfyUI README.

c. Running Python Files

i. Single-GPU Inference:
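As a minimal sketch, single-GPU inference is started by running the prediction script directly (the script path below matches the multi-GPU example further down; prompts and resolution are edited inside the script):

python examples/wan2.1_fun/predict_t2v.py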

ii. Multi-GPU Inference:

When using multi-GPU inference, please make sure xfuser is installed. We recommend installing xfuser==0.4.2 and yunchang==0.6.2.

pip install xfuser==0.4.2 --progress-bar off -i https://mirrors.aliyun.com/pypi/simple/
pip install yunchang==0.6.2 --progress-bar off -i https://mirrors.aliyun.com/pypi/simple/

Please ensure that the product of ulysses_degree and ring_degree equals the number of GPUs being used. For example, if you are using 8 GPUs, you can set ulysses_degree=2 and ring_degree=4, or alternatively ulysses_degree=4 and ring_degree=2.

Compared to ulysses_degree, ring_degree incurs higher communication costs. Therefore, when setting these parameters, you should take into account both the sequence length and the number of heads in the model.

Letโ€™s take 8-GPU parallel inference as an example:

After setting the parameters, run the following command for parallel inference:

torchrun --nproc-per-node=8 examples/wan2.1_fun/predict_t2v.py

d. Using the Web UI

The web UI supports text-to-video, image-to-video, video-to-video, and controlled video generation (Canny, Pose, Depth, etc.). This library currently supports CogVideoX-Fun, Wan2.1, and Wan2.1-Fun. Different models are distinguished by folder names under the examples folder, and their supported features vary. Use them accordingly. Below is an example using CogVideoX-Fun:
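As a hedged illustration (the entry-point file name is an assumption not spelled out in this README; an app.py under the corresponding examples folder is assumed here), the CogVideoX-Fun web UI could be launched with:

python examples/cogvideox_fun/app.py

Once started, open the printed local URL in a browser (port 7860 in the Docker command above).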

2. Model Training

A complete model training pipeline should include data preprocessing and Video DiT training. The training process for different models is similar, and the data formats are also similar:

a. Data preprocessing

We provide a simple demo of training a LoRA model on image data; details can be found in the wiki.

For a complete data preprocessing pipeline covering long-video segmentation, cleaning, and captioning, please refer to the README in the video captions section.

If you want to train a text-to-image/video generation model, you need to arrange the dataset in the following format.

📦 project/
├── 📂 datasets/
│   ├── 📂 internal_datasets/
│       ├── 📂 train/
│       │   ├── 📄 00000001.mp4
│       │   ├── 📄 00000002.jpg
│       │   └── 📄 .....
│       └── 📄 json_of_internal_datasets.json

json_of_internal_datasets.json is a standard JSON file. The file_path in the JSON can be set as a relative path, as shown below:

[ { "file_path": "train/00000001.mp4", "text": "A group of young men in suits and sunglasses are walking down a city street.", "type": "video" }, { "file_path": "train/00000002.jpg", "text": "A group of young men in suits and sunglasses are walking down a city street.", "type": "image" }, ..... ]

You can also set the path as an absolute path, as follows:

[ { "file_path": "/mnt/data/videos/00000001.mp4", "text": "A group of young men in suits and sunglasses are walking down a city street.", "type": "video" }, { "file_path": "/mnt/data/train/00000001.jpg", "text": "A group of young men in suits and sunglasses are walking down a city street.", "type": "image" }, ..... ]

b. Video DiT training

If relative paths were used during data preprocessing, set scripts/{model_name}/train.sh as follows.

export DATASET_NAME="datasets/internal_datasets/"
export DATASET_META_NAME="datasets/internal_datasets/json_of_internal_datasets.json"

If absolute paths were used during data preprocessing, set scripts/{model_name}/train.sh as follows.

export DATASET_NAME=""
export DATASET_META_NAME="/mnt/data/json_of_internal_datasets.json"

Then, run scripts/{model_name}/train.sh.
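For example (the wan2.1_fun folder name under scripts is an assumption made by analogy with the examples folder; adjust it to the model you are training):

sh scripts/wan2.1_fun/train.sh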

For details on some parameter settings:

Wan2.1-Fun: see Readme Train and Readme Lora.
Wan2.1: see Readme Train and Readme Lora.
CogVideoX-Fun: see Readme Train and Readme Lora.

Model zoo

1. Wan2.1-Fun

V1.1:

Name | Storage Size | Hugging Face | Model Scope | Description
Wan2.1-Fun-V1.1-1.3B-InP | 19.0 GB | 🤗Link | 😄Link | Wan2.1-Fun-V1.1-1.3B text-to-video generation weights, trained at multiple resolutions, supports start-end image prediction.
Wan2.1-Fun-V1.1-14B-InP | 47.0 GB | 🤗Link | 😄Link | Wan2.1-Fun-V1.1-14B text-to-video generation weights, trained at multiple resolutions, supports start-end image prediction.
Wan2.1-Fun-V1.1-1.3B-Control | 19.0 GB | 🤗Link | 😄Link | Wan2.1-Fun-V1.1-1.3B video control weights, supporting various control conditions such as Canny, Depth, Pose, MLSD, etc., reference image + control condition-based control, and trajectory control. Supports multi-resolution (512, 768, 1024) video prediction, trained with 81 frames at 16 FPS, supports multilingual prediction.
Wan2.1-Fun-V1.1-14B-Control | 47.0 GB | 🤗Link | 😄Link | Wan2.1-Fun-V1.1-14B video control weights, supporting various control conditions such as Canny, Depth, Pose, MLSD, etc., reference image + control condition-based control, and trajectory control. Supports multi-resolution (512, 768, 1024) video prediction, trained with 81 frames at 16 FPS, supports multilingual prediction.
Wan2.1-Fun-V1.1-1.3B-Control-Camera | 19.0 GB | 🤗Link | 😄Link | Wan2.1-Fun-V1.1-1.3B camera lens control weights. Supports multi-resolution (512, 768, 1024) video prediction, trained with 81 frames at 16 FPS, supports multilingual prediction.
Wan2.1-Fun-V1.1-14B-Control-Camera | 47.0 GB | 🤗Link | 😄Link | Wan2.1-Fun-V1.1-14B camera lens control weights. Supports multi-resolution (512, 768, 1024) video prediction, trained with 81 frames at 16 FPS, supports multilingual prediction.

V1.0:

Name | Storage Space | Hugging Face | Model Scope | Description
Wan2.1-Fun-1.3B-InP | 19.0 GB | 🤗Link | 😄Link | Wan2.1-Fun-1.3B text-to-video weights, trained at multiple resolutions, supporting start and end frame prediction.
Wan2.1-Fun-14B-InP | 47.0 GB | 🤗Link | 😄Link | Wan2.1-Fun-14B text-to-video weights, trained at multiple resolutions, supporting start and end frame prediction.
Wan2.1-Fun-1.3B-Control | 19.0 GB | 🤗Link | 😄Link | Wan2.1-Fun-1.3B video control weights, supporting various control conditions such as Canny, Depth, Pose, MLSD, etc., and trajectory control. Supports multi-resolution (512, 768, 1024) video prediction at 81 frames, trained at 16 frames per second, with multilingual prediction support.
Wan2.1-Fun-14B-Control | 47.0 GB | 🤗Link | 😄Link | Wan2.1-Fun-14B video control weights, supporting various control conditions such as Canny, Depth, Pose, MLSD, etc., and trajectory control. Supports multi-resolution (512, 768, 1024) video prediction at 81 frames, trained at 16 frames per second, with multilingual prediction support.

2. Wan2.1

Name | Hugging Face | Model Scope | Description
Wan2.1-T2V-1.3B | 🤗Link | 😄Link | Wanxiang 2.1-1.3B text-to-video weights
Wan2.1-T2V-14B | 🤗Link | 😄Link | Wanxiang 2.1-14B text-to-video weights
Wan2.1-I2V-14B-480P | 🤗Link | 😄Link | Wanxiang 2.1-14B-480P image-to-video weights
Wan2.1-I2V-14B-720P | 🤗Link | 😄Link | Wanxiang 2.1-14B-720P image-to-video weights

3. CogVideoX-Fun

V1.5:

Name | Storage Space | Hugging Face | Model Scope | Description
CogVideoX-Fun-V1.5-5b-InP | 20.0 GB | 🤗Link | 😄Link | Our official image-to-video model, capable of predicting videos at multiple resolutions (512, 768, 1024), trained on 85 frames at 8 frames per second.
CogVideoX-Fun-V1.5-Reward-LoRAs | - | 🤗Link | 😄Link | The official reward-backpropagation model, which optimizes the videos generated by CogVideoX-Fun-V1.5 to better match human preferences.

V1.1:

Name | Storage Space | Hugging Face | Model Scope | Description
CogVideoX-Fun-V1.1-2b-InP | 13.0 GB | 🤗Link | 😄Link | Our official image-to-video model, capable of predicting videos at multiple resolutions (512, 768, 1024, 1280), trained on 49 frames at 8 frames per second.
CogVideoX-Fun-V1.1-5b-InP | 20.0 GB | 🤗Link | 😄Link | Our official image-to-video model, capable of predicting videos at multiple resolutions (512, 768, 1024, 1280), trained on 49 frames at 8 frames per second. Noise has been added to the reference image, and the amplitude of motion is greater compared to V1.0.
CogVideoX-Fun-V1.1-2b-Pose | 13.0 GB | 🤗Link | 😄Link | Our official pose-control video model, capable of predicting videos at multiple resolutions (512, 768, 1024, 1280), trained on 49 frames at 8 frames per second.
CogVideoX-Fun-V1.1-2b-Control | 13.0 GB | 🤗Link | 😄Link | Our official control video model, capable of predicting videos at multiple resolutions (512, 768, 1024, 1280), trained on 49 frames at 8 frames per second. Supports various control conditions such as Canny, Depth, Pose, MLSD, etc.
CogVideoX-Fun-V1.1-5b-Pose | 20.0 GB | 🤗Link | 😄Link | Our official pose-control video model, capable of predicting videos at multiple resolutions (512, 768, 1024, 1280), trained on 49 frames at 8 frames per second.
CogVideoX-Fun-V1.1-5b-Control | 20.0 GB | 🤗Link | 😄Link | Our official control video model, capable of predicting videos at multiple resolutions (512, 768, 1024, 1280), trained on 49 frames at 8 frames per second. Supports various control conditions such as Canny, Depth, Pose, MLSD, etc.
CogVideoX-Fun-V1.1-Reward-LoRAs | - | 🤗Link | 😄Link | The official reward-backpropagation model, which optimizes the videos generated by CogVideoX-Fun-V1.1 to better match human preferences.

(Obsolete) V1.0:

Name | Storage Space | Hugging Face | Model Scope | Description
CogVideoX-Fun-2b-InP | 13.0 GB | 🤗Link | 😄Link | Our official image-to-video model, capable of predicting videos at multiple resolutions (512, 768, 1024, 1280), trained on 49 frames at 8 frames per second.
CogVideoX-Fun-5b-InP | 20.0 GB | 🤗Link | 😄Link | Our official image-to-video model, capable of predicting videos at multiple resolutions (512, 768, 1024, 1280), trained on 49 frames at 8 frames per second.

Reference

License

This project is licensed under the Apache License (Version 2.0).

The CogVideoX-2B model (including its corresponding Transformers module and VAE module) is released under the Apache 2.0 License.

The CogVideoX-5B model (Transformers module) is released under the CogVideoX LICENSE.