Text2Video-Zero

Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators is by Levon Khachatryan, Andranik Movsisyan, Vahram Tadevosyan, Roberto Henschel, Zhangyang Wang, Shant Navasardyan, Humphrey Shi.

Text2Video-Zero enables zero-shot video generation using either:

  1. A textual prompt
  2. A prompt combined with guidance from poses or edges
  3. Video Instruct-Pix2Pix (instruction-guided video editing)

Results are temporally consistent and closely follow the guidance and textual prompts.

The abstract from the paper is:

Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets. In this paper, we introduce a new task of zero-shot text-to-video generation and propose a low-cost approach (without any training or optimization) by leveraging the power of existing text-to-image synthesis methods (e.g., Stable Diffusion), making them suitable for the video domain. Our key modifications include (i) enriching the latent codes of the generated frames with motion dynamics to keep the global scene and the background time consistent; and (ii) reprogramming frame-level self-attention using a new cross-frame attention of each frame on the first frame, to preserve the context, appearance, and identity of the foreground object. Experiments show that this leads to low overhead, yet high-quality and remarkably consistent video generation. Moreover, our approach is not limited to text-to-video synthesis but is also applicable to other tasks such as conditional and content-specialized video generation, and Video Instruct-Pix2Pix, i.e., instruction-guided video editing. As experiments show, our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data.

You can find additional information about Text2Video-Zero on the project page, paper, and original codebase.

Usage example

Text-To-Video

To generate a video from a prompt, run the following Python code:

import torch
from diffusers import TextToVideoZeroPipeline
import imageio

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "A panda is playing guitar on times square"
result = pipe(prompt=prompt).images
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)

You can change these parameters in the pipeline call: motion_field_strength_x and motion_field_strength_y control the strength of motion in the generated video along the x- and y-axis (default: 12 each); t0 and t1 are the timesteps used by the motion dynamics scheme and must lie in [0, num_inference_steps - 1] with t0 < t1 (defaults: 44 and 47); video_length sets the number of generated frames (default: 8).
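
For example, a minimal sketch that continues the snippet above with non-default values (the values here are purely illustrative; the parameter names and defaults come from the TextToVideoZeroPipeline call signature documented below):

result = pipe(
    prompt=prompt,
    video_length=16,             # number of frames, default: 8
    motion_field_strength_x=16,  # default: 12
    motion_field_strength_y=16,  # default: 12
    t0=44,                       # default: 44
    t1=47,                       # default: 47
).images
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)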

We can also generate longer videos by processing the frames in chunks:

import torch
from diffusers import TextToVideoZeroPipeline
import numpy as np
import imageio

model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
seed = 0
video_length = 24
chunk_size = 8
prompt = "A panda is playing guitar on times square"

result = []
chunk_ids = np.arange(0, video_length, chunk_size - 1)
generator = torch.Generator(device="cuda")
for i in range(len(chunk_ids)):
    print(f"Processing chunk {i + 1} / {len(chunk_ids)}")
    ch_start = chunk_ids[i]
    ch_end = video_length if i == len(chunk_ids) - 1 else chunk_ids[i + 1]
    # Attach the first frame for cross-frame attention
    frame_ids = [0] + list(range(ch_start, ch_end))
    # Fix the seed for temporal consistency across chunks
    generator.manual_seed(seed)
    output = pipe(prompt=prompt, video_length=len(frame_ids), generator=generator, frame_ids=frame_ids)
    result.append(output.images[1:])

# Concatenate the chunks and save the video
result = np.concatenate(result)
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)

To use the SDXL model when generating a video from a prompt, use the TextToVideoZeroSDXLPipeline:

import torch
from diffusers import TextToVideoZeroSDXLPipeline

model_id = "stabilityai/stable-diffusion-xl-base-1.0"
pipe = TextToVideoZeroSDXLPipeline.from_pretrained(
    model_id, torch_dtype=torch.float16, variant="fp16", use_safetensors=True
).to("cuda")

Text-To-Video with Pose Control

To generate a video from a prompt with additional pose control:

  1. Download a demo video
    from huggingface_hub import hf_hub_download
    filename = "assets/poses_skeleton_gifs/dance1_corr.mp4"
    repo_id = "PAIR/Text2Video-Zero"
    video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename)
  2. Read video containing extracted pose images
    from PIL import Image
    import imageio
    reader = imageio.get_reader(video_path, "ffmpeg")
    frame_count = 8
    pose_images = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]
    To extract poses from an actual video, read the ControlNet documentation; a minimal sketch using controlnet_aux is shown after this list.
  3. Run StableDiffusionControlNetPipeline with our custom attention processor
    import torch
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
    from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor
    model_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
    controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
    model_id, controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")
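    # Classifier-free guidance duplicates the UNet inputs (conditional + unconditional), hence batch_size=2 in the cross-frame attention processor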
    pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
    pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
    latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1)
    prompt = "Darth Vader dancing in a desert"
    result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images
    imageio.mimsave("video.mp4", result, fps=4)
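
The demo video above already contains extracted pose frames. To compute pose frames from your own video, one option is the controlnet_aux package; a minimal sketch under that assumption (OpenposeDetector and the "lllyasviel/Annotators" checkpoint come from the ControlNet tooling, not from Text2Video-Zero itself):

from PIL import Image
import imageio
from controlnet_aux import OpenposeDetector

# Read your own input video (the path is a placeholder) and extract an OpenPose skeleton per frame
reader = imageio.get_reader("path/to/your_video.mp4", "ffmpeg")
frames = [Image.fromarray(reader.get_data(i)) for i in range(8)]

open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")
pose_images = [open_pose(frame) for frame in frames]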

Since our attention processor also works with SDXL, it can be used to generate a video from a prompt using ControlNet models powered by SDXL:

import torch
import imageio
from diffusers import StableDiffusionXLControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

controlnet_model_id = "thibaud/controlnet-openpose-sdxl-1.0"
model_id = "stabilityai/stable-diffusion-xl-base-1.0"
controlnet = ControlNetModel.from_pretrained(controlnet_model_id, torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained(
    model_id, controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

# SDXL generates at 1024x1024, hence the 128x128 latent resolution; pose_images is reused from the example above
latents = torch.randn((1, 4, 128, 128), device="cuda", dtype=torch.float16).repeat(len(pose_images), 1, 1, 1)
prompt = "Darth Vader dancing in a desert"
result = pipe(prompt=[prompt] * len(pose_images), image=pose_images, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)

Text-To-Video with Edge Control

To generate a video from a prompt with additional Canny edge control, follow the same steps described above for pose-guided generation, replacing the OpenPose ControlNet with a Canny edge ControlNet model and the pose frames with Canny edge frames, as in the sketch below.
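
A minimal sketch, reusing the pre-extracted Canny edge demo video that also appears in the DreamBooth section below (the prompt is only an example):

import torch
import imageio
from PIL import Image
from huggingface_hub import hf_hub_download
from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor

# Download and read a demo video of pre-extracted Canny edge frames
video_path = hf_hub_download(
    repo_type="space", repo_id="PAIR/Text2Video-Zero", filename="__assets__/canny_videos_mp4/girl_turning.mp4"
)
reader = imageio.get_reader(video_path, "ffmpeg")
canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(8)]

# Same setup as the pose example, but with the Canny edge ControlNet
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))

# Share the same initial latents across all frames
latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1)
prompt = "a girl turning her head, oil painting style"  # example prompt
result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images
imageio.mimsave("video.mp4", result, fps=4)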

Video Instruct-Pix2Pix

To perform text-guided video editing (with InstructPix2Pix):

  1. Download a demo video
    from huggingface_hub import hf_hub_download
    filename = "assets/pix2pix video/camel.mp4"
    repo_id = "PAIR/Text2Video-Zero"
    video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename)
  2. Read video from path
    from PIL import Image
    import imageio
    reader = imageio.get_reader(video_path, "ffmpeg")
    frame_count = 8
    video = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]
  3. Run StableDiffusionInstructPix2PixPipeline with our custom attention processor
    import torch
    from diffusers import StableDiffusionInstructPix2PixPipeline
    from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor
    model_id = "timbrooks/instruct-pix2pix"
    pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
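    # InstructPix2Pix applies classifier-free guidance over three batches (text+image, image-only, and unconditional), hence batch_size=3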
    pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=3))
    prompt = "make it Van Gogh Starry Night style"
    result = pipe(prompt=[prompt] * len(video), image=video).images
    imageio.mimsave("edited_video.mp4", result, fps=4)

DreamBooth specialization

Methods Text-To-Video, Text-To-Video with Pose Control and Text-To-Video with Edge Control can run with custom DreamBooth models, as shown below for the Canny edge ControlNet model and the Avatar style DreamBooth model:

  1. Download a demo video
    from huggingface_hub import hf_hub_download
    filename = "assets/canny_videos_mp4/girl_turning.mp4"
    repo_id = "PAIR/Text2Video-Zero"
    video_path = hf_hub_download(repo_type="space", repo_id=repo_id, filename=filename)
  2. Read video from path
    from PIL import Image
    import imageio
    reader = imageio.get_reader(video_path, "ffmpeg")
    frame_count = 8
    canny_edges = [Image.fromarray(reader.get_data(i)) for i in range(frame_count)]
  3. Run StableDiffusionControlNetPipeline with the custom-trained DreamBooth model
    import torch
    from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
    from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero import CrossFrameAttnProcessor
    model_id = "PAIR/text2video-zero-controlnet-canny-avatar"
    controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
    model_id, controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")
    pipe.unet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
    pipe.controlnet.set_attn_processor(CrossFrameAttnProcessor(batch_size=2))
    latents = torch.randn((1, 4, 64, 64), device="cuda", dtype=torch.float16).repeat(len(canny_edges), 1, 1, 1)
    prompt = "oil painting of a beautiful girl avatar style"
    result = pipe(prompt=[prompt] * len(canny_edges), image=canny_edges, latents=latents).images
    imageio.mimsave("video.mp4", result, fps=4)

You can filter for available DreamBooth-trained models with this link.
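
For plain Text-To-Video with a DreamBooth model, a minimal sketch is to load the DreamBooth checkpoint directly into TextToVideoZeroPipeline (an assumption: the checkpoint must follow the standard Stable Diffusion 1.x pipeline layout; the model id below reuses the Avatar-style checkpoint from the example above):

import torch
import imageio
from diffusers import TextToVideoZeroPipeline

# Assumed to be a standard Stable Diffusion 1.x DreamBooth checkpoint
model_id = "PAIR/text2video-zero-controlnet-canny-avatar"
pipe = TextToVideoZeroPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

prompt = "oil painting of a beautiful girl avatar style"
result = pipe(prompt=prompt).images
result = [(r * 255).astype("uint8") for r in result]
imageio.mimsave("video.mp4", result, fps=4)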

Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
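
For instance, a minimal sketch of swapping in a different scheduler (DPMSolverMultistepScheduler is one common choice; any Stable Diffusion-compatible scheduler works the same way):

from diffusers import DPMSolverMultistepScheduler

# Re-create the scheduler from the current scheduler's config
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
# Note: if you lower num_inference_steps, also lower t0 and t1 (defaults 44 and 47) so they stay below it
result = pipe(prompt=prompt).images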

TextToVideoZeroPipeline

class diffusers.TextToVideoZeroPipeline

( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True )

Pipeline for zero-shot text-to-video generation using Stable Diffusion.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__

( prompt: typing.Union[str, typing.List[str]] video_length: typing.Optional[int] = 8 height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_videos_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.Tensor] = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: typing.Optional[str] = 'tensor' return_dict: bool = True callback: typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None callback_steps: typing.Optional[int] = 1 t0: int = 44 t1: int = 47 frame_ids: typing.Optional[typing.List[int]] = None ) → TextToVideoPipelineOutput

Returns a TextToVideoPipelineOutput. The output contains an ndarray of the generated video when output_type != "latent" (otherwise a latent code of the generated videos), and a list of bools indicating whether the corresponding generated video contains "not-safe-for-work" (nsfw) content.

The call function to the pipeline for generation.

backward_loop

( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs cross_attention_kwargs = None ) → latents

Returns the latents of the backward (denoising) process output at time timesteps[-1].

Performs the backward process for the given list of timesteps.

encode_prompt

( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.Tensor] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None )

Encodes the prompt into text encoder hidden states.

forward_loop

( x_t0 t0 t1 generator ) → x_t1

Returns x_t1, the forward process applied to x_t0 from time t0 to t1.

Performs the DDPM forward process from time t0 to t1. This is the same as adding noise with the corresponding variance.
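
As a sketch in standard DDPM notation (with $\bar{\alpha}_t$ the cumulative product of the noise schedule and $t_1$ the noisier timestep; the implementation may differ in indexing details), this corresponds to sampling

$$
x_{t_1} = \sqrt{\frac{\bar{\alpha}_{t_1}}{\bar{\alpha}_{t_0}}}\, x_{t_0} + \sqrt{1 - \frac{\bar{\alpha}_{t_1}}{\bar{\alpha}_{t_0}}}\, \epsilon, \qquad \epsilon \sim \mathcal{N}(0, I).
$$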

TextToVideoZeroSDXLPipeline

class diffusers.TextToVideoZeroSDXLPipeline

( vae: AutoencoderKL text_encoder: CLIPTextModel text_encoder_2: CLIPTextModelWithProjection tokenizer: CLIPTokenizer tokenizer_2: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers image_encoder: CLIPVisionModelWithProjection = None feature_extractor: CLIPImageProcessor = None force_zeros_for_empty_prompt: bool = True add_watermarker: typing.Optional[bool] = None )

Pipeline for zero-shot text-to-video generation using Stable Diffusion XL.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

__call__

( prompt: typing.Union[str, typing.List[str]] prompt_2: typing.Union[str, typing.List[str], NoneType] = None video_length: typing.Optional[int] = 8 height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 denoising_end: typing.Optional[float] = None guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None negative_prompt_2: typing.Union[str, typing.List[str], NoneType] = None num_videos_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None frame_ids: typing.Optional[typing.List[int]] = None prompt_embeds: typing.Optional[torch.Tensor] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None pooled_prompt_embeds: typing.Optional[torch.Tensor] = None negative_pooled_prompt_embeds: typing.Optional[torch.Tensor] = None latents: typing.Optional[torch.Tensor] = None motion_field_strength_x: float = 12 motion_field_strength_y: float = 12 output_type: typing.Optional[str] = 'tensor' return_dict: bool = True callback: typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None callback_steps: int = 1 cross_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None guidance_rescale: float = 0.0 original_size: typing.Optional[typing.Tuple[int, int]] = None crops_coords_top_left: typing.Tuple[int, int] = (0, 0) target_size: typing.Optional[typing.Tuple[int, int]] = None t0: int = 44 t1: int = 47 )

Function invoked when calling the pipeline for generation.

backward_loop

( latents timesteps prompt_embeds guidance_scale callback callback_steps num_warmup_steps extra_step_kwargs add_text_embeds add_time_ids cross_attention_kwargs = None guidance_rescale: float = 0.0 ) → latents

Returns the latents of the backward (denoising) process output at time timesteps[-1].

Performs the backward process for the given list of timesteps.

encode_prompt

( prompt: str prompt_2: typing.Optional[str] = None device: typing.Optional[torch.device] = None num_images_per_prompt: int = 1 do_classifier_free_guidance: bool = True negative_prompt: typing.Optional[str] = None negative_prompt_2: typing.Optional[str] = None prompt_embeds: typing.Optional[torch.Tensor] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None pooled_prompt_embeds: typing.Optional[torch.Tensor] = None negative_pooled_prompt_embeds: typing.Optional[torch.Tensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None )

Encodes the prompt into text encoder hidden states.

forward_loop

( x_t0 t0 t1 generator ) → x_t1

Returns x_t1, the forward process applied to x_t0 from time t0 to t1.

Performs the DDPM forward process from time t0 to t1. This is the same as adding noise with the corresponding variance.

TextToVideoPipelineOutput

class diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput

( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] )

Output class for zero-shot text-to-video pipeline.
