Image-to-Video Generation with PIA (Personalized Image Animator)


Overview

PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models by Yiming Zhang, Zhening Xing, Yanhong Zeng, Youqing Fang, Kai Chen

Recent advancements in personalized text-to-image (T2I) models have revolutionized content creation, empowering non-experts to generate stunning images with unique styles. While promising, adding realistic motions into these personalized images by text poses significant challenges in preserving distinct styles and high-fidelity details, and achieving motion controllability by text. In this paper, we present PIA, a Personalized Image Animator that excels in aligning with condition images, achieving motion controllability by text, and compatibility with various personalized T2I models without specific tuning. To achieve these goals, PIA builds upon a base T2I model with well-trained temporal alignment layers, allowing for the seamless transformation of any personalized T2I model into an image animation model. A key component of PIA is the introduction of the condition module, which utilizes the condition frame and inter-frame affinity as input to transfer appearance information guided by the affinity hint for individual frame synthesis in the latent space. This design mitigates the challenges of appearance-related image alignment within PIA and allows for a stronger focus on aligning with motion-related guidance.

Project page

Available Pipelines

| Pipeline | Tasks | Demo |
| --- | --- | --- |
| PIAPipeline | Image-to-Video Generation with PIA | |

Available checkpoints

Motion Adapter checkpoints for PIA can be found under the OpenMMLab org. These checkpoints are meant to work with any model based on Stable Diffusion 1.5.
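Since the adapter is trained on top of SD 1.5, you can in principle pair it with any SD 1.5-derived personalized checkpoint. A minimal sketch follows; the base model used here, emilianJR/epiCRealism, is just an illustrative SD 1.5 fine-tune, not a requirement:

```python
import torch
from diffusers import MotionAdapter, PIAPipeline

# Sketch: the PIA motion adapter paired with an arbitrary SD 1.5-based checkpoint.
adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
pipe = PIAPipeline.from_pretrained(
    "emilianJR/epiCRealism",  # illustrative SD 1.5 fine-tune; any SD 1.5 base should work
    motion_adapter=adapter,
    torch_dtype=torch.float16,
)
```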

Usage example

PIA works with a MotionAdapter checkpoint and a Stable Diffusion 1.5 model checkpoint. The MotionAdapter is a collection of Motion Modules responsible for adding coherent motion across image frames. These modules are applied after the ResNet and Attention blocks in the Stable Diffusion UNet. In addition to the motion modules, PIA also replaces the input convolution layer of the SD 1.5 UNet with a 9-channel input convolution layer.

The following example demonstrates how to use PIA to generate a video from a single image.

```python
import torch
from diffusers import (
    EulerDiscreteScheduler,
    MotionAdapter,
    PIAPipeline,
)
from diffusers.utils import export_to_gif, load_image

# Load the PIA motion adapter and attach it to an SD 1.5-based checkpoint.
adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
pipe = PIAPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
)

pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
# Reduce memory usage.
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

# Condition image for the animation.
image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true"
)
image = image.resize((512, 512))
prompt = "cat in a field"
negative_prompt = "wrong white balance, dark, sketches, worst quality, low quality"

generator = torch.Generator("cpu").manual_seed(0)
output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator)
frames = output.frames[0]
export_to_gif(frames, "pia-animation.gif")
```
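As a quick sanity check of the 9-channel input convolution described above, you can inspect the loaded UNet. This is a sketch that assumes the pipeline from the example above; the 4 + 4 + 1 channel breakdown follows the paper's condition-module design:

```python
# Sketch: PIA swaps SD 1.5's 4-channel conv_in for a 9-channel one
# (4 noise-latent channels + 4 condition-image latent channels
# + 1 inter-frame affinity channel).
print(pipe.unet.conv_in.in_channels)  # expected: 9
```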

Here are some sample outputs:

cat in a field

If you plan on using a scheduler that can clip samples, make sure to disable it by setting clip_sample=False in the scheduler config, as clipping can have an adverse effect on generated samples. Additionally, the PIA checkpoints can be sensitive to the scheduler's beta schedule; we recommend setting it to linear.
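For example, here is a sketch of overriding both settings when swapping in a DDIMScheduler; the keyword overrides rely on from_config accepting config overrides, as it does in diffusers:

```python
from diffusers import DDIMScheduler

# Sketch: reuse the pipeline's scheduler config but disable sample clipping
# and force a linear beta schedule.
pipe.scheduler = DDIMScheduler.from_config(
    pipe.scheduler.config, clip_sample=False, beta_schedule="linear"
)
```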

Using FreeInit

FreeInit: Bridging Initialization Gap in Video Diffusion Models by Tianxing Wu, Chenyang Si, Yuming Jiang, Ziqi Huang, Ziwei Liu.

FreeInit is an effective method that improves temporal consistency and overall quality of videos generated with video diffusion models without any additional training. It can be applied to PIA, AnimateDiff, ModelScope, VideoCrafter, and various other video generation models seamlessly at inference time, and works by iteratively refining the latent initialization noise. More details can be found in the paper.

The following example demonstrates the usage of FreeInit.

```python
import torch
from diffusers import (
    DDIMScheduler,
    MotionAdapter,
    PIAPipeline,
)
from diffusers.utils import export_to_gif, load_image

adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
pipe = PIAPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter)

# Enable FreeInit for iterative refinement of the initialization noise.
pipe.enable_free_init(method="butterworth", use_fast_sampling=True)

# Reduce memory usage.
pipe.enable_model_cpu_offload()
pipe.enable_vae_slicing()

pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true"
)
image = image.resize((512, 512))
prompt = "cat in a field"
negative_prompt = "wrong white balance, dark, sketches, worst quality, low quality"

generator = torch.Generator("cpu").manual_seed(0)
output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator)
frames = output.frames[0]
export_to_gif(frames, "pia-freeinit-animation.gif")
```

cat in a field

FreeInit is not really free: the improved quality comes at the cost of extra computation. It requires sampling a few extra times, depending on the num_iters parameter set when enabling it. Setting use_fast_sampling=True improves overall performance at the cost of some quality relative to use_fast_sampling=False, while still producing better results than vanilla video generation.
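A sketch of tuning that trade-off; num_iters and disable_free_init follow the FreeInit mixin interface in diffusers:

```python
# Sketch: more FreeInit iterations cost proportionally more sampling passes.
pipe.enable_free_init(method="butterworth", num_iters=3, use_fast_sampling=False)

# ... generate as usual ...

# Return to single-pass sampling when FreeInit is no longer needed.
pipe.disable_free_init()
```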

PIAPipeline

class diffusers.PIAPipeline


```python
class diffusers.PIAPipeline(
    vae: AutoencoderKL,
    text_encoder: CLIPTextModel,
    tokenizer: CLIPTokenizer,
    unet: Union[UNet2DConditionModel, UNetMotionModel],
    scheduler: Union[
        DDIMScheduler,
        PNDMScheduler,
        LMSDiscreteScheduler,
        EulerDiscreteScheduler,
        EulerAncestralDiscreteScheduler,
        DPMSolverMultistepScheduler,
    ],
    motion_adapter: Optional[MotionAdapter] = None,
    feature_extractor: CLIPImageProcessor = None,
    image_encoder: CLIPVisionModelWithProjection = None,
)
```


Pipeline for image-to-video generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).

The pipeline also inherits the following loading methods:

- load_textual_inversion() for loading textual inversion embeddings
- load_ip_adapter() for loading IP adapters
- load_lora_weights() for loading LoRA weights

__call__


```python
__call__(
    image: Union[PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]],
    prompt: Union[str, List[str]] = None,
    strength: float = 1.0,
    num_frames: Optional[int] = 16,
    height: Optional[int] = None,
    width: Optional[int] = None,
    num_inference_steps: int = 50,
    guidance_scale: float = 7.5,
    negative_prompt: Optional[Union[str, List[str]]] = None,
    num_videos_per_prompt: Optional[int] = 1,
    eta: float = 0.0,
    generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
    latents: Optional[torch.Tensor] = None,
    prompt_embeds: Optional[torch.Tensor] = None,
    negative_prompt_embeds: Optional[torch.Tensor] = None,
    ip_adapter_image: Optional[Union[PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]]] = None,
    ip_adapter_image_embeds: Optional[List[torch.Tensor]] = None,
    motion_scale: int = 0,
    output_type: Optional[str] = "pil",
    return_dict: bool = True,
    cross_attention_kwargs: Optional[Dict[str, Any]] = None,
    clip_skip: Optional[int] = None,
    callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
    callback_on_step_end_tensor_inputs: List[str] = ["latents"],
) → PIAPipelineOutput or tuple
```


Returns

PIAPipelineOutput or tuple

If return_dict is True, PIAPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated frames.

The call function to the pipeline for generation.

Examples:

```python
import torch
from diffusers import EulerDiscreteScheduler, MotionAdapter, PIAPipeline
from diffusers.utils import export_to_gif, load_image

adapter = MotionAdapter.from_pretrained("openmmlab/PIA-condition-adapter")
pipe = PIAPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V6.0_B1_noVAE", motion_adapter=adapter, torch_dtype=torch.float16
)

pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)
image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/pix2pix/cat_6.png?download=true"
)
image = image.resize((512, 512))
prompt = "cat in a hat"
negative_prompt = "wrong white balance, dark, sketches, worst quality, low quality, deformed, distorted"
generator = torch.Generator("cpu").manual_seed(0)
output = pipe(image=image, prompt=prompt, negative_prompt=negative_prompt, generator=generator)
frames = output.frames[0]
export_to_gif(frames, "pia-animation.gif")
```
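As noted in the Returns section, passing return_dict=False makes the call return a plain tuple instead of a PIAPipelineOutput. A minimal sketch continuing the example above:

```python
# Sketch: with return_dict=False, the first tuple element is the list of
# generated frame sequences; index again for the first video in the batch.
frames = pipe(image=image, prompt=prompt, generator=generator, return_dict=False)[0][0]
export_to_gif(frames, "pia-animation-tuple.gif")
```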

encode_prompt


```python
encode_prompt(
    prompt,
    device,
    num_images_per_prompt,
    do_classifier_free_guidance,
    negative_prompt=None,
    prompt_embeds: Optional[torch.Tensor] = None,
    negative_prompt_embeds: Optional[torch.Tensor] = None,
    lora_scale: Optional[float] = None,
    clip_skip: Optional[int] = None,
)
```


Encodes the prompt into text encoder hidden states.
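A sketch of precomputing embeddings and passing them back to the pipeline; the device string and prompts are illustrative, and the argument order follows the signature above:

```python
# Sketch: precompute text embeddings once, then reuse them across calls.
prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    "cat in a field",
    device="cuda",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="worst quality, low quality",
)
output = pipe(
    image=image,
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    generator=generator,
)
```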

PIAPipelineOutput

class diffusers.pipelines.pia.PIAPipelineOutput


```python
class diffusers.pipelines.pia.PIAPipelineOutput(
    frames: Union[torch.Tensor, np.ndarray, List[List[PIL.Image.Image]]]
)
```

Parameters

frames (torch.Tensor, np.ndarray, or List[List[PIL.Image.Image]]): the generated frames, as a nested list of length batch_size containing PIL image sequences of length num_frames, or as a NumPy array or Torch tensor of shape (batch_size, num_frames, channels, height, width).

Output class for PIAPipeline.
