Load pipelines

Diffusion systems consist of multiple components like parameterized models and schedulers that interact in complex ways. That is why we designed the DiffusionPipeline to wrap the complexity of the entire diffusion system into an easy-to-use API. At the same time, the DiffusionPipeline is entirely customizable so you can modify each component to build a diffusion system for your use case.

This guide will show you how to load pipelines from the Hub and locally, customize a pipeline with different components, reuse pipeline components across multiple pipelines without increasing memory usage, and load checkpoint variants such as different floating point types or non-EMA weights.

Load a pipeline

Skip to the DiffusionPipeline explained section if you want to learn how the DiffusionPipeline class works.

There are two ways to load a pipeline for a task:

  1. Load the generic DiffusionPipeline class and allow it to automatically detect the correct pipeline class from the checkpoint.
  2. Load a specific pipeline class for a specific task.


The DiffusionPipeline class is a simple and generic way to load the latest trending diffusion model from the Hub. It uses the from_pretrained() method to automatically detect the correct pipeline class for a task from the checkpoint, downloads and caches all the required configuration and weight files, and returns a pipeline ready for inference.

from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", use_safetensors=True)
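If you already know the pipeline class for your checkpoint and task, you can load that class directly instead. A minimal sketch, assuming the same Stable Diffusion checkpoint:

from diffusers import StableDiffusionPipeline

# Loading the task-specific class skips automatic pipeline detection
pipeline = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", use_safetensors=True
)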

This same checkpoint can also be used for an image-to-image task. The DiffusionPipeline class can handle any task as long as you provide the appropriate inputs. For example, for an image-to-image task, you need to pass an initial image to the pipeline.

from diffusers import DiffusionPipeline
from diffusers.utils import load_image

pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", use_safetensors=True)

init_image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/img2img-init.png")
prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipeline(prompt, image=init_image).images[0]

You can use a Space on the Hub to gauge a pipeline’s memory requirements before you download and load it, to check whether it runs on your hardware.

Specify component-specific data types

You can customize the data types for individual sub-models by passing a dictionary to the torch_dtype parameter. This allows you to load different components of a pipeline in different floating point precisions. For instance, if you want to load the transformer with torch.bfloat16 and all other components with torch.float16, you can pass a dictionary mapping:

from diffusers import HunyuanVideoPipeline
import torch

pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    torch_dtype={"transformer": torch.bfloat16, "default": torch.float16},
)
print(pipe.transformer.dtype, pipe.vae.dtype)

If a component is not explicitly specified in the dictionary and no default is provided, it will be loaded with torch.float32.
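As an illustration of the fallback rule, here is a sketch (same HunyuanVideo checkpoint as above) that pins only the transformer and provides no "default" entry:

# Only the transformer is pinned; components without an entry (and no "default"
# key) are loaded in torch.float32
pipe = HunyuanVideoPipeline.from_pretrained(
    "hunyuanvideo-community/HunyuanVideo",
    torch_dtype={"transformer": torch.bfloat16},
)
print(pipe.transformer.dtype, pipe.vae.dtype)  # the VAE falls back to torch.float32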

Local pipeline

To load a pipeline locally, use git-lfs to manually download a checkpoint to your local disk.

git-lfs install
git clone https://huggingface.co/stable-diffusion-v1-5/stable-diffusion-v1-5

This creates a local folder, ./stable-diffusion-v1-5, on your disk and you should pass its path to from_pretrained().

from diffusers import DiffusionPipeline

stable_diffusion = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5", use_safetensors=True)

The from_pretrained() method won’t download files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint.
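If a checkpoint is already in your local Hugging Face cache, you can also keep using the Hub repository id and force offline loading. A sketch, assuming the checkpoint has been downloaded before:

from diffusers import DiffusionPipeline

# local_files_only skips all Hub requests and loads only from the local cache,
# raising an error if the files haven't been downloaded yet
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    use_safetensors=True,
    local_files_only=True,
)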

Customize a pipeline

You can customize a pipeline by loading different components into it. This is useful because you can swap in a scheduler that trades generation speed against quality, or replace a default component, such as the VAE, with a newer or better performing one.
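For instance, to see which scheduler classes can be swapped into a loaded pipeline, you can inspect the scheduler’s compatibles property. A short sketch, using the SDXL checkpoint from the example below:

from diffusers import StableDiffusionXLPipeline
import torch

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, use_safetensors=True
)
# Lists the scheduler classes that can replace the pipeline's current scheduler
print(pipeline.scheduler.compatibles)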

For example, let’s customize the default stabilityai/stable-diffusion-xl-base-1.0 checkpoint with a HeunDiscreteScheduler and the madebyollin/sdxl-vae-fp16-fix VAE, which is more numerically stable in float16.

from diffusers import StableDiffusionXLPipeline, HeunDiscreteScheduler, AutoencoderKL
import torch

scheduler = HeunDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", subfolder="scheduler")
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16, use_safetensors=True)

Now pass the new scheduler and VAE to the StableDiffusionXLPipeline.

pipeline = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    scheduler=scheduler,
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
).to("cuda")
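As a quick sanity check, you can generate an image with the customized pipeline (the prompt here is only illustrative):

image = pipeline("an astronaut riding a horse on the moon, highly detailed").images[0]
image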

Reuse a pipeline

When you load multiple pipelines that share the same model components, it makes sense to reuse the shared components instead of reloading everything into memory again, especially if your hardware is memory-constrained. For example:

  1. You generated an image with the StableDiffusionPipeline but you want to improve its quality with the StableDiffusionSAGPipeline. Both of these pipelines share the same pretrained model, so it’d be a waste of memory to load the same model twice.
  2. You want to add a model component, like a MotionAdapter, to AnimateDiffPipeline which was instantiated from an existing StableDiffusionPipeline. Again, both pipelines share the same pretrained model, so it’d be a waste of memory to load an entirely new pipeline again.

With the DiffusionPipeline.from_pipe() API, you can switch between multiple pipelines to take advantage of their different features without increasing memory usage. It is similar to turning a feature on and off in your pipeline.

To switch between tasks (rather than features), use the from_pipe() method with the AutoPipeline class, which automatically identifies the pipeline class based on the task (learn more in the AutoPipeline tutorial).
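A minimal sketch of task switching with the AutoPipeline classes, assuming a Stable Diffusion text-to-image pipeline is loaded first:

import torch
from diffusers import AutoPipelineForText2Image, AutoPipelineForImage2Image

pipeline_text2image = AutoPipelineForText2Image.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)
# Reuses the already loaded components for the image-to-image task
pipeline_img2img = AutoPipelineForImage2Image.from_pipe(pipeline_text2image)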

Let’s start with a StableDiffusionPipeline and then reuse the loaded model components to create a StableDiffusionSAGPipeline to increase generation quality. You’ll use the StableDiffusionPipeline with an IP-Adapter to generate a bear eating pizza.

from diffusers import DiffusionPipeline, StableDiffusionSAGPipeline
from diffusers.utils import load_image
from accelerate.utils import compute_module_sizes
import torch
import gc

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/load_neg_embed.png")

pipe_sd = DiffusionPipeline.from_pretrained("SG161222/Realistic_Vision_V6.0_B1_noVAE", torch_dtype=torch.float16)
pipe_sd.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe_sd.set_ip_adapter_scale(0.6)
pipe_sd.to("cuda")

generator = torch.Generator(device="cpu").manual_seed(33)
out_sd = pipe_sd(
    prompt="bear eats pizza",
    negative_prompt="wrong white balance, dark, sketches,worst quality,low quality",
    ip_adapter_image=image,
    num_inference_steps=50,
    generator=generator,
).images[0]
out_sd

For reference, you can check how much memory this process consumed.

def bytes_to_giga_bytes(bytes):
    return bytes / 1024 / 1024 / 1024

print(f"Max memory allocated: {bytes_to_giga_bytes(torch.cuda.max_memory_allocated())} GB")
"Max memory allocated: 4.406213283538818 GB"

Now, reuse the same pipeline components from StableDiffusionPipeline in StableDiffusionSAGPipeline with the from_pipe() method.

Some pipeline methods may not function properly on new pipelines created with from_pipe(). For instance, the enable_model_cpu_offload() method installs hooks on the model components based on a unique offloading sequence for each pipeline. If the models are executed in a different order in the new pipeline, the CPU offloading may not work correctly.

To ensure everything works as expected, we recommend re-applying a pipeline method on a new pipeline created with from_pipe().
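For example, if you rely on enable_model_cpu_offload(), call it again on the pipeline created with from_pipe(). A sketch of the pattern with a hypothetical pipeline name (not part of the running example below):

# Hypothetical: new_pipe stands in for any pipeline created with from_pipe()
new_pipe = StableDiffusionSAGPipeline.from_pipe(pipe_sd)
# Re-apply offloading on the new pipeline so its hooks match this pipeline's execution order
new_pipe.enable_model_cpu_offload()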

pipe_sag = StableDiffusionSAGPipeline.from_pipe(pipe_sd)

generator = torch.Generator(device="cpu").manual_seed(33)
out_sag = pipe_sag(
    prompt="bear eats pizza",
    negative_prompt="wrong white balance, dark, sketches,worst quality,low quality",
    ip_adapter_image=image,
    num_inference_steps=50,
    generator=generator,
    guidance_scale=1.0,
    sag_scale=0.75,
).images[0]
out_sag

If you check the memory usage, you’ll see it remains the same as before because StableDiffusionPipeline and StableDiffusionSAGPipeline are sharing the same pipeline components. This allows you to use them interchangeably without any additional memory overhead.

print(f"Max memory allocated: {bytes_to_giga_bytes(torch.cuda.max_memory_allocated())} GB")
"Max memory allocated: 4.406213283538818 GB"

Let’s animate the image with the AnimateDiffPipeline and also add a MotionAdapter module to the pipeline. For the AnimateDiffPipeline, you need to unload the IP-Adapter first and reload it after you’ve created your new pipeline (this only applies to the AnimateDiffPipeline).

from diffusers import AnimateDiffPipeline, MotionAdapter, DDIMScheduler
from diffusers.utils import export_to_gif

pipe_sag.unload_ip_adapter()
adapter = MotionAdapter.from_pretrained("guoyww/animatediff-motion-adapter-v1-5-2", torch_dtype=torch.float16)

pipe_animate = AnimateDiffPipeline.from_pipe(pipe_sd, motion_adapter=adapter)
pipe_animate.scheduler = DDIMScheduler.from_config(pipe_animate.scheduler.config, beta_schedule="linear")

pipe_animate.load_ip_adapter("h94/IP-Adapter", subfolder="models", weight_name="ip-adapter_sd15.bin")
pipe_animate.load_lora_weights("guoyww/animatediff-motion-lora-zoom-out", adapter_name="zoom-out")
pipe_animate.to("cuda")

generator = torch.Generator(device="cpu").manual_seed(33)
pipe_animate.set_adapters("zoom-out", adapter_weights=0.75)
out = pipe_animate(
    prompt="bear eats pizza",
    num_frames=16,
    num_inference_steps=50,
    ip_adapter_image=image,
    generator=generator,
).frames[0]
export_to_gif(out, "out_animate.gif")

The AnimateDiffPipeline is more memory-intensive and consumes 15GB of memory (see the Memory usage of from_pipe section to learn what this means for your memory usage).

print(f"Max memory allocated: {bytes_to_giga_bytes(torch.cuda.max_memory_allocated())} GB")
"Max memory allocated: 15.178664207458496 GB"

Modify from_pipe components

Pipelines loaded with from_pipe() can be customized with different model components or methods. However, whenever you modify the state of the model components, it affects all the other pipelines that share the same components. For example, if you call unload_ip_adapter() on the StableDiffusionSAGPipeline, you won’t be able to use IP-Adapter with the StableDiffusionPipeline because it’s been removed from their shared components.

pipe_sag.unload_ip_adapter()

generator = torch.Generator(device="cpu").manual_seed(33)
out_sd = pipe_sd(
    prompt="bear eats pizza",
    negative_prompt="wrong white balance, dark, sketches,worst quality,low quality",
    ip_adapter_image=image,
    num_inference_steps=50,
    generator=generator,
).images[0]
"AttributeError: 'NoneType' object has no attribute 'image_projection_layers'"

Memory usage of from_pipe

The memory requirement of loading multiple pipelines with from_pipe() is determined by the pipeline with the highest memory usage, regardless of the number of pipelines you create.

| Pipeline                   | Memory usage (GB) |
|----------------------------|-------------------|
| StableDiffusionPipeline    | 4.400             |
| StableDiffusionSAGPipeline | 4.400             |
| AnimateDiffPipeline        | 15.178            |

The AnimateDiffPipeline has the highest memory requirement, so the total memory usage is based only on the AnimateDiffPipeline. Your memory usage will not increase if you create additional pipelines, as long as their memory requirements don’t exceed that of the AnimateDiffPipeline. Each pipeline can be used interchangeably without any additional memory overhead.

Safety checker

Diffusers implements a safety checker for Stable Diffusion models because these models can generate harmful content. The safety checker screens the generated output against known hardcoded not-safe-for-work (NSFW) content. If you’d like to disable the safety checker for any reason, pass safety_checker=None to the from_pretrained() method.

from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", safety_checker=None, use_safetensors=True)
"""
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing safety_checker=None. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the public. Both the diffusers team and Hugging Face strongly recommend keeping the safety filter enabled in all public-facing circumstances, disabling it only for use cases that involve analyzing network behavior or auditing its results. For more information, please have a look at https://github.com/huggingface/diffusers/pull/254 .
"""

Checkpoint variants

A checkpoint variant is usually a checkpoint whose weights are stored in a lower-precision floating point type such as torch.float16 (which only requires half the bandwidth and storage to download), or non-exponential mean averaged (EMA) weights, which shouldn’t be used for inference but are useful for continuing to finetune a model.

When the checkpoints have identical model structures, but they were trained on different datasets and with a different training setup, they should be stored in separate repositories. For example, stabilityai/stable-diffusion-2 and stabilityai/stable-diffusion-2-1 are stored in separate repositories.

Otherwise, a variant is identical to the original checkpoint. They have exactly the same serialization format (like safetensors), model structure, and their weights have identical tensor shapes.

| checkpoint type | weight name                                 | argument for loading weights |
|-----------------|---------------------------------------------|------------------------------|
| original        | diffusion_pytorch_model.safetensors         |                              |
| floating point  | diffusion_pytorch_model.fp16.safetensors    | variant, torch_dtype         |
| non-EMA         | diffusion_pytorch_model.non_ema.safetensors | variant                      |

There are two important arguments for loading variants: torch_dtype, which specifies the floating point precision of the loaded weights, and variant, which specifies which weight files to load from the repository. For example, to load the fp16 variant in torch.float16:

from diffusers import DiffusionPipeline
import torch

pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
)
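To load the non-EMA variant instead, pass variant="non_ema" (a sketch based on the table above; non-EMA weights are typically kept for continued finetuning rather than inference):

# Loads the non-EMA weight files listed in the table above
pipeline = DiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5", variant="non_ema", use_safetensors=True
)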

Use the variant parameter in the DiffusionPipeline.save_pretrained() method to save a checkpoint as a different floating point type or as a non-EMA variant. You should try to save a variant to the same folder as the original checkpoint, so you have the option of loading both from the same folder.

from diffusers import DiffusionPipeline

pipeline.save_pretrained("stable-diffusion-v1-5/stable-diffusion-v1-5", variant="fp16")

If you don’t save the variant to an existing folder alongside the original checkpoint, you must specify the variant argument; otherwise, it’ll throw an Exception because it can’t find the original checkpoint.

# This fails when the folder only contains the fp16 variant, because the original
# (non-variant) weight files can't be found
pipeline = DiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-5", torch_dtype=torch.float16, use_safetensors=True
)

# This works because the variant argument points at the fp16 weight files
pipeline = DiffusionPipeline.from_pretrained(
    "./stable-diffusion-v1-5", variant="fp16", torch_dtype=torch.float16, use_safetensors=True
)

DiffusionPipeline explained

As a class method, DiffusionPipeline.from_pretrained() is responsible for two things:

  1. Downloading the latest version of the folder structure required for inference and caching it. If the folder structure is already available in the local cache, the cached files are reused instead of being redownloaded.
  2. Loading the cached weights into the correct pipeline class (retrieved from the model_index.json file) and returning an instance of it.

The pipelines’ underlying folder structure corresponds directly with their class instances. For example, the StableDiffusionPipeline corresponds to the folder structure in stable-diffusion-v1-5/stable-diffusion-v1-5.

from diffusers import DiffusionPipeline

repo_id = "stable-diffusion-v1-5/stable-diffusion-v1-5"
pipeline = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True)
print(pipeline)

You’ll see pipeline is an instance of StableDiffusionPipeline, which consists of seven components:

StableDiffusionPipeline {
  "feature_extractor": [
    "transformers",
    "CLIPImageProcessor"
  ],
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}

Compare the components of the pipeline instance to the stable-diffusion-v1-5/stable-diffusion-v1-5 folder structure, and you’ll see there is a separate folder for each of the components in the repository:

.
├── feature_extractor
│   └── preprocessor_config.json
├── model_index.json
├── safety_checker
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── scheduler
│   └── scheduler_config.json
├── text_encoder
│   ├── config.json
│   ├── model.fp16.safetensors
│   ├── model.safetensors
│   ├── pytorch_model.bin
│   └── pytorch_model.fp16.bin
├── tokenizer
│   ├── merges.txt
│   ├── special_tokens_map.json
│   ├── tokenizer_config.json
│   └── vocab.json
├── unet
│   ├── config.json
│   ├── diffusion_pytorch_model.bin
│   ├── diffusion_pytorch_model.fp16.bin
│   ├── diffusion_pytorch_model.fp16.safetensors
│   ├── diffusion_pytorch_model.non_ema.bin
│   ├── diffusion_pytorch_model.non_ema.safetensors
│   └── diffusion_pytorch_model.safetensors
└── vae
    ├── config.json
    ├── diffusion_pytorch_model.bin
    ├── diffusion_pytorch_model.fp16.bin
    ├── diffusion_pytorch_model.fp16.safetensors
    └── diffusion_pytorch_model.safetensors

You can access each of the components of the pipeline as an attribute to view its configuration:

pipeline.tokenizer
CLIPTokenizer(
    name_or_path="/root/.cache/huggingface/hub/models--runwayml--stable-diffusion-v1-5/snapshots/39593d5650112b4cc580433f6b0435385882d819/tokenizer",
    vocab_size=49408,
    model_max_length=77,
    is_fast=False,
    padding_side="right",
    truncation_side="right",
    special_tokens={
        "bos_token": AddedToken("<|startoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "eos_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "unk_token": AddedToken("<|endoftext|>", rstrip=False, lstrip=False, single_word=False, normalized=True),
        "pad_token": "<|endoftext|>",
    },
    clean_up_tokenization_spaces=True,
)

Every pipeline expects a model_index.json file that tells the DiffusionPipeline:

  1. Which pipeline class to load from _class_name.
  2. Which version of Diffusers was used to create the model in _diffusers_version.
  3. Which components from which library are stored in the subfolders (the key corresponds to the component and subfolder name, the first entry to the library to load the class from, and the second entry to the class name).

{
  "_class_name": "StableDiffusionPipeline",
  "_diffusers_version": "0.6.0",
  "feature_extractor": [
    "transformers",
    "CLIPImageProcessor"
  ],
  "safety_checker": [
    "stable_diffusion",
    "StableDiffusionSafetyChecker"
  ],
  "scheduler": [
    "diffusers",
    "PNDMScheduler"
  ],
  "text_encoder": [
    "transformers",
    "CLIPTextModel"
  ],
  "tokenizer": [
    "transformers",
    "CLIPTokenizer"
  ],
  "unet": [
    "diffusers",
    "UNet2DConditionModel"
  ],
  "vae": [
    "diffusers",
    "AutoencoderKL"
  ]
}
