Stable Diffusion pipelines (original)

Stable Diffusion is a text-to-image latent diffusion model created by the researchers and engineers from CompVis, Stability AI, and LAION. It is trained on 512x512 images from a subset of the LAION-5B dataset. The model uses a frozen CLIP ViT-L/14 text encoder to condition generation on text prompts. With its 860M-parameter UNet and 123M-parameter text encoder, the model is relatively lightweight and can run on consumer GPUs.

Stable Diffusion is built on top of latent diffusion, which was proposed in High-Resolution Image Synthesis with Latent Diffusion Models by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and BjΓΆrn Ommer. You can learn more about it in the dedicated latent diffusion pipeline that is part of πŸ€— Diffusers.

For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, please refer to the official launch announcement post and this section of our own blog post.


Overview:

| Pipeline | Tasks | Colab | Demo |
|---|---|---|---|
| pipeline_stable_diffusion.py | Text-to-Image Generation | Open In Colab | πŸ€— Stable Diffusion |
| pipeline_stable_diffusion_img2img.py | Image-to-Image Text-Guided Generation | Open In Colab | πŸ€— Diffuse the Rest |
| pipeline_stable_diffusion_inpaint.py | Experimental – Text-Guided Image Inpainting | Open In Colab | Coming soon |

Tips

How to load and use different schedulers.

The Stable Diffusion pipeline uses the PNDMScheduler by default, but πŸ€— Diffusers provides many other schedulers that can be used with it, such as DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, and EulerAncestralDiscreteScheduler. To use a different scheduler, pass it via the scheduler argument of the pipeline's from_pretrained method. For example, to use the EulerDiscreteScheduler, you can do the following:

from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

euler_scheduler = EulerDiscreteScheduler.from_config("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler)

How to cover all use cases with a single pipeline

If you want to cover all of these use cases (text-to-image, image-to-image, and inpainting) without loading the model weights more than once, you can instantiate one pipeline and reuse its components to build the others:

from diffusers import (
    StableDiffusionPipeline,
    StableDiffusionImg2ImgPipeline,
    StableDiffusionInpaintPipeline,
)

text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
inpaint = StableDiffusionInpaintPipeline(**text2img.components)
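Because the three pipelines share the same component objects, the model weights are loaded into memory only once, and moving the components to a device (for example with text2img.to("cuda")) affects all three pipelines.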

StableDiffusionPipelineOutput

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput


( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] )

Parameters

images (List[PIL.Image.Image] or np.ndarray) β€” The generated images, either as a list of denoised PIL images of length batch_size or as a NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) β€” List of flags denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content.

Output class for Stable Diffusion pipelines.

StableDiffusionPipeline

class diffusers.StableDiffusionPipeline


( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler] safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor )


Pipeline for text-to-image generation using Stable Diffusion.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

__call__


( prompt: typing.Union[str, typing.List[str]] height: int = 512 width: int = 512 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Optional[torch._C.Generator] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: typing.Optional[int] = 1 **kwargs ) β†’ StableDiffusionPipelineOutput or tuple

Returns

StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.

Function invoked when calling the pipeline for generation.
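A minimal text-to-image sketch (the checkpoint name follows the examples above; the prompt and output file name are placeholders):

from diffusers import StableDiffusionPipeline

# Load the pipeline and move it to the GPU.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cuda")

# Only the prompt is required; the remaining __call__ arguments shown above have defaults.
image = pipe("a photograph of an astronaut riding a horse", num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("astronaut_rides_horse.png")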

enable_attention_slicing


( slice_size: typing.Union[str, int, NoneType] = 'auto' )

Parameters

slice_size (str or int, optional, defaults to "auto") β€” When "auto", the input to the attention heads is halved, so attention is computed in two steps. If a number is provided, as many slices as attention_head_dim // slice_size are used; attention_head_dim must then be a multiple of slice_size.

Enable sliced attention computation.

When this option is enabled, the attention module splits the input tensor into slices to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.

disable_attention_slicing

Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method goes back to computing attention in one step.
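A short sketch of toggling attention slicing on the text-to-image pipeline (checkpoint name as above):

from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

# Trade a small amount of speed for lower peak memory usage.
pipe.enable_attention_slicing()
image = pipe("a photograph of an astronaut riding a horse").images[0]

# Restore the default single-step attention computation.
pipe.disable_attention_slicing()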

StableDiffusionImg2ImgPipeline

class diffusers.StableDiffusionImg2ImgPipeline


( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler] safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor )


Pipeline for text-guided image to image generation using Stable Diffusion.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

__call__


( prompt: typing.Union[str, typing.List[str]] init_image: typing.Union[torch.FloatTensor, PIL.Image.Image] strength: float = 0.8 num_inference_steps: typing.Optional[int] = 50 guidance_scale: typing.Optional[float] = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: typing.Optional[float] = 0.0 generator: typing.Optional[torch._C.Generator] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: typing.Optional[int] = 1 **kwargs ) β†’ StableDiffusionPipelineOutput or tuple

Returns

StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.

Function invoked when calling the pipeline for generation.
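A minimal image-to-image sketch (the checkpoint name follows the examples above; the input file name is a placeholder):

from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained("CompVis/stable-diffusion-v1-4").to("cuda")

# Resize the starting image to the 512x512 resolution the model was trained on.
init_image = Image.open("sketch.png").convert("RGB").resize((512, 512))

# strength (0-1) controls how much the starting image is noised; higher values allow larger deviation from it.
image = pipe(prompt="A fantasy landscape, trending on artstation", init_image=init_image, strength=0.75, guidance_scale=7.5).images[0]
image.save("fantasy_landscape.png")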

enable_attention_slicing


( slice_size: typing.Union[str, int, NoneType] = 'auto' )

Parameters

slice_size (str or int, optional, defaults to "auto") β€” When "auto", the input to the attention heads is halved, so attention is computed in two steps. If a number is provided, as many slices as attention_head_dim // slice_size are used; attention_head_dim must then be a multiple of slice_size.

Enable sliced attention computation.

When this option is enabled, the attention module splits the input tensor into slices to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.

disable_attention_slicing

Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method goes back to computing attention in one step.

StableDiffusionInpaintPipeline

class diffusers.StableDiffusionInpaintPipeline


( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler] safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor )


Pipeline for text-guided image inpainting using Stable Diffusion. This is an experimental feature.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

__call__


( prompt: typing.Union[str, typing.List[str]] image: typing.Union[torch.FloatTensor, PIL.Image.Image] mask_image: typing.Union[torch.FloatTensor, PIL.Image.Image] height: int = 512 width: int = 512 num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Optional[torch._C.Generator] = None latents: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: typing.Optional[int] = 1 **kwargs ) β†’ StableDiffusionPipelineOutput or tuple

Returns

StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.

Function invoked when calling the pipeline for generation.
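A minimal inpainting sketch. Since this pipeline is experimental, the checkpoint below (runwayml/stable-diffusion-inpainting, an inpainting-specific fine-tune) and the file names are assumptions; the call arguments follow the signature above:

from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

# Assumed inpainting-specific checkpoint; a plain text-to-image checkpoint may not work here.
pipe = StableDiffusionInpaintPipeline.from_pretrained("runwayml/stable-diffusion-inpainting").to("cuda")

# Placeholder file names; the mask is white where the image should be repainted.
init_image = Image.open("dog_on_bench.png").convert("RGB").resize((512, 512))
mask_image = Image.open("bench_mask.png").convert("RGB").resize((512, 512))

image = pipe(prompt="a cat sitting on a bench", image=init_image, mask_image=mask_image, guidance_scale=7.5).images[0]
image.save("cat_on_bench.png")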

enable_attention_slicing


( slice_size: typing.Union[str, int, NoneType] = 'auto' )

Parameters

slice_size (str or int, optional, defaults to "auto") β€” When "auto", the input to the attention heads is halved, so attention is computed in two steps. If a number is provided, as many slices as attention_head_dim // slice_size are used; attention_head_dim must then be a multiple of slice_size.

Enable sliced attention computation.

When this option is enabled, the attention module splits the input tensor into slices to compute attention in several steps. This is useful to save some memory in exchange for a small speed decrease.

disable_attention_slicing

Disable sliced attention computation. If enable_attention_slicing was previously invoked, this method goes back to computing attention in one step.