Latent upscaler

The Stable Diffusion latent upscaler model was created by Katherine Crowson in collaboration with Stability AI. It is used to enhance the output image resolution by a factor of 2 (see this demo notebook for a demonstration of the original implementation).

Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!

If you’re interested in using one of the official checkpoints for a task, explore the CompVis, Runway, and Stability AI Hub organizations!

class diffusers.StableDiffusionLatentUpscalePipeline

( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: EulerDiscreteScheduler )

Parameters

vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) — Frozen CLIP text encoder.
tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
unet (UNet2DConditionModel) — A UNet2DConditionModel to denoise the encoded image latents.
scheduler (EulerDiscreteScheduler) — A scheduler to be used in combination with unet to denoise the encoded image latents.

Pipeline for upscaling Stable Diffusion output image resolution by a factor of 2.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
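
For instance, a minimal sketch of those generic methods applied to this pipeline (the checkpoint name is the one used in the example below; the save path is arbitrary):

import torch
from diffusers import StableDiffusionLatentUpscalePipeline

# Download (or load from the local cache) all pipeline components.
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
)

# Run on a particular device.
upscaler.to("cuda")

# Save every component to a local directory for offline reuse.
upscaler.save_pretrained("./sd-x2-latent-upscaler")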

The pipeline also inherits the following loading methods:

from_single_file() for loading .ckpt files

__call__

( prompt: typing.Union[str, typing.List[str]] = None image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None num_inference_steps: int = 75 guidance_scale: float = 9.0 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.Tensor] = None prompt_embeds: typing.Optional[torch.Tensor] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None pooled_prompt_embeds: typing.Optional[torch.Tensor] = None negative_pooled_prompt_embeds: typing.Optional[torch.Tensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Optional[typing.Callable[[int, int, torch.Tensor], NoneType]] = None callback_steps: int = 1 ) → StableDiffusionPipelineOutput or tuple

Parameters

prompt (str or List[str], optional) — The prompt or prompts to guide image upscaling.
image (PIL.Image.Image, np.ndarray, torch.Tensor, or lists thereof, optional) — Image or tensor representing an image batch to be upscaled. If it is a tensor, it can either be a latent output from a Stable Diffusion model or an image tensor in the range [-1, 1].
num_inference_steps (int, optional, defaults to 75) — The number of denoising steps. More steps usually yield a higher-quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 9.0) — A higher guidance scale encourages images closely linked to the text prompt, usually at the expense of image quality. Guidance is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) — The prompt or prompts to guide what not to include in the image. Ignored when guidance is disabled.
generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
latents (torch.Tensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for generation.
prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings; if not provided, they are computed from the prompt and negative_prompt inputs.
output_type (str, optional, defaults to "pil") — The output format of the generated image: "pil" or "np".
return_dict (bool, optional, defaults to True) — Whether to return a StableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) — A function called every callback_steps steps during inference with the arguments (step, timestep, latents).
callback_steps (int, optional, defaults to 1) — The frequency at which the callback function is called.

Returns

StableDiffusionPipelineOutput or tuple

If return_dict is True, StableDiffusionPipelineOutput is returned; otherwise a tuple is returned where the first element is a list with the generated images.

The call function to the pipeline for generation.

Examples:

from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline
import torch

pipeline = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
)
pipeline.to("cuda")

model_id = "stabilityai/sd-x2-latent-upscaler"
upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
upscaler.to("cuda")

prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic"
generator = torch.manual_seed(33)

# output_type="latent" skips VAE decoding so the latents can be fed
# directly to the upscaler.
low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images

# Decode the low-resolution latents to an image for comparison.
with torch.no_grad():
    image = pipeline.decode_latents(low_res_latents)
image = pipeline.numpy_to_pil(image)[0]

image.save("../images/a1.png")

# Upscale the latents by a factor of 2.
upscaled_image = upscaler(
    prompt=prompt,
    image=low_res_latents,
    num_inference_steps=20,
    guidance_scale=0,
    generator=generator,
).images[0]

upscaled_image.save("../images/a2.png")

enable_sequential_cpu_offload

( gpu_id: typing.Optional[int] = None device: typing.Union[torch.device, str] = 'cuda' )

Parameters

gpu_id (int, optional) — The ID of the accelerator to which the models are moved during their forward pass; defaults to 0.
device (torch.device or str, optional, defaults to "cuda") — The PyTorch device type on which the forward pass runs.

Offloads all models to CPU using 🤗 Accelerate, significantly reducing memory usage. When called, the state dicts of all torch.nn.Module components (except those in self._exclude_from_cpu_offload) are saved to CPU, then moved to torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called. Offloading happens on a submodule basis. Memory savings are higher than with enable_model_cpu_offload, but performance is lower.
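
A minimal sketch of enabling it on the upscaler from the example above (assumes accelerate is installed):

import torch
from diffusers import StableDiffusionLatentUpscalePipeline

upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(
    "stabilityai/sd-x2-latent-upscaler", torch_dtype=torch.float16
)

# Note: don't move the pipeline to "cuda" first; offloading handles device
# placement itself, loading each submodule to the GPU only for its forward pass.
upscaler.enable_sequential_cpu_offload()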

enable_attention_slicing

( slice_size: typing.Union[int, str, NoneType] = 'auto' )

Parameters

slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention is computed in two steps. If "max", the maximum amount of memory is saved by running only one slice at a time. If a number is provided, as many slices as attention_head_dim // slice_size are used; attention_head_dim must be a multiple of slice_size.

Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor into slices to compute attention in several steps. For more than one attention head, the computation is performed sequentially over each head. This is useful to save some memory in exchange for a small decrease in speed.

⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient, so you won’t need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!

Examples:

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    use_safetensors=True,
)

prompt = "a photo of an astronaut riding a horse on mars"
pipe.enable_attention_slicing()
image = pipe(prompt).images[0]

disable_attention_slicing

( )

Disable sliced attention computation. If enable_attention_slicing was previously called, attention is computed in one step.
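
As a usage sketch, slicing can be toggled around a memory-constrained call (slice_size values per the parameter description above; pipe is the pipeline from the example above):

# "max" saves the most memory by computing one slice at a time.
pipe.enable_attention_slicing("max")
image = pipe("a photo of an astronaut riding a horse on mars").images[0]

# Restore single-step attention once memory pressure is gone.
pipe.disable_attention_slicing()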

enable_xformers_memory_efficient_attention

( attention_op: typing.Optional[typing.Callable] = None )

Parameters

attention_op (Callable, optional) — Override the default None operator to use as the op argument to xFormers’ memory_efficient_attention() function.

Enable memory-efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speedup during inference. Speedup during training is not guaranteed.

⚠️ When memory-efficient attention and sliced attention are both enabled, memory-efficient attention takes precedence.

Examples:

import torch
from diffusers import DiffusionPipeline
from xformers.ops import MemoryEfficientAttentionFlashAttentionOp

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")
pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)

# Workaround for Flash Attention not accepting the VAE's attention shape.
pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)

disable_xformers_memory_efficient_attention

( )

Disable memory efficient attention from xFormers.

encode_prompt

( prompt device do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.Tensor] = None negative_prompt_embeds: typing.Optional[torch.Tensor] = None pooled_prompt_embeds: typing.Optional[torch.Tensor] = None negative_pooled_prompt_embeds: typing.Optional[torch.Tensor] = None )

Parameters

prompt (str or List[str]) — The prompt or prompts to be encoded.
device (torch.device) — The torch device on which to place the resulting embeddings.
do_classifier_free_guidance (bool) — Whether to use classifier-free guidance (i.e., whether negative embeddings are also computed).
negative_prompt (str or List[str], optional) — The prompt or prompts not to guide image generation. Ignored when guidance is disabled.
prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds (torch.Tensor, optional) — Pre-generated embeddings; if not provided, they are computed from the prompt and negative_prompt inputs.

Encodes the prompt into text encoder hidden states.
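
A minimal sketch of precomputing embeddings and feeding them back to the pipeline; upscaler and low_res_latents are the objects from the __call__ example above, and the unpacking order below is an assumption (it mirrors the parameter order in the signature), so verify it against the pipeline source before relying on it:

# Assumption: encode_prompt returns the four embedding tensors in this order.
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = upscaler.encode_prompt(
    prompt="a photo of an astronaut high resolution",
    device=upscaler.device,
    do_classifier_free_guidance=True,
    negative_prompt="blurry, low quality",
)

# Reuse the precomputed embeddings instead of passing raw text.
upscaled_image = upscaler(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=low_res_latents,
    num_inference_steps=20,
    guidance_scale=0,
).images[0]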

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput

( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] )

Parameters

images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size, or a NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool], optional) — List indicating whether the corresponding generated image contains “not-safe-for-work” (NSFW) content, or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.
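
For instance, a minimal sketch of consuming this output (upscaler and inputs as in the __call__ example above):

output = upscaler(prompt=prompt, image=low_res_latents)  # a StableDiffusionPipelineOutput

image = output.images[0]  # PIL image by default; a NumPy array when output_type="np"

# The class signature above lists no safety checker component for this
# pipeline, so nsfw_content_detected is expected to simply be None here.
if output.nsfw_content_detected is not None:
    print(output.nsfw_content_detected)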