ControlNetUnion
class diffusers.StableDiffusionXLControlNetUnionImg2ImgPipeline

```python
(
    vae: AutoencoderKL,
    text_encoder: CLIPTextModel,
    text_encoder_2: CLIPTextModelWithProjection,
    tokenizer: CLIPTokenizer,
    tokenizer_2: CLIPTokenizer,
    unet: UNet2DConditionModel,
    controlnet: ControlNetUnionModel,
    scheduler: KarrasDiffusionSchedulers,
    requires_aesthetics_score: bool = False,
    force_zeros_for_empty_prompt: bool = True,
    add_watermarker: typing.Optional[bool] = None,
    feature_extractor: CLIPImageProcessor = None,
    image_encoder: CLIPVisionModelWithProjection = None,
)
```
Parameters
- vae (`AutoencoderKL`) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (`CLIPTextModel`) — Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
- text_encoder_2 (`CLIPTextModelWithProjection`) — Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of CLIP, specifically the laion/CLIP-ViT-bigG-14-laion2B-39B-b160k variant.
- tokenizer (`CLIPTokenizer`) — Tokenizer of class CLIPTokenizer.
- tokenizer_2 (`CLIPTokenizer`) — Second tokenizer of class CLIPTokenizer.
- unet (`UNet2DConditionModel`) — Conditional U-Net architecture to denoise the encoded image latents.
- controlnet (`ControlNetUnionModel`) — Provides additional conditioning to the `unet` during the denoising process.
- scheduler (`SchedulerMixin`) — A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of `DDIMScheduler`, `LMSDiscreteScheduler`, or `PNDMScheduler`.
- requires_aesthetics_score (`bool`, optional, defaults to `False`) — Whether the `unet` requires an `aesthetic_score` condition to be passed during inference. Also see the config of stabilityai/stable-diffusion-xl-refiner-1-0.
- force_zeros_for_empty_prompt (`bool`, optional, defaults to `True`) — Whether the negative prompt embeddings should always be forced to zero. Also see the config of stabilityai/stable-diffusion-xl-base-1-0.
- add_watermarker (`bool`, optional) — Whether to use the invisible_watermark library to watermark output images. If not defined, it defaults to `True` if the package is installed; otherwise no watermarker is used.
- feature_extractor (`CLIPImageProcessor`) — A `CLIPImageProcessor` to extract features from generated images; used as inputs to the `safety_checker`.
- image_encoder (`CLIPVisionModelWithProjection`) — A `CLIPVisionModelWithProjection` to extract image features from `ip_adapter_image` inputs.
Pipeline for image-to-image generation using Stable Diffusion XL with ControlNet guidance.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
__call__

```python
(
    prompt: typing.Union[str, typing.List[str]] = None,
    prompt_2: typing.Union[str, typing.List[str], NoneType] = None,
    image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None,
    control_image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor]] = None,
    height: typing.Optional[int] = None,
    width: typing.Optional[int] = None,
    strength: float = 0.8,
    num_inference_steps: int = 50,
    guidance_scale: float = 5.0,
    negative_prompt: typing.Union[str, typing.List[str], NoneType] = None,
    negative_prompt_2: typing.Union[str, typing.List[str], NoneType] = None,
    num_images_per_prompt: typing.Optional[int] = 1,
    eta: float = 0.0,
    generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None,
    latents: typing.Optional[torch.Tensor] = None,
    prompt_embeds: typing.Optional[torch.Tensor] = None,
    negative_prompt_embeds: typing.Optional[torch.Tensor] = None,
    pooled_prompt_embeds: typing.Optional[torch.Tensor] = None,
    negative_pooled_prompt_embeds: typing.Optional[torch.Tensor] = None,
    ip_adapter_image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor, typing.List[PIL.Image.Image], typing.List[numpy.ndarray], typing.List[torch.Tensor], NoneType] = None,
    ip_adapter_image_embeds: typing.Optional[typing.List[torch.Tensor]] = None,
    output_type: typing.Optional[str] = 'pil',
    return_dict: bool = True,
    cross_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None,
    controlnet_conditioning_scale: typing.Union[float, typing.List[float]] = 0.8,
    guess_mode: bool = False,
    control_guidance_start: typing.Union[float, typing.List[float]] = 0.0,
    control_guidance_end: typing.Union[float, typing.List[float]] = 1.0,
    control_mode: typing.Union[int, typing.List[int], NoneType] = None,
    original_size: typing.Tuple[int, int] = None,
    crops_coords_top_left: typing.Tuple[int, int] = (0, 0),
    target_size: typing.Tuple[int, int] = None,
    negative_original_size: typing.Optional[typing.Tuple[int, int]] = None,
    negative_crops_coords_top_left: typing.Tuple[int, int] = (0, 0),
    negative_target_size: typing.Optional[typing.Tuple[int, int]] = None,
    aesthetic_score: float = 6.0,
    negative_aesthetic_score: float = 2.5,
    clip_skip: typing.Optional[int] = None,
    callback_on_step_end: typing.Union[typing.Callable[[int, int, typing.Dict], NoneType], diffusers.callbacks.PipelineCallback, diffusers.callbacks.MultiPipelineCallbacks, NoneType] = None,
    callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'],
    **kwargs,
) → StableDiffusionPipelineOutput or tuple
```
Parameters
- prompt (`str` or `List[str]`, optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds` instead.
- prompt_2 (`str` or `List[str]`, optional) — The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is used in both text-encoders.
- image (`torch.Tensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.Tensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`, `List[List[torch.Tensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`) — The initial image to be used as the starting point for the image generation process. Image latents can also be passed as `image`; if latents are passed directly, they are not encoded again.
- control_image (`PipelineImageInput`) — The ControlNet input condition. ControlNet uses this input condition to generate guidance for the unet. If the type is specified as `torch.Tensor`, it is passed to ControlNet as-is. `PIL.Image.Image` is also accepted. The dimensions of the output image default to `image`'s dimensions. If height and/or width are passed, `image` is resized accordingly. If multiple ControlNets are specified at init, images must be passed as a list such that each element of the list can be correctly batched for input to a single ControlNet.
- height (`int`, optional, defaults to the size of control_image) — The height in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.
- width (`int`, optional, defaults to the size of control_image) — The width in pixels of the generated image. Anything below 512 pixels won't work well for stabilityai/stable-diffusion-xl-base-1.0 and checkpoints that are not specifically fine-tuned on low resolutions.
- strength (`float`, optional, defaults to 0.8) — Indicates the extent to transform the reference `image`. Must be between 0 and 1. `image` is used as a starting point, and more noise is added the higher the `strength`. The number of denoising steps depends on the amount of noise initially added. When `strength` is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in `num_inference_steps`; a value of 1 essentially ignores `image`. (A sketch of how `strength` maps to executed steps appears before the Examples section.)
- num_inference_steps (`int`, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (`float`, optional, defaults to 5.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. `guidance_scale` is defined as `w` of equation 2 of the Imagen paper. Guidance scale is enabled by setting `guidance_scale > 1`. A higher guidance scale encourages images that are closely linked to the text `prompt`, usually at the expense of lower image quality.
- negative_prompt (`str` or `List[str]`, optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- negative_prompt_2 (`str` or `List[str]`, optional) — The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- num_images_per_prompt (`int`, optional, defaults to 1) — The number of images to generate per prompt.
- eta (`float`, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://huggingface.co/papers/2010.02502. Only applies to the `DDIMScheduler`; ignored for other schedulers.
- generator (`torch.Generator` or `List[torch.Generator]`, optional) — One or a list of torch generator(s) to make generation deterministic.
- latents (`torch.Tensor`, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random `generator`.
- prompt_embeds (`torch.Tensor`, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- negative_prompt_embeds (`torch.Tensor`, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the `negative_prompt` input argument.
- pooled_prompt_embeds (`torch.Tensor`, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.Tensor`, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the `negative_prompt` input argument.
- ip_adapter_image (`PipelineImageInput`, optional) — Optional image input to work with IP Adapters.
- ip_adapter_image_embeds (`List[torch.Tensor]`, optional) — Pre-generated image embeddings for IP-Adapter. The list should have the same length as the number of IP-Adapters. Each element should be a tensor of shape `(batch_size, num_images, emb_dim)`. It should contain the negative image embedding if `do_classifier_free_guidance` is set to `True`. If not provided, embeddings are computed from the `ip_adapter_image` input argument.
- output_type (`str`, optional, defaults to `"pil"`) — The output format of the generated image. Choose between `PIL.Image.Image` or `np.array`.
- return_dict (`bool`, optional, defaults to `True`) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
- cross_attention_kwargs (`dict`, optional) — A kwargs dictionary that, if specified, is passed along to the `AttentionProcessor` as defined under `self.processor` in diffusers.models.attention_processor.
- controlnet_conditioning_scale (`float` or `List[float]`, optional, defaults to 0.8) — The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added to the residual in the original unet. If multiple ControlNets are specified at init, the corresponding scale can be set as a list.
- guess_mode (`bool`, optional, defaults to `False`) — In this mode, the ControlNet encoder tries to recognize the content of the input image even if you remove all prompts. A `guidance_scale` between 3.0 and 5.0 is recommended.
- control_guidance_start (`float` or `List[float]`, optional, defaults to 0.0) — The percentage of total steps at which the ControlNet starts applying.
- control_guidance_end (`float` or `List[float]`, optional, defaults to 1.0) — The percentage of total steps at which the ControlNet stops applying.
- control_mode (`int` or `List[int]`, optional) — The control condition type(s) for the union ControlNet, one index per `control_image`, selecting which of the checkpoint's supported control types each conditioning image corresponds to.
- original_size (`Tuple[int]`, optional, defaults to (1024, 1024)) — If `original_size` is not the same as `target_size`, the image will appear to be down- or upsampled. `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
- crops_coords_top_left (`Tuple[int]`, optional, defaults to (0, 0)) — `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
- target_size (`Tuple[int]`, optional, defaults to (1024, 1024)) — For most cases, `target_size` should be set to the desired height and width of the generated image. If not specified, it defaults to `(height, width)`. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
- negative_original_size (`Tuple[int]`, optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a specific image resolution. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- negative_crops_coords_top_left (`Tuple[int]`, optional, defaults to (0, 0)) — To negatively condition the generation process based on specific crop coordinates. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- negative_target_size (`Tuple[int]`, optional, defaults to (1024, 1024)) — To negatively condition the generation process based on a target image resolution. It should be the same as `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. For more information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
- aesthetic_score (`float`, optional, defaults to 6.0) — Used to simulate an aesthetic score of the generated image by influencing the positive text condition. Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952.
- negative_aesthetic_score (`float`, optional, defaults to 2.5) — Part of SDXL's micro-conditioning as explained in section 2.2 of https://huggingface.co/papers/2307.01952. Can be used to simulate an aesthetic score of the generated image by influencing the negative text condition.
- clip_skip (`int`, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
- callback_on_step_end (`Callable`, `PipelineCallback`, `MultiPipelineCallbacks`, optional) — A function or a subclass of `PipelineCallback` or `MultiPipelineCallbacks` that is called at the end of each denoising step during inference with the following arguments: `callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict)`. `callback_kwargs` will include a list of all tensors as specified by `callback_on_step_end_tensor_inputs`. (A minimal callback sketch follows the example below.)
- callback_on_step_end_tensor_inputs (`List`, optional) — The list of tensor inputs for the `callback_on_step_end` function. The tensors specified in the list will be passed as the `callback_kwargs` argument. You will only be able to include variables listed in the `._callback_tensor_inputs` attribute of your pipeline class.
Returns: `StableDiffusionPipelineOutput` if `return_dict` is `True`, otherwise a `tuple` containing the output images.
Function invoked when calling the pipeline for generation.
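For image-to-image use, `strength` and `num_inference_steps` jointly determine how many denoising steps actually run. A minimal sketch of the usual diffusers img2img convention (illustrative arithmetic only, not a call into the library):

```python
# Illustrative: how `strength` typically truncates the schedule in
# diffusers img2img pipelines.
num_inference_steps = 50
strength = 0.8

# Only the final `strength` fraction of the schedule is executed on the
# noised input image.
init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
t_start = max(num_inference_steps - init_timestep, 0)
effective_steps = num_inference_steps - t_start  # 40 steps here
```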
Examples:
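The example below uses the union ControlNet's tile mode for simple tiled upscaling: the input is resized to a working resolution, split into a 3x3 grid, each tile is regenerated at the full working resolution, and the results are pasted into a canvas three times larger than the working image.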
```python
import numpy as np
import torch
from PIL import Image

from diffusers import (
    AutoencoderKL,
    ControlNetUnionModel,
    StableDiffusionXLControlNetUnionImg2ImgPipeline,
)
from diffusers.utils import load_image

prompt = "A cat"
image = load_image(
    "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/kandinsky/cat.png"
)

controlnet = ControlNetUnionModel.from_pretrained(
    "brad-twinkl/controlnet-union-sdxl-1.0-promax", torch_dtype=torch.float16
)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetUnionImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    vae=vae,
    torch_dtype=torch.float16,
    variant="fp16",
).to("cuda")

# Rescale the input so its area is close to 1024 * 1024 and both sides are a
# multiple of scale_image_factor * base_factor, so the grid divides evenly.
height = image.height
width = image.width
ratio = np.sqrt(1024.0 * 1024.0 / (width * height))
scale_image_factor = 3
base_factor = 16
factor = scale_image_factor * base_factor
W, H = int(width * ratio) // factor * factor, int(height * ratio) // factor * factor
image = image.resize((W, H))

# Split the working image into a 3x3 grid and upscale each tile back to the
# full working resolution (W, H).
target_width = W // scale_image_factor
target_height = H // scale_image_factor
images = []
for i in range(scale_image_factor):
    for j in range(scale_image_factor):
        left = j * target_width
        top = i * target_height
        cropped_image = image.crop((left, top, left + target_width, top + target_height))
        images.append(cropped_image.resize((W, H)))

# Regenerate each tile, selecting the tile control type of this union
# checkpoint via control_mode (index 6 in its mode list).
result_images = []
for sub_img in images:
    out = pipe(
        prompt=prompt,
        image=sub_img,
        control_image=[sub_img],
        control_mode=[6],
        width=W,
        height=H,
        num_inference_steps=30,
        crops_coords_top_left=(W, H),
        target_size=(W, H),
        original_size=(W * 2, H * 2),
    )
    result_images.append(out.images[0])

# Paste the regenerated tiles into a canvas 3x the working resolution.
new_im = Image.new("RGB", (W * scale_image_factor, H * scale_image_factor))
for idx, tile in enumerate(result_images):
    row, col = divmod(idx, scale_image_factor)
    new_im.paste(tile, (col * W, row * H))
```
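A `callback_on_step_end` hook can observe or modify the tensors named in `callback_on_step_end_tensor_inputs` while sampling runs. A minimal sketch, reusing `pipe` and `sub_img` from the example above (the logging is illustrative; the signature and the returned dict follow the convention stated in the parameter description):

```python
def on_step_end(pipeline, step, timestep, callback_kwargs):
    # `callback_kwargs` holds the tensors requested via
    # `callback_on_step_end_tensor_inputs` ("latents" by default).
    latents = callback_kwargs["latents"]
    print(f"step {step} (t={timestep}): latent mean {latents.mean().item():.4f}")
    # Return the (possibly modified) kwargs so the pipeline can read them back.
    return callback_kwargs


out = pipe(
    prompt="A cat",
    image=sub_img,
    control_image=[sub_img],
    control_mode=[6],
    num_inference_steps=30,
    callback_on_step_end=on_step_end,
    callback_on_step_end_tensor_inputs=["latents"],
)
```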
encode_prompt

```python
(
    prompt: str,
    prompt_2: typing.Optional[str] = None,
    device: typing.Optional[torch.device] = None,
    num_images_per_prompt: int = 1,
    do_classifier_free_guidance: bool = True,
    negative_prompt: typing.Optional[str] = None,
    negative_prompt_2: typing.Optional[str] = None,
    prompt_embeds: typing.Optional[torch.Tensor] = None,
    negative_prompt_embeds: typing.Optional[torch.Tensor] = None,
    pooled_prompt_embeds: typing.Optional[torch.Tensor] = None,
    negative_pooled_prompt_embeds: typing.Optional[torch.Tensor] = None,
    lora_scale: typing.Optional[float] = None,
    clip_skip: typing.Optional[int] = None,
)
```
Parameters
- prompt (`str` or `List[str]`, optional) — Prompt to be encoded.
- prompt_2 (`str` or `List[str]`, optional) — The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is used in both text-encoders.
- device (`torch.device`) — Torch device.
- num_images_per_prompt (`int`) — Number of images that should be generated per prompt.
- do_classifier_free_guidance (`bool`) — Whether to use classifier-free guidance or not.
- negative_prompt (`str` or `List[str]`, optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is less than `1`).
- negative_prompt_2 (`str` or `List[str]`, optional) — The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
- prompt_embeds (`torch.Tensor`, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the `prompt` input argument.
- negative_prompt_embeds (`torch.Tensor`, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the `negative_prompt` input argument.
- pooled_prompt_embeds (`torch.Tensor`, optional) — Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled text embeddings will be generated from the `prompt` input argument.
- negative_pooled_prompt_embeds (`torch.Tensor`, optional) — Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, pooled negative_prompt_embeds will be generated from the `negative_prompt` input argument.
- lora_scale (`float`, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- clip_skip (`int`, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
Encodes the prompt into text encoder hidden states.
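`encode_prompt` can be used to precompute SDXL prompt embeddings once and reuse them across several pipeline calls. A brief sketch, reusing `pipe` and `image` from the example above (the negative prompt text is illustrative):

```python
(
    prompt_embeds,
    negative_prompt_embeds,
    pooled_prompt_embeds,
    negative_pooled_prompt_embeds,
) = pipe.encode_prompt(
    prompt="A cat",
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt="blurry, low quality",
)

# The cached embeddings replace `prompt`/`negative_prompt` in later calls.
out = pipe(
    prompt_embeds=prompt_embeds,
    negative_prompt_embeds=negative_prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
    image=image,
    control_image=[image],
    control_mode=[6],
)
```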