ControlNet with Flux.1


FluxControlNetPipeline is an implementation of ControlNet for Flux.1.

ControlNet was introduced in Adding Conditional Control to Text-to-Image Diffusion Models by Lvmin Zhang, Anyi Rao, and Maneesh Agrawala.

With a ControlNet model, you can provide an additional control image to condition and control Flux generation. For example, if you provide a depth map, the ControlNet model generates an image that preserves the spatial information from the depth map. This is a more flexible and accurate way to control the image generation process.
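For instance, depth-conditioned generation looks roughly like the sketch below. The depth checkpoint id and the depth-map path are assumptions for illustration; use the links in the tables below for the exact checkpoint names.

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

# Assumed repo id for illustration -- see the checkpoint tables below for the real names.
controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Depth", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

depth_map = load_image("path/to/depth_map.png")  # a precomputed depth map (hypothetical path)
image = pipe(
    "a futuristic living room",
    control_image=depth_map,
    controlnet_conditioning_scale=0.8,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
```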

The abstract from the paper is:

We present ControlNet, a neural network architecture to add spatial conditioning controls to large, pretrained text-to-image diffusion models. ControlNet locks the production-ready large diffusion models, and reuses their deep and robust encoding layers pretrained with billions of images as a strong backbone to learn a diverse set of conditional controls. The neural architecture is connected with “zero convolutions” (zero-initialized convolution layers) that progressively grow the parameters from zero and ensure that no harmful noise could affect the finetuning. We test various conditioning controls, e.g., edges, depth, segmentation, human pose, etc., with Stable Diffusion, using single or multiple conditions, with or without prompts. We show that the training of ControlNets is robust with small (<50k) and large (>1m) datasets. Extensive results show that ControlNet may facilitate wider applications to control image diffusion models.

This ControlNet implementation was contributed by the InstantX Team. You can find pre-trained Flux ControlNet checkpoints in the table below:

| ControlNet type | Developer | Link |
|-----------------|-----------|------|
| Canny | The InstantX Team | Link |
| Depth | The InstantX Team | Link |
| Union | The InstantX Team | Link |
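The Union checkpoint multiplexes several condition types through a single model, and the `control_mode` argument of `__call__` selects which one. A minimal sketch, assuming the InstantX Union repo id and that mode `0` corresponds to canny (check the model card for the actual mode table):

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

controlnet = FluxControlNetModel.from_pretrained(
    "InstantX/FLUX.1-dev-Controlnet-Union", torch_dtype=torch.bfloat16
)
pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

canny_edges = load_image("path/to/canny_edges.png")  # hypothetical path
image = pipe(
    "an industrial loft at dusk",
    control_image=canny_edges,
    control_mode=0,  # assumed: 0 selects canny in the Union checkpoint
    controlnet_conditioning_scale=0.7,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
```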

ControlNets from the XLabs Team are also supported:

| ControlNet type | Developer | Link |
|-----------------|-----------|------|
| Canny | The XLabs Team | Link |
| Depth | The XLabs Team | Link |
| HED | The XLabs Team | Link |
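XLabs checkpoints load through the same `FluxControlNetModel.from_pretrained` API as the InstantX ones. A one-line sketch; the repo id below is an assumption, so take the exact name from the table above:

```python
import torch
from diffusers import FluxControlNetModel

# Assumed diffusers-format repo id -- see the XLabs table above for the real link.
controlnet = FluxControlNetModel.from_pretrained(
    "XLabs-AI/flux-controlnet-canny-diffusers", torch_dtype=torch.bfloat16
)
```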

Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.

FluxControlNetPipeline

class diffusers.FluxControlNetPipeline


```python
(
    scheduler: FlowMatchEulerDiscreteScheduler,
    vae: AutoencoderKL,
    text_encoder: CLIPTextModel,
    tokenizer: CLIPTokenizer,
    text_encoder_2: T5EncoderModel,
    tokenizer_2: T5TokenizerFast,
    transformer: FluxTransformer2DModel,
    controlnet: Union[FluxControlNetModel, List[FluxControlNetModel], Tuple[FluxControlNetModel], FluxMultiControlNetModel],
    image_encoder: CLIPVisionModelWithProjection = None,
    feature_extractor: CLIPImageProcessor = None,
)
```


The Flux pipeline for text-to-image generation.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/
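As the signature above shows, `controlnet` also accepts a list of models or a `FluxMultiControlNetModel`, so several conditions can be applied at once. A minimal sketch with placeholder checkpoint ids and image paths; per-model control images and strengths are passed as lists:

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline, FluxMultiControlNetModel
from diffusers.utils import load_image

# Placeholder repo ids -- substitute checkpoints from the tables above.
canny = FluxControlNetModel.from_pretrained("repo/canny-controlnet", torch_dtype=torch.bfloat16)
depth = FluxControlNetModel.from_pretrained("repo/depth-controlnet", torch_dtype=torch.bfloat16)
controlnet = FluxMultiControlNetModel([canny, depth])

pipe = FluxControlNetPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", controlnet=controlnet, torch_dtype=torch.bfloat16
).to("cuda")

image = pipe(
    "a cathedral interior at golden hour",
    control_image=[load_image("canny.png"), load_image("depth.png")],  # one image per ControlNet
    controlnet_conditioning_scale=[0.6, 0.5],  # one strength per ControlNet
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
```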

__call__


```python
(
    prompt: Union[str, List[str]] = None,
    prompt_2: Optional[Union[str, List[str]]] = None,
    negative_prompt: Union[str, List[str]] = None,
    negative_prompt_2: Optional[Union[str, List[str]]] = None,
    true_cfg_scale: float = 1.0,
    height: Optional[int] = None,
    width: Optional[int] = None,
    num_inference_steps: int = 28,
    sigmas: Optional[List[float]] = None,
    guidance_scale: float = 7.0,
    control_guidance_start: Union[float, List[float]] = 0.0,
    control_guidance_end: Union[float, List[float]] = 1.0,
    control_image: Union[PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]] = None,
    control_mode: Optional[Union[int, List[int]]] = None,
    controlnet_conditioning_scale: Union[float, List[float]] = 1.0,
    num_images_per_prompt: Optional[int] = 1,
    generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
    latents: Optional[torch.FloatTensor] = None,
    prompt_embeds: Optional[torch.FloatTensor] = None,
    pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
    ip_adapter_image: Optional[Union[PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]]] = None,
    ip_adapter_image_embeds: Optional[List[torch.Tensor]] = None,
    negative_ip_adapter_image: Optional[Union[PIL.Image.Image, np.ndarray, torch.Tensor, List[PIL.Image.Image], List[np.ndarray], List[torch.Tensor]]] = None,
    negative_ip_adapter_image_embeds: Optional[List[torch.Tensor]] = None,
    negative_prompt_embeds: Optional[torch.FloatTensor] = None,
    negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
    output_type: Optional[str] = "pil",
    return_dict: bool = True,
    joint_attention_kwargs: Optional[Dict[str, Any]] = None,
    callback_on_step_end: Optional[Callable[[int, int, Dict], None]] = None,
    callback_on_step_end_tensor_inputs: List[str] = ["latents"],
    max_sequence_length: int = 512,
) → ~pipelines.flux.FluxPipelineOutput or tuple
```


Returns

~pipelines.flux.FluxPipelineOutput or tuple

~pipelines.flux.FluxPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the pipeline for generation.

Examples:

```python
import torch
from diffusers import FluxControlNetModel, FluxControlNetPipeline
from diffusers.utils import load_image

base_model = "black-forest-labs/FLUX.1-dev"
controlnet_model = "InstantX/FLUX.1-dev-controlnet-canny"
controlnet = FluxControlNetModel.from_pretrained(controlnet_model, torch_dtype=torch.bfloat16)
pipe = FluxControlNetPipeline.from_pretrained(
    base_model, controlnet=controlnet, torch_dtype=torch.bfloat16
)
pipe.to("cuda")
control_image = load_image("https://huggingface.co/InstantX/SD3-Controlnet-Canny/resolve/main/canny.jpg")
prompt = "A girl in city, 25 years old, cool, futuristic"
image = pipe(
    prompt,
    control_image=control_image,
    control_guidance_start=0.2,
    control_guidance_end=0.8,
    controlnet_conditioning_scale=1.0,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux.png")
```
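In this example, `control_guidance_start=0.2` and `control_guidance_end=0.8` restrict the ControlNet to the middle 60% of the denoising steps: the first and last 20% of steps run without the control signal, which tends to lock in overall structure without over-constraining fine detail. Raising `controlnet_conditioning_scale` strengthens the control signal, while lowering it gives the text prompt more influence.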

encode_prompt


```python
(
    prompt: Union[str, List[str]],
    prompt_2: Union[str, List[str]],
    device: Optional[torch.device] = None,
    num_images_per_prompt: int = 1,
    prompt_embeds: Optional[torch.FloatTensor] = None,
    pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
    max_sequence_length: int = 512,
    lora_scale: Optional[float] = None,
)
```
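A sketch of precomputing embeddings once and reusing them across calls; this assumes `encode_prompt` returns `(prompt_embeds, pooled_prompt_embeds, text_ids)` as in the base Flux pipeline, and reuses `pipe` and `control_image` from the example above:

```python
# Precompute text embeddings (assumed 3-tuple return, as in the base Flux pipeline).
prompt_embeds, pooled_prompt_embeds, text_ids = pipe.encode_prompt(
    prompt="A girl in city, 25 years old, cool, futuristic",
    prompt_2=None,
)

# Reuse the cached embeddings instead of re-encoding the prompt on every call.
image = pipe(
    prompt_embeds=prompt_embeds,
    pooled_prompt_embeds=pooled_prompt_embeds,
    control_image=control_image,
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
```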


FluxPipelineOutput

class diffusers.pipelines.flux.pipeline_output.FluxPipelineOutput


```python
(
    images: Union[List[PIL.Image.Image], np.ndarray],
)
```


Output class for Flux pipelines.
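A short sketch of consuming this output, reusing `pipe`, `prompt`, and `control_image` from the example above; `images` holds PIL images by default, or a NumPy array when `output_type="np"`:

```python
output = pipe(prompt, control_image=control_image)
output.images[0].save("controlled.png")

# With output_type="np", images is a single ndarray of shape (batch, height, width, 3).
arrays = pipe(prompt, control_image=control_image, output_type="np").images
print(arrays.shape)
```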
