Flux2


Flux.2 is the latest series of image generation models from Black Forest Labs, preceded by the Flux.1 series. It is an entirely new model with a new architecture and pre-training done from scratch!

Original model checkpoints for Flux.2 can be found here. Original inference code can be found here.

Flux.2 can be quite expensive to run on consumer hardware. However, you can apply a suite of optimizations to run it faster and in a more memory-friendly manner. Check out this section for more details. Additionally, Flux.2 can benefit from quantization for memory efficiency with a trade-off in inference latency. Refer to this blog post to learn more.
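For example, one broadly available memory optimization for diffusers pipelines is model CPU offloading. The snippet below is a minimal sketch, assuming the standard enable_model_cpu_offload() API applies to Flux2Pipeline as it does to other pipelines:

```python
import torch
from diffusers import Flux2Pipeline

pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev", torch_dtype=torch.bfloat16
)
# Offload each component (text encoder, transformer, VAE) to CPU when it is not in use,
# trading some latency for a much smaller peak GPU memory footprint.
# Do not call pipe.to("cuda") when offloading is enabled.
pipe.enable_model_cpu_offload()

image = pipe(
    "A cat holding a sign that says hello world",
    num_inference_steps=50,
    guidance_scale=4.0,
).images[0]
image.save("flux2_offload.png")
```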

Caching may also speed up inference by storing and reusing intermediate outputs.
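As a hedged sketch, and assuming the Flux.2 transformer supports the generic cache helpers in diffusers (not confirmed by this page), a cache could be enabled on the pipe object loaded above roughly like this:

```python
from diffusers import FirstBlockCacheConfig

# Assumption: Flux2Transformer2DModel exposes the CacheMixin enable_cache() API.
# First-block caching reuses intermediate transformer outputs across denoising steps
# whenever the first block's output changes less than the given threshold.
pipe.transformer.enable_cache(FirstBlockCacheConfig(threshold=0.2))
```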

Caption upsampling

Flux.2 can potentially generate better outputs with better prompts. You can "upsample" an input prompt by setting the caption_upsample_temperature argument in the pipeline call. The official implementation recommends setting this value to 0.15.
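A minimal sketch of passing the recommended temperature in the pipeline call (reusing the checkpoint from the example further below):

```python
import torch
from diffusers import Flux2Pipeline

pipe = Flux2Pipeline.from_pretrained("black-forest-labs/FLUX.2-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# caption_upsample_temperature enables prompt upsampling; 0.15 is the value
# recommended by the official implementation.
image = pipe(
    prompt="A cat holding a sign that says hello world",
    caption_upsample_temperature=0.15,
    num_inference_steps=50,
    guidance_scale=4.0,
).images[0]
image.save("flux2_upsampled.png")
```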

Flux2Pipeline

class diffusers.Flux2Pipeline

( scheduler: FlowMatchEulerDiscreteScheduler vae: AutoencoderKLFlux2 text_encoder: Mistral3ForConditionalGeneration tokenizer: AutoProcessor transformer: Flux2Transformer2DModel )

The Flux2 pipeline for text-to-image generation.

Reference: https://bfl.ai/blog/flux-2

__call__

( image: typing.Union[typing.List[PIL.Image.Image], PIL.Image.Image, NoneType] = None prompt: typing.Union[str, typing.List[str]] = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 sigmas: typing.Optional[typing.List[float]] = None guidance_scale: typing.Optional[float] = 4.0 num_images_per_prompt: int = 1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.Tensor] = None prompt_embeds: typing.Optional[torch.Tensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None callback_on_step_end: typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'] max_sequence_length: int = 512 text_encoder_out_layers: typing.Tuple[int] = (10, 20, 30) caption_upsample_temperature: float = None ) → ~pipelines.flux2.Flux2PipelineOutput or tuple

Returns

~pipelines.flux2.Flux2PipelineOutput or tuple

~pipelines.flux2.Flux2PipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the pipeline for generation.

Examples:

```python
import torch
from diffusers import Flux2Pipeline

pipe = Flux2Pipeline.from_pretrained("black-forest-labs/FLUX.2-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "A cat holding a sign that says hello world"
image = pipe(prompt, num_inference_steps=50, guidance_scale=2.5).images[0]
image.save("flux.png")
```
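Since the call signature also accepts an image argument, the same pipeline can presumably be conditioned on one or more reference images. The following is a hedged sketch under that assumption; the reference path and the prompt are placeholders:

```python
import torch
from diffusers import Flux2Pipeline
from diffusers.utils import load_image

pipe = Flux2Pipeline.from_pretrained("black-forest-labs/FLUX.2-dev", torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Placeholder reference image; `image` accepts a single PIL image or a list of them.
reference = load_image("path/to/reference.png")

image = pipe(
    image=reference,
    prompt="Turn the scene into a watercolor painting",
    num_inference_steps=50,
    guidance_scale=4.0,
).images[0]
image.save("flux2_image_conditioned.png")
```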
