VisualCloze

VisualCloze: A Universal Image Generation Framework via Visual In-Context Learning is a universal image generation framework based on visual in-context learning that offers four key capabilities:

  1. Support for various in-domain tasks
  2. Generalization to unseen tasks through in-context learning
  3. Unification of multiple tasks into one step, generating both the target image and intermediate results
  4. Reverse-engineering of conditions from target images

Overview

The abstract from the paper is:

Recent progress in diffusion models significantly advances various image generation tasks. However, the current mainstream approach remains focused on building task-specific models, which have limited efficiency when supporting a wide range of different needs. While universal models attempt to address this limitation, they face critical challenges, including generalizable task instruction, appropriate task distributions, and unified architectural design. To tackle these challenges, we propose VisualCloze, a universal image generation framework, which supports a wide range of in-domain tasks, generalization to unseen ones, unseen unification of multiple tasks, and reverse generation. Unlike existing methods that rely on language-based task instruction, leading to task ambiguity and weak generalization, we integrate visual in-context learning, allowing models to identify tasks from visual demonstrations. Meanwhile, the inherent sparsity of visual task distributions hampers the learning of transferable knowledge across tasks. To this end, we introduce Graph200K, a graph-structured dataset that establishes various interrelated tasks, enhancing task density and transferable knowledge. Furthermore, we uncover that our unified image generation formulation shares a consistent objective with image infilling, enabling us to leverage the strong generative priors of pre-trained infilling models without modifying the architectures. The codes, dataset, and models are available at https://visualcloze.github.io.

Inference

Model loading

VisualCloze is a two-stage cascade pipeline consisting of VisualClozeGenerationPipeline, which generates the target image from the in-context examples, and VisualClozeUpsamplingPipeline (implemented by FluxFillPipeline), which upsamples the result to the final resolution.

The VisualClozePipeline integrates both stages to support convenient end-to-end sampling, while also allowing users to utilize each pipeline independently as needed.
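For reference, a minimal loading sketch for the combined pipeline (the checkpoint name matches the examples below; enable_model_cpu_offload is a generic diffusers pipeline method, shown here only as an optional memory-saving alternative):

import torch
from diffusers import VisualClozePipeline

# Load the combined two-stage pipeline.
pipe = VisualClozePipeline.from_pretrained(
    "VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16
)
# Move everything to the GPU ...
pipe.to("cuda")
# ... or, on memory-constrained GPUs, offload submodules between calls instead:
# pipe.enable_model_cpu_offload()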

Input Specifications

Task and Content Prompts

The task prompt is required and describes what each row demonstrates, referring to the images in a row with placeholders such as [IMAGE1] and [IMAGE2]. The content prompt is a caption of the target image; pass an empty string when no description is needed (e.g., for edge detection).

Image Input Format

The image argument is a list of rows. Every row except the last is a complete in-context example; the last row is the query, with None in the position of the image to be generated.

Resolution Control

The generation stage runs at the resolution the checkpoint was trained with (the resolution argument, e.g., 384 for VisualClozePipeline-384). The final output size and the denoising strength of the upsampling stage are controlled with upsampling_width, upsampling_height, and upsampling_strength.
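To illustrate this layout, the sketch below builds the nested list for a hypothetical two-column task; the blank PIL images are placeholders for real inputs, not part of the API:

from PIL import Image

# Placeholder images purely for illustration.
example_condition = Image.new("RGB", (384, 384))
example_target = Image.new("RGB", (384, 384))
query_condition = Image.new("RGB", (384, 384))

image = [
    [example_condition, example_target],  # in-context example row: fully specified
    [query_condition, None],              # query row: the None slot is generated
]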

Examples

For comprehensive examples covering a wide range of tasks, please refer to the Online Demo and GitHub Repository. Below are simple examples for three cases: mask-to-image conversion, edge detection, and subject-driven generation.

Example for mask2image

import torch
from diffusers import VisualClozePipeline
from diffusers.utils import load_image

pipe = VisualClozePipeline.from_pretrained("VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# The first row is an in-context example; the last row is the query,
# where None marks the image to be generated.
image_paths = [
    # In-context example: [mask, image]
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_mask.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_image.jpg'),
    ],
    # Query: the target image is None
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_query_mask.jpg'),
        None,
    ],
]

task_prompt = "In each row, a logical task is demonstrated to achieve [IMAGE2] an aesthetically pleasing photograph based on [IMAGE1] sam 2-generated masks with rich color coding." content_prompt = """Majestic photo of a golden eagle perched on a rocky outcrop in a mountainous landscape. The eagle is positioned in the right foreground, facing left, with its sharp beak and keen eyes prominently visible. Its plumage is a mix of dark brown and golden hues, with intricate feather details. The background features a soft-focus view of snow-capped mountains under a cloudy sky, creating a serene and grandiose atmosphere. The foreground includes rugged rocks and patches of green moss. Photorealistic, medium depth of field, soft natural lighting, cool color palette, high contrast, sharp focus on the eagle, blurred background, tranquil, majestic, wildlife photography."""

image_result = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    upsampling_width=1344,
    upsampling_height=768,
    upsampling_strength=0.4,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0][0]

image_result.save("visualcloze.png")

Example for edge detection

import torch
from diffusers import VisualClozePipeline
from diffusers.utils import load_image

pipe = VisualClozePipeline.from_pretrained("VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Two in-context examples followed by the query row.
image_paths = [
    # In-context examples: [image, edge map]
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_edgedetection_incontext-example-1_image.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_edgedetection_incontext-example-1_edge.jpg'),
    ],
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_edgedetection_incontext-example-2_image.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_edgedetection_incontext-example-2_edge.jpg'),
    ],
    # Query: the edge map is None
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_edgedetection_query_image.jpg'),
        None,
    ],
]

task_prompt = "Each row illustrates a pathway from [IMAGE1] a sharp and beautifully composed photograph to [IMAGE2] edge map with natural well-connected outlines using a clear logical task." content_prompt = ""

image_result = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    upsampling_width=864,
    upsampling_height=1152,
    upsampling_strength=0.4,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0][0]

image_result.save("visualcloze.png")

Example for subject-driven generation

import torch
from diffusers import VisualClozePipeline
from diffusers.utils import load_image

pipe = VisualClozePipeline.from_pretrained("VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16)
pipe.to("cuda")

# Two in-context examples followed by the query row.
image_paths = [
    # In-context examples: [reference, depth map, target image]
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-1_reference.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-1_depth.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-1_image.jpg'),
    ],
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-2_reference.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-2_depth.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_incontext-example-2_image.jpg'),
    ],
    # Query: the target image is None
    [
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_query_reference.jpg'),
        load_image('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_subjectdriven_query_depth.jpg'),
        None,
    ],
]

task_prompt = """Each row describes a process that begins with [IMAGE1] an image containing the key object, [IMAGE2] depth map revealing gray-toned spatial layers and results in [IMAGE3] an image with artistic qualitya high-quality image with exceptional detail.""" content_prompt = """A vintage porcelain collector's item. Beneath a blossoming cherry tree in early spring, this treasure is photographed up close, with soft pink petals drifting through the air and vibrant blossoms framing the scene."""

image_result = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    upsampling_width=1024,
    upsampling_height=1024,
    upsampling_strength=0.2,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0)
).images[0][0]

image_result.save("visualcloze.png")

Utilize each pipeline independently

import torch
from diffusers import VisualClozeGenerationPipeline, FluxFillPipeline as VisualClozeUpsamplingPipeline
from diffusers.utils import load_image
from PIL import Image

pipe = VisualClozeGenerationPipeline.from_pretrained(
    "VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

image_paths = [
    # In-context example: [mask, image]
    [
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_mask.jpg"
        ),
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_image.jpg"
        ),
    ],
    # Query: the target image is None
    [
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_query_mask.jpg"
        ),
        None,
    ],
]

task_prompt = "In each row, a logical task is demonstrated to achieve [IMAGE2] an aesthetically pleasing photograph based on [IMAGE1] sam 2-generated masks with rich color coding."
content_prompt = "Majestic photo of a golden eagle perched on a rocky outcrop in a mountainous landscape. The eagle is positioned in the right foreground, facing left, with its sharp beak and keen eyes prominently visible. Its plumage is a mix of dark brown and golden hues, with intricate feather details. The background features a soft-focus view of snow-capped mountains under a cloudy sky, creating a serene and grandiose atmosphere. The foreground includes rugged rocks and patches of green moss. Photorealistic, medium depth of field, soft natural lighting, cool color palette, high contrast, sharp focus on the eagle, blurred background, tranquil, majestic, wildlife photography."

image = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0][0]

pipe_upsample = VisualClozeUpsamplingPipeline.from_pipe(pipe)
pipe_upsample.to("cuda")

# A fully white mask so that the upsampler re-denoises the entire image.
mask_image = Image.new("RGB", image.size, (255, 255, 255))

image = pipe_upsample(
    image=image,
    mask_image=mask_image,
    prompt=content_prompt,
    width=1344,
    height=768,
    strength=0.4,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]

image.save("visualcloze.png")
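Because task_prompt and content_prompt accept either a string or a list of strings (see the __call__ signatures below), batched queries appear to be possible. The following is only a hedged sketch under the assumption that the image argument then takes one nested image list per prompt; it is not a documented example:

# Hypothetical batched call: one task/content prompt and one image grid per query.
images = pipe(
    task_prompt=[task_prompt, task_prompt],
    content_prompt=[content_prompt, content_prompt],
    image=[image_paths, image_paths],
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images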

VisualClozePipeline

class diffusers.VisualClozePipeline


( scheduler: FlowMatchEulerDiscreteScheduler vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer text_encoder_2: T5EncoderModel tokenizer_2: T5TokenizerFast transformer: FluxTransformer2DModel resolution: int = 384 )


The VisualCloze pipeline for image generation with visual context. Reference: https://github.com/lzyhha/VisualCloze/tree/main. This pipeline is designed to generate images based on visual in-context examples.

__call__


( task_prompt: typing.Union[str, typing.List[str]] = None content_prompt: typing.Union[str, typing.List[str]] = None image: typing.Optional[torch.FloatTensor] = None upsampling_height: typing.Optional[int] = None upsampling_width: typing.Optional[int] = None num_inference_steps: int = 50 sigmas: typing.Optional[typing.List[float]] = None guidance_scale: float = 30.0 num_images_per_prompt: typing.Optional[int] = 1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None pooled_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True joint_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None callback_on_step_end: typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'] max_sequence_length: int = 512 upsampling_strength: float = 1.0 ) → ~pipelines.flux.FluxPipelineOutput or tuple


Returns

~pipelines.flux.FluxPipelineOutput or tuple

~pipelines.flux.FluxPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the VisualCloze pipeline for generation.
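Note that the returned images field is nested, which is why the examples on this page index it as .images[0][0]. A short indexing sketch, reusing pipe, task_prompt, content_prompt, and image_paths from the example below (the per-query/per-sample nesting is inferred from those examples, not spelled out by the API reference):

result = pipe(task_prompt=task_prompt, content_prompt=content_prompt, image=image_paths)
# result.images holds one list per query; each inner list holds the
# num_images_per_prompt images generated for that query.
first_image = result.images[0][0]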

Examples:

import torch
from diffusers import VisualClozePipeline
from diffusers.utils import load_image

image_paths = [
    [
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_mask.jpg"
        ),
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_image.jpg"
        ),
    ],
    [
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_query_mask.jpg"
        ),
        None,
    ],
]
task_prompt = "In each row, a logical task is demonstrated to achieve [IMAGE2] an aesthetically pleasing photograph based on [IMAGE1] sam 2-generated masks with rich color coding."
content_prompt = "Majestic photo of a golden eagle perched on a rocky outcrop in a mountainous landscape. The eagle is positioned in the right foreground, facing left, with its sharp beak and keen eyes prominently visible. Its plumage is a mix of dark brown and golden hues, with intricate feather details. The background features a soft-focus view of snow-capped mountains under a cloudy sky, creating a serene and grandiose atmosphere. The foreground includes rugged rocks and patches of green moss. Photorealistic, medium depth of field, soft natural lighting, cool color palette, high contrast, sharp focus on the eagle, blurred background, tranquil, majestic, wildlife photography."
pipe = VisualClozePipeline.from_pretrained(
    "VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

image = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    upsampling_width=1344,
    upsampling_height=768,
    upsampling_strength=0.4,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0][0]
image.save("visualcloze.png")

VisualClozeGenerationPipeline

class diffusers.VisualClozeGenerationPipeline


( scheduler: FlowMatchEulerDiscreteScheduler vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer text_encoder_2: T5EncoderModel tokenizer_2: T5TokenizerFast transformer: FluxTransformer2DModel resolution: int = 384 )


The VisualCloze pipeline for image generation with visual context. Reference: https://github.com/lzyhha/VisualCloze/tree/main. This pipeline is designed to generate images based on visual in-context examples.

__call__


( task_prompt: typing.Union[str, typing.List[str]] = None content_prompt: typing.Union[str, typing.List[str]] = None image: typing.Optional[torch.FloatTensor] = None num_inference_steps: int = 50 sigmas: typing.Optional[typing.List[float]] = None guidance_scale: float = 30.0 num_images_per_prompt: typing.Optional[int] = 1 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None pooled_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True joint_attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None callback_on_step_end: typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'] max_sequence_length: int = 512 ) → ~pipelines.flux.FluxPipelineOutput or tuple


Returns

~pipelines.flux.FluxPipelineOutput or tuple

~pipelines.flux.FluxPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the VisualCloze pipeline for generation.

Examples:

import torch
from diffusers import VisualClozeGenerationPipeline, FluxFillPipeline as VisualClozeUpsamplingPipeline
from diffusers.utils import load_image
from PIL import Image

image_paths = [
    [
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_mask.jpg"
        ),
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_incontext-example-1_image.jpg"
        ),
    ],
    [
        load_image(
            "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/visualcloze/visualcloze_mask2image_query_mask.jpg"
        ),
        None,
    ],
]
task_prompt = "In each row, a logical task is demonstrated to achieve [IMAGE2] an aesthetically pleasing photograph based on [IMAGE1] sam 2-generated masks with rich color coding."
content_prompt = "Majestic photo of a golden eagle perched on a rocky outcrop in a mountainous landscape. The eagle is positioned in the right foreground, facing left, with its sharp beak and keen eyes prominently visible. Its plumage is a mix of dark brown and golden hues, with intricate feather details. The background features a soft-focus view of snow-capped mountains under a cloudy sky, creating a serene and grandiose atmosphere. The foreground includes rugged rocks and patches of green moss. Photorealistic, medium depth of field, soft natural lighting, cool color palette, high contrast, sharp focus on the eagle, blurred background, tranquil, majestic, wildlife photography."
pipe = VisualClozeGenerationPipeline.from_pretrained(
    "VisualCloze/VisualClozePipeline-384", resolution=384, torch_dtype=torch.bfloat16
)
pipe.to("cuda")

image = pipe(
    task_prompt=task_prompt,
    content_prompt=content_prompt,
    image=image_paths,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0][0]

pipe_upsample = VisualClozeUpsamplingPipeline.from_pipe(pipe)
pipe_upsample.to("cuda")

# A fully white mask so that the upsampler re-denoises the entire image.
mask_image = Image.new("RGB", image.size, (255, 255, 255))

image = pipe_upsample(
    image=image,
    mask_image=mask_image,
    prompt=content_prompt,
    width=1344,
    height=768,
    strength=0.4,
    guidance_scale=30,
    num_inference_steps=30,
    max_sequence_length=512,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]

image.save("visualcloze.png")

disable_vae_slicing

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_vae_tiling

Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_vae_slicing

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_vae_tiling

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
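As a usage sketch, these methods can be toggled on a loaded pipeline around inference; they trade speed for memory:

# Reduce VAE memory usage before decoding large batches or images.
pipe.enable_vae_slicing()
pipe.enable_vae_tiling()

# ... run inference ...

# Restore single-step decoding afterwards.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()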

encode_prompt


( layout_prompt: typing.Union[str, typing.List[str]] task_prompt: typing.Union[str, typing.List[str]] content_prompt: typing.Union[str, typing.List[str]] device: typing.Optional[torch.device] = None num_images_per_prompt: int = 1 prompt_embeds: typing.Optional[torch.FloatTensor] = None pooled_prompt_embeds: typing.Optional[torch.FloatTensor] = None max_sequence_length: int = 512 lora_scale: typing.Optional[float] = None )

