HiDreamImage

HiDream-I1 by HiDream.ai

Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
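As a quick illustration of component reuse, the sketch below builds a second pipeline from modules that are already in memory. This is an assumption based on the generic `DiffusionPipeline.components` pattern from the linked guide, not HiDream-specific documentation:

```python
# Hypothetical sketch: `pipe` is a HiDreamImagePipeline loaded as in the
# example further down this page. `pipe.components` returns the loaded
# modules as a dict, so a second pipeline can reuse them without re-loading.
from diffusers import HiDreamImagePipeline

pipe_copy = HiDreamImagePipeline(**pipe.components)
```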

Available models

The following models are available for the HiDreamImagePipeline pipeline:

| Model name | Description |
|---|---|
| HiDream-ai/HiDream-I1-Full | - |
| HiDream-ai/HiDream-I1-Dev | - |
| HiDream-ai/HiDream-I1-Fast | - |
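Any of these checkpoints can be substituted into the loading example shown later on this page, as in the following minimal sketch. Note that HiDream-I1-Dev and HiDream-I1-Fast are distilled variants typically run with fewer inference steps; that detail comes from their model cards and is an assumption here, not something this page documents:

```python
import torch
from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
from diffusers import HiDreamImagePipeline

# The Llama-3.1 text encoder is loaded separately, as in the full example below.
tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
text_encoder_4 = LlamaForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct", torch_dtype=torch.bfloat16
)

pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Dev",  # or "HiDream-ai/HiDream-I1-Full" / "HiDream-ai/HiDream-I1-Fast"
    tokenizer_4=tokenizer_4,
    text_encoder_4=text_encoder_4,
    torch_dtype=torch.bfloat16,
)
```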

HiDreamImagePipeline

class diffusers.HiDreamImagePipeline


```
(
    scheduler: FlowMatchEulerDiscreteScheduler,
    vae: AutoencoderKL,
    text_encoder: CLIPTextModelWithProjection,
    tokenizer: CLIPTokenizer,
    text_encoder_2: CLIPTextModelWithProjection,
    tokenizer_2: CLIPTokenizer,
    text_encoder_3: T5EncoderModel,
    tokenizer_3: T5Tokenizer,
    text_encoder_4: LlamaForCausalLM,
    tokenizer_4: PreTrainedTokenizerFast,
    transformer: HiDreamImageTransformer2DModel,
)
```

__call__


```
(
    prompt: typing.Union[str, typing.List[str]] = None,
    prompt_2: typing.Union[str, typing.List[str], NoneType] = None,
    prompt_3: typing.Union[str, typing.List[str], NoneType] = None,
    prompt_4: typing.Union[str, typing.List[str], NoneType] = None,
    height: typing.Optional[int] = None,
    width: typing.Optional[int] = None,
    num_inference_steps: int = 50,
    sigmas: typing.Optional[typing.List[float]] = None,
    guidance_scale: float = 5.0,
    negative_prompt: typing.Union[str, typing.List[str], NoneType] = None,
    negative_prompt_2: typing.Union[str, typing.List[str], NoneType] = None,
    negative_prompt_3: typing.Union[str, typing.List[str], NoneType] = None,
    negative_prompt_4: typing.Union[str, typing.List[str], NoneType] = None,
    num_images_per_prompt: typing.Optional[int] = 1,
    generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None,
    latents: typing.Optional[torch.FloatTensor] = None,
    prompt_embeds_t5: typing.Optional[torch.FloatTensor] = None,
    prompt_embeds_llama3: typing.Optional[torch.FloatTensor] = None,
    negative_prompt_embeds_t5: typing.Optional[torch.FloatTensor] = None,
    negative_prompt_embeds_llama3: typing.Optional[torch.FloatTensor] = None,
    pooled_prompt_embeds: typing.Optional[torch.FloatTensor] = None,
    negative_pooled_prompt_embeds: typing.Optional[torch.FloatTensor] = None,
    output_type: typing.Optional[str] = 'pil',
    return_dict: bool = True,
    attention_kwargs: typing.Optional[typing.Dict[str, typing.Any]] = None,
    callback_on_step_end: typing.Optional[typing.Callable[[int, int, typing.Dict], NoneType]] = None,
    callback_on_step_end_tensor_inputs: typing.List[str] = ['latents'],
    max_sequence_length: int = 128,
    **kwargs,
) → ~pipelines.hidream_image.HiDreamImagePipelineOutput or tuple
```


Returns

~pipelines.hidream_image.HiDreamImagePipelineOutput or tuple

~pipelines.hidream_image.HiDreamImagePipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.

Function invoked when calling the pipeline for generation.

Examples:

```python
import torch
from transformers import PreTrainedTokenizerFast, LlamaForCausalLM
from diffusers import UniPCMultistepScheduler, HiDreamImagePipeline

tokenizer_4 = PreTrainedTokenizerFast.from_pretrained("meta-llama/Meta-Llama-3.1-8B-Instruct")
text_encoder_4 = LlamaForCausalLM.from_pretrained(
    "meta-llama/Meta-Llama-3.1-8B-Instruct",
    output_hidden_states=True,
    output_attentions=True,
    torch_dtype=torch.bfloat16,
)

pipe = HiDreamImagePipeline.from_pretrained(
    "HiDream-ai/HiDream-I1-Full",
    tokenizer_4=tokenizer_4,
    text_encoder_4=text_encoder_4,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

image = pipe(
    'A cat holding a sign that says "Hi-Dreams.ai".',
    height=1024,
    width=1024,
    guidance_scale=5.0,
    num_inference_steps=50,
    generator=torch.Generator("cuda").manual_seed(0),
).images[0]
image.save("output.png")
```
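The `__call__` signature above also accepts a separate prompt per text encoder. The sketch below reuses the `pipe` from the example above; the mapping of prompt_2/prompt_3/prompt_4 to the second CLIP, T5, and Llama encoders mirrors the constructor order and is an assumption, not something this page states:

```python
# Hypothetical sketch: each prompt_* argument is assumed to feed the encoder
# at the matching position in the constructor (CLIP, CLIP, T5, Llama);
# omitted ones are assumed to fall back to `prompt`.
image = pipe(
    prompt="A cat holding a sign",                              # first CLIP encoder
    prompt_2="A cat holding a sign",                            # second CLIP encoder
    prompt_3="A photo of a cat holding a small wooden sign",    # T5 encoder
    prompt_4='A cat holding a sign that reads "Hi-Dreams.ai"',  # Llama encoder
    negative_prompt="blurry, low quality",
    height=1024,
    width=1024,
    guidance_scale=5.0,
    num_inference_steps=50,
).images[0]
```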

disable_vae_slicing

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_vae_tiling

Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_vae_slicing

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_vae_tiling

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.
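A minimal usage sketch of the four toggles above, assuming the `pipe` from the earlier example (the 2048px resolution is only illustrative):

```python
# Trade speed for memory when decoding large batches or large images.
pipe.enable_vae_slicing()  # decode the batch one slice at a time
pipe.enable_vae_tiling()   # decode/encode the image tile by tile

image = pipe("A lighthouse at dusk", height=2048, width=2048).images[0]

# Revert to single-pass decoding when memory pressure is no longer a concern.
pipe.disable_vae_slicing()
pipe.disable_vae_tiling()
```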

HiDreamImagePipelineOutput

class diffusers.pipelines.hidream_image.pipeline_output.HiDreamImagePipelineOutput


```
( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] )
```


Output class for HiDreamImage pipelines.
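A short sketch of how this output is consumed, assuming the `pipe` from the earlier example; it follows directly from the `return_dict` behavior described above:

```python
# return_dict=True (the default) yields a HiDreamImagePipelineOutput.
output = pipe("A cat holding a sign")
first_image = output.images[0]

# return_dict=False yields a plain tuple whose first element is the image list.
(images,) = pipe("A cat holding a sign", return_dict=False)
first_image = images[0]
```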
