feat: Add QwenImageEditPlus to support future feature upgrades by naykun · Pull Request #12357 · huggingface/diffusers

Conversation


@naykun

Introduces QwenImageEditPlusPipeline to support an upcoming version upgrade.

cc @yiyixuxu @sayakpaul

@naykun

sayakpaul


Thanks a lot for working on this! My comments are minor in nature except for the comment on using autocast for VAE encoding.

We can actually also pass a dict to torch_dtype when initializing the pipeline like so:

# Keys are component names (e.g. "vae"); anything not listed falls back to "default".
torch_dtype_dict = {specified_key: torch.bfloat16, "default": torch.float16}
loaded_pipe = self.pipeline_class.from_pretrained(tmpdirname, torch_dtype=torch_dtype_dict)
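
Outside the test harness, a minimal sketch of the same idea (the checkpoint id and the "vae" key are illustrative, assuming the pipeline exposes a vae component):

import torch
from diffusers import DiffusionPipeline

# Per-component dtypes: keys are component names, "default" covers everything else.
pipe = DiffusionPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-Plus",  # illustrative checkpoint id
    torch_dtype={"vae": torch.float32, "default": torch.bfloat16},
)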

LMK if anything is unclear.

def _encode_vae_image(self, image: torch.Tensor, generator: torch.Generator):
    origin_dtype = image.dtype
    with torch.autocast(image.device.type, torch.float32):


Hmm, this is a bit of an anti-pattern. We avoid using autocast within pipeline code as much as possible. If the goal is to always ensure that VAE encoding takes place in torch.float32, we could instruct the user to do so, like we do in
https://huggingface.co/docs/diffusers/main/en/api/pipelines/wan
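
For reference, that guidance amounts to upcasting the VAE manually before inference; a minimal sketch (the checkpoint id is illustrative):

import torch
from diffusers import QwenImageEditPlusPipeline

pipe = QwenImageEditPlusPipeline.from_pretrained(
    "Qwen/Qwen-Image-Edit-Plus",  # illustrative checkpoint id
    torch_dtype=torch.bfloat16,
)
# Keep the VAE in float32 for numerically stable encoding, per the docs note.
pipe.vae.to(torch.float32)
pipe.to("cuda")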


I was wondering if we might consider making FP32 VAE encoding the default, just to help users who may not catch the note in the documentation.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@naykun

Hi @sayakpaul,

Thank you for your review! I've updated the code to include the reuse statements as suggested, with the exception of two functions that behave differently by design.

Regarding the autocast issue: if we use torch_dtype_dict during checkpoint saving, would that allow users to encode images in FP32 transparently?

@sayakpaul

Regarding the autocast issue: if we use torch_dtype_dict during checkpoint saving, would that allow users to encode images in FP32 transparently?

If we save using save_pretrained() with torch_dtype as a dict, it should work. However, to enforce this for users in a transparent way, we could first check whether vae.dtype is torch.float32 inside the encode_vae_image() function and then raise an error/warning educating the user about the torch_dtype dict option. WDYT?
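
A minimal sketch of that check, assuming it lives at the top of _encode_vae_image() (exact wording and the error-vs-warning choice are up for discussion):

import torch
from diffusers.utils import logging

logger = logging.get_logger(__name__)

def _encode_vae_image(self, image: torch.Tensor, generator: torch.Generator):
    if self.vae.dtype != torch.float32:
        logger.warning(
            "The VAE is running in %s. For better numerical stability, load the "
            "pipeline with torch_dtype={'vae': torch.float32, 'default': torch.bfloat16}.",
            self.vae.dtype,
        )
    # ... existing encoding logic ...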

@yiyixuxu

yiyixuxu

@naykun

@sayakpaul @yiyixuxu
Thank you both for your thoughtful feedback. After re-evaluating the trade-offs, particularly the benefits of autocast versus its potential impact on efficiency and compatibility with the existing ecosystem, we've decided not to enforce this change at this time. By default, we'll continue using bf16 for VAE encoding.

All other concerns have been addressed. Please let me know if you have any further suggestions or changes you’d like us to consider.

sayakpaul


All good from my end. I think we can also add an entry about this in the docs?

I can also help add a test in a separate PR if you want.

Off to @yiyixuxu for the final approval.

# QwenImage latents are turned into 2x2 patches and packed. This means the latent width and height have to be
# divisible by the patch size, so the VAE scale factor is multiplied by the patch size to account for this.
self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor * 2)
self.vl_processor = processor
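
As an aside, a quick worked example of the divisibility constraint (assuming the usual VAE scale factor of 8):

vae_scale_factor = 8                      # typical spatial downsampling (assumption)
patch_size = 2                            # 2x2 latent patches
multiple = vae_scale_factor * patch_size  # 16

height = width = 1024
assert height % multiple == 0 and width % multiple == 0
latent_side = height // vae_scale_factor  # 128, cleanly divisible by patch_size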


(nit): we're not using vl_processor anywhere in the code.


nice catch 🤗

@naykun

All good from my end. I think we can also add an entry about this in the docs?

I can also help add a test in a separate PR if you want.

Thank you! That would be so great 😊