[training] fixes to the quantization training script and add AdEMAMix optimizer as an option by sayakpaul · Pull Request #9806 · huggingface/diffusers


Conversation (13) · Commits (4) · Checks (11) · Files changed

Conversation


sayakpaul

@@ -1059,7 +1076,7 @@ def get_sigmas(timesteps, n_dim=4, dtype=torch.float32):
         )
         # handle guidance
-        if transformer.config.guidance_embeds:
+        if unwrap_model(transformer).config.guidance_embeds:


So that things are compatible with DeepSpeed.
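For context, this follows the usual pattern in the diffusers training scripts; the helper body below is a sketch of that convention, not code lifted from this PR. Under DeepSpeed (or DDP / torch.compile), `accelerator.prepare` wraps the transformer, so attributes such as `config` have to be read from the unwrapped module:

from diffusers.utils.torch_utils import is_compiled_module

def unwrap_model(model):
    # `accelerator` is assumed to be the script's global Accelerator instance.
    # Strip the wrapper added by accelerator.prepare() (DeepSpeedEngine, DDP, ...).
    model = accelerator.unwrap_model(model)
    # torch.compile wraps modules in an OptimizedModule; recover the original if so.
    model = model._orig_mod if is_compiled_module(model) else model
    return model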

sayakpaul

Comment on lines +1040 to 1047

     vae_scale_factor = 2 ** (len(vae_config_block_out_channels) - 1)
     latent_image_ids = FluxPipeline._prepare_latent_image_ids(
         model_input.shape[0],
-        model_input.shape[2],
-        model_input.shape[3],
+        model_input.shape[2] // 2,
+        model_input.shape[3] // 2,
         accelerator.device,
         weight_dtype,


Follows what we do in the Flux LoRA scripts.
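To illustrate why the halved dimensions line up (a sketch based on my reading of Flux's latent packing, not code from this PR): the transformer consumes latents packed into 2x2 patches, so the positional-ID grid from `_prepare_latent_image_ids` has to cover `H // 2` by `W // 2` positions.

# Assumption: FluxPipeline._pack_latents reshapes (B, C, H, W) latents into
# (B, (H // 2) * (W // 2), C * 4) tokens, i.e. one token per 2x2 latent patch,
# which matches the halved ID grid above.
packed_input = FluxPipeline._pack_latents(
    model_input,
    batch_size=model_input.shape[0],
    num_channels_latents=model_input.shape[1],
    height=model_input.shape[2],
    width=model_input.shape[3],
)
assert packed_input.shape[1] == (model_input.shape[2] // 2) * (model_input.shape[3] // 2)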

sayakpaul

Comment on lines +1102 to +1103

height=model_input.shape[2] * vae_scale_factor,
width=model_input.shape[3] * vae_scale_factor,


Same as above.
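As a concrete sanity check with assumed (but typical) numbers rather than values taken from this script:

# The standard Flux VAE config has 4 block_out_channels entries,
# so vae_scale_factor = 2 ** (4 - 1) = 8.
vae_scale_factor = 2 ** (4 - 1)           # 8
latent_h = 1024 // vae_scale_factor       # a 1024-px image yields 128 latent rows
pixel_h = latent_h * vae_scale_factor     # 128 * 8 = 1024, the height passed above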

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

linoytsaban


Cool! Did you get a chance to play with AdEMAMix? Should we consider adding it to the other scripts as well?

@sayakpaul

Cool! Did you get a chance to play with AdEMAMix? Should we consider adding it to the other scripts as well?

Testing the memory requirement as we speak. Will report back.

linoytsaban

else:
    optimizer_class = bnb.optim.AdEMAMix
    optimizer = optimizer_class(params_to_optimize)


Should we support betas and weight_decay here? We could use the existing args like we did for prodigy, i.e.

optimizer = optimizer_class(
    params_to_optimize,
    betas=(args.adam_beta1, args.adam_beta2),
    weight_decay=args.adam_weight_decay,
)
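For illustration, a hypothetical wiring that reuses only the args the script already exposes (not what the PR currently does). One caveat: AdEMAMix tracks an additional slow EMA, so the bitsandbytes implementation may expect a three-element `betas` tuple, which is worth checking before forwarding `(args.adam_beta1, args.adam_beta2)` directly.

# Hypothetical sketch: keep AdEMAMix's own beta/alpha defaults and only
# forward the learning rate and weight decay the script already defines.
optimizer = optimizer_class(
    params_to_optimize,
    lr=args.learning_rate,
    weight_decay=args.adam_weight_decay,
)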


@sayakpaul

Umm, I deliberately didn't, to keep the separation of concerns very clear. We could maybe revisit this if the community finds the optimizer worth pursuing.


a-r-r-o-w pushed a commit that referenced this pull request on Nov 1, 2024

… optimizer as an option (#9806)

sayakpaul added a commit that referenced this pull request on Dec 23, 2024

… optimizer as an option (#9806)