fix norm not training in train_control_lora_flux.py by Luo-Yihang · Pull Request #11832 · huggingface/diffusers
What does this PR do?
This PR fixes an issue in examples/flux-control/train_control_lora_flux.py where, with train_norm_layers set, the normalization layers were unintentionally frozen (requires_grad = False) after calling flux_transformer.add_adapter(transformer_lora_config), so they were not being trained as expected.
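For context, here is a minimal sketch of the kind of fix involved: re-enabling gradients on the norm-layer parameters after the adapter is added. The helper name, the norm-layer key names, and the usage snippet are illustrative assumptions, not the script's actual code.

```python
import torch.nn as nn

# Illustrative only: the key names below are assumptions for this sketch,
# not necessarily the exact substrings matched in train_control_lora_flux.py.
NORM_LAYER_KEYS = ("norm_q", "norm_k", "norm_added_q", "norm_added_k")


def unfreeze_norm_layers(model: nn.Module, norm_keys=NORM_LAYER_KEYS) -> None:
    """Re-enable gradients on normalization-layer parameters.

    Must be called *after* add_adapter(...): adding the LoRA adapter freezes
    the base-model parameters, including the norm layers that
    --train_norm_layers is supposed to keep trainable.
    """
    for name, param in model.named_parameters():
        if any(key in name for key in norm_keys):
            param.requires_grad_(True)


# Intended usage inside the training script (sketch):
# flux_transformer.add_adapter(transformer_lora_config)
# if args.train_norm_layers:
#     unfreeze_norm_layers(flux_transformer)
```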
@sayakpaul, I'd appreciate it if you could take a look.