[LoRA] fix: lora unloading when using expanded Flux LoRAs. by sayakpaul · Pull Request #10397 · huggingface/diffusers
What does this PR do?
#10206 added support for properly unloading Control LoRA weights from Flux and allowing the pipeline to be used with from_pipe().
However, this led to the following problem. Say we do:
```python
...
pipeline.load_lora_weights("black-forest-labs/FLUX.1-Canny-dev-lora")
pipeline.fuse_lora()
pipeline.unload_lora_weights()
...
```
This would revert the transformer config back to the original (in_channels=64), causing this line to output 8 (whereas it should be 16). This is fine for the use cases #10206 served, but it breaks the use case above.
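For concreteness, a minimal repro sketch of the issue (assuming FluxControlPipeline and the FLUX.1-dev base checkpoint; the // 8 computation mirrors how the pipeline derives the latent channel count from the transformer config):

```python
import torch
from diffusers import FluxControlPipeline

# Assumed setup: Control LoRA loaded on top of the FLUX.1-dev base checkpoint.
pipe = FluxControlPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

pipe.load_lora_weights("black-forest-labs/FLUX.1-Canny-dev-lora")  # expands in_channels 64 -> 128
pipe.fuse_lora()
pipe.unload_lora_weights()

# Before this fix, unloading reverted the config to in_channels=64,
# so the latent channel count came out as 64 // 8 == 8 instead of 16,
# breaking inference with the fused Control LoRA.
print(pipe.transformer.config.in_channels // 8)
```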
This went unnoticed because we didn't have a proper test for it and I didn't run the Control LoRA integration tests. Sorry about that.
This PR fixes this. To do that, I am adding a reset_to_overwritten_params flag (not fixated on the name) to unload_lora_weights() in FluxLoraLoaderMixin, and I suggest adding a note about it in the docs.
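Usage-wise, the intended behavior would look roughly like this (the flag name is the one proposed in this PR and may still change):

```python
# Default: keep the expanded transformer (in_channels stays 128), so the
# fused Control LoRA remains usable after the adapter weights are unloaded.
pipe.unload_lora_weights()

# Opt-in: restore the transformer to the original, overwritten config
# (in_channels=64), matching the behavior #10206 introduced for from_pipe().
pipe.unload_lora_weights(reset_to_overwritten_params=True)
```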