[Discussion] allow new merging methods from peft
We rely on `peft` to supercharge our support for all things LoRA: https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference.
Currently, it's possible to do multi-LoRA inference seamlessly using `peft`:
https://huggingface.co/docs/diffusers/main/en/tutorials/using_peft_for_inference#combine-multiple-adapters
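For reference, a minimal sketch of that current workflow (the LoRA repo IDs and adapter names below are placeholders):

```python
import torch
from diffusers import DiffusionPipeline

# Load a base pipeline and two LoRAs (repo IDs here are placeholders).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipe.load_lora_weights("some-user/toy-face-lora", adapter_name="toy")
pipe.load_lora_weights("some-user/pixel-art-lora", adapter_name="pixel")

# set_adapters() linearly combines the adapters with the given weights --
# currently the only merging behavior exposed.
pipe.set_adapters(["toy", "pixel"], adapter_weights=[0.7, 0.3])
image = pipe("a toy robot, pixel art style").images[0]
```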
However, this approach has the following limitations:
- It currently supports only one merging method for combining different LoRAs.
- It doesn't benefit from newer and more advanced merging methods from the community, which have the potential to improve generation quality even further.
G.O.A.T @pacman100 is adding new merging methods to `peft` in huggingface/peft#1364, supporting `TIES`, `DARE_LINEAR`, etc.
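These methods plug into `add_weighted_adapter()` via its `combination_type` argument. A minimal sketch, assuming `model` is a `PeftModel` that already has adapters `"a"` and `"b"` loaded:

```python
# Sketch of the API surface added in huggingface/peft#1364; assumes `model`
# is a PeftModel with adapters "a" and "b" already loaded.
model.add_weighted_adapter(
    adapters=["a", "b"],
    weights=[1.0, 1.0],
    adapter_name="merged",
    combination_type="ties",  # or "dare_linear", "dare_ties", ...
    density=0.5,              # fraction of parameters kept by TIES pruning
)
model.set_adapter("merged")
```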
Inspired by that, I quickly cooked up a Colab Notebook that:
- Converts a `diffusers` LoRA into a `PeftModel`. Otherwise, we won't be able to do what's described in the next point.
- Loads multiple such `PeftModel` LoRAs and combines them using the `add_weighted_adapter()` method from `peft`. The advantage of doing this is that `add_weighted_adapter()` supports all the new merging methods right off the bat (a rough sketch follows below).
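To make the flow concrete, here's a rough sketch of the workflow, assuming `unet` is the pipeline's UNet and the `diffusers`-format LoRA state dicts have already been renamed to `peft`'s key naming scheme (that renaming is the bulk of the conversion work and is elided here; the `LoraConfig` values are illustrative and must match the LoRAs being loaded):

```python
from peft import LoraConfig, get_peft_model, set_peft_model_state_dict

# Wrap the UNet as a PeftModel and register the first adapter.
config = LoraConfig(r=64, target_modules=["to_q", "to_k", "to_v", "to_out.0"])
peft_unet = get_peft_model(unet, config, adapter_name="lora_a")
set_peft_model_state_dict(peft_unet, lora_a_state_dict, adapter_name="lora_a")

# Register and populate the second adapter.
peft_unet.add_adapter("lora_b", config)
set_peft_model_state_dict(peft_unet, lora_b_state_dict, adapter_name="lora_b")

# Combine them with one of the new merging methods.
peft_unet.add_weighted_adapter(
    adapters=["lora_a", "lora_b"],
    weights=[1.0, 1.0],
    adapter_name="merged",
    combination_type="dare_linear",
    density=0.5,
)
peft_unet.set_adapter("merged")
```

The merged adapter then drives the wrapped UNet inside the pipeline for inference.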
Check #6892 (comment) for more results.
Limitations
- The notebook shows a PoC where the LoRAs only target the UNet component of the diffusion pipeline.
- The process of extracting the `PeftModel`s is quite involved.
Questions to discuss
- Are we okay with supporting new merging methods this way?
- Or should we expose a new argument in `set_adapters()` (`merge_method`) and support them natively within `diffusers`, possibly leveraging some utilities from `peft`? (A hypothetical sketch of this option follows below.)
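Purely as a hypothetical sketch of the second option (no `merge_method` argument exists in `diffusers` today), the user-facing API could look like:

```python
# Hypothetical: `merge_method` is NOT a real set_adapters() argument yet.
pipe.set_adapters(
    ["toy", "pixel"],
    adapter_weights=[1.0, 1.0],
    merge_method="ties",  # would be forwarded to peft's merging utilities
)
```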
Tagging everyone from the HF team who might be interested in chiming in:
@younesbelkada, @apolinario, @linoytsaban, @BenjaminBossan.
Folks from the community:
@okotaku, @asomoza, @fabiorigano.
Cc: @yiyixuxu for awareness.