fix xlabs FLUX lora conversion typo by Clement-Lelievre · Pull Request #9581 · huggingface/diffusers
Conversation
What does this PR do?
Typo fixing for XLab FLUX LoRA conversion to diffusers.
In its current version, this raises `TypeError: slice indices must be integers or None or have an __index__ method`, because `str.startswith` expects either a single str or a tuple of strs.
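The failure mode can be reproduced in plain Python. This is a minimal sketch; the key and prefix names below are illustrative, not the exact strings from the diffusers conversion code:

```python
key = "single_blocks.0.proj.weight"

# Buggy form: str.startswith treats a second positional argument
# as the start index of a slice, so passing another string raises
# the TypeError quoted above.
try:
    key.startswith("single_blocks", "double_blocks")
except TypeError as err:
    print(err)  # slice indices must be integers or None or have an __index__ method

# Fixed form: wrap the prefixes in a tuple so each is checked in turn.
print(key.startswith(("single_blocks", "double_blocks")))  # True
```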
(Unless I've missed an issue on this topic, I'm a bit surprised this typo has been here for over a month without anyone complaining)
Repro steps:
- instantiate a Flux pipeline
- get an XLabs LoRA safetensors file
- run `pipe.load_lora_weights(xlab_lora)`, passing the LoRA from step 2
Before submitting
- This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- Did you read the contributor guideline?
- Did you read our philosophy doc (important for complex PRs)?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- Did you write any new necessary tests?
Who can review?
Clement-Lelievre changed the title from "fix xlabs lora conversion typo" to "fix xlabs FLUX lora conversion typo"
Thank you for this. The reason it was not brought up is that we don't test with an XLabs LoRA that has single_block components:

`self.pipeline.load_lora_weights("XLabs-AI/flux-lora-collection", weight_name="disney_lora.safetensors")`
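A minimal sketch of why the broken branch went unnoticed in CI. The routing function and key names here are illustrative assumptions, not the actual diffusers conversion code; the point is that the buggy `startswith` call sits on a branch only reached by checkpoints containing single-block keys, which `disney_lora.safetensors` lacks:

```python
def route_key(key):
    # Illustrative routing: only keys entering the single-block
    # branch would have executed the broken startswith call.
    if key.startswith(("single_blocks",)):
        return "single-block branch"
    return "double-block branch"

# A LoRA shipping only double-block keys never touches the
# single-block branch, so the bug stays hidden.
keys = ["double_blocks.0.proj.weight", "double_blocks.1.proj.weight"]
print({route_key(k) for k in keys})  # {'double-block branch'}
```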
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@sayakpaul from a cursory glance from my phone it seems the failed test is unrelated to the diff of this PR
Failing test is completely unrelated.
@sayakpaul as a follow-up: using the same XLabs LoRA, the conversion now works; however, I now get the error `Adapter name(s) {'<my_adapter_name>'} not in the list of present adapters: set()`, raised here since this PR. Possibly because `get_list_adapters` fails to add a component?
Can you open a new issue with a reproducible snippet?
leisuzz pushed a commit to leisuzz/diffusers that referenced this pull request
sayakpaul pushed a commit that referenced this pull request
fix startswith syntax in xlabs lora conversion
Trigger CI