[Alpha-VLLM Team] Add Lumina-T2X to diffusers by PommesPeter · Pull Request #8652 · huggingface/diffusers
Conversation
What does this PR do?
Add Lumina-T2X to diffusers
Fixes #8652
Before submitting
- This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- Did you read the contributor guideline?
- Did you read our philosophy doc (important for complex PRs)?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- Did you write any new necessary tests?
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
PommesPeter changed the title from "Add Lumina-T2X to diffusers" to "[WIP] Add Lumina-T2X to diffusers"
Can you run `make fix-copies` again?
> Can you run `make fix-copies` again?

Ran it~
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
@PommesPeter the Lumina tests still fail. I think we need to update them now, because we made updates to the model.
> @PommesPeter the Lumina tests still fail. I think we need to update them now, because we made updates to the model.

Yep, we've fixed the problem in the test class.
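For context on why a model update forces a test update: diffusers pipeline tests commonly compare a small slice of the generated output against hard-coded expected values, so those expected slices must be regenerated whenever the model or checkpoint changes. A minimal sketch of that pattern (the helper name and the numeric values here are made up for illustration, not taken from the Lumina test class):

```python
import numpy as np

def check_image_slice(image, expected_slice, atol=1e-3):
    """Compare the first values of a flattened output against expected ones."""
    actual = np.asarray(image).flatten()[: len(expected_slice)]
    return bool(np.allclose(actual, expected_slice, atol=atol))

# Hypothetical regenerated values after the model update:
expected = np.array([0.17, 0.18, 0.22, 0.25])
image = np.array([0.17, 0.18, 0.22, 0.25])
print(check_image_slice(image, expected))  # → True
```

If the model changes but the expected slice is not regenerated, this comparison fails even though the pipeline itself is working.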
Not sure what the status of the PR is, since a simple load is still failing?

```shell
pip install git+https://github.com/PommesPeter/diffusers@lumina
```

```python
import torch
from diffusers import LuminaText2ImgPipeline

pipe = LuminaText2ImgPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16
).cuda()
```

```
ValueError: Cannot load <class 'diffusers.models.transformers.lumina_nextdit2d.LuminaNextDiT2DModel'> from /mnt/models/Diffusers/models--Alpha-VLLM--Lumina-Next-SFT-diffusers/snapshots/f82702c1b6a9bac3db9155edad1fd8dbf088cdf6/transformer because the following keys are missing:
...
```
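The ValueError above is `from_pretrained` reporting that the checkpoint on the Hub lacks parameter keys the model class expects, which happens when the model code and the uploaded weights drift out of sync. The key comparison behind that error can be sketched in plain Python (the parameter names below are hypothetical, not Lumina's real state dict keys):

```python
def find_missing_keys(expected_keys, checkpoint_keys):
    """Return model parameter names that are absent from the checkpoint."""
    return sorted(set(expected_keys) - set(checkpoint_keys))

# Hypothetical parameter names standing in for the real state dict:
expected = ["layers.0.attn.qkv.weight", "layers.0.attn.out.weight", "norm_final.weight"]
loaded = ["layers.0.attn.qkv.weight", "norm_final.weight"]  # stale upload

missing = find_missing_keys(expected, loaded)
if missing:
    print(f"cannot load model because the following keys are missing: {missing}")
# → cannot load model because the following keys are missing: ['layers.0.attn.out.weight']
```

Re-uploading a checkpoint whose keys match the current model code, as done below, resolves this class of error.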
> Not sure what the status of the PR is, since a simple load is still failing?
Sorry, we had network problems pushing our newest model to Hugging Face. I'm re-pushing the newest model for Lumina.
Hi @vladmandic, we have pushed our model to the Hugging Face repo. Could you re-pull the model repo and run the testing you want?
@PommesPeter
Can you check if you need to update the slow tests, since the checkpoints have been updated a couple of times?
I will merge it tomorrow once the slow tests are updated.
> @PommesPeter can you check if you need to update the slow tests? since the checkpoints have been updated a couple of times. I will merge it tomorrow once the slow tests are updated
Okay, I will fix the problem.
Wow, thank you for reviewing our PR!
sayakpaul pushed a commit that referenced this pull request
Co-authored-by: zhuole1025 <zhuole1025@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>