[Alpha-VLLM Team] Add Lumina-T2X to diffusers by PommesPeter · Pull Request #8652 · huggingface/diffusers



Conversation


PommesPeter

What does this PR do?

Add Lumina-T2X to diffusers

Fixes #8652

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
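
For readers skimming the thread: the pipeline this PR adds is exposed as LuminaText2ImgPipeline, and a minimal usage sketch looks like the snippet below. The prompt, the output filename, and the device handling are illustrative choices, not something specified by the PR itself.

import torch
from diffusers import LuminaText2ImgPipeline

# Load the Lumina-Next-SFT checkpoint discussed later in this thread (bfloat16, as used there).
pipe = LuminaText2ImgPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16
)
pipe.to("cuda")

# The prompt is purely illustrative.
image = pipe(prompt="A photo of a corgi astronaut on the moon").images[0]
image.save("lumina_sample.png")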

@PommesPeter changed the title Add Lumina-T2X to diffusers → [WIP] Add Lumina-T2X to diffusers

Jun 20, 2024

yiyixuxu

@yiyixuxu

Can you run make fix-copies again?

@PommesPeter

Can you run make fix-copies again?

Done, I've run it.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@yiyixuxu

@PommesPeter The Lumina tests still fail.
I think we need to update the Lumina tests now because we made updates to the model.
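
For reference, the Lumina tests mentioned here can be run locally with pytest; the test path below is an assumption about where the Lumina tests live in the diffusers repo layout, so adjust it to the actual location.

import pytest

# Run only the Lumina tests and stop at the first failure (path is an assumed location).
pytest.main(["-x", "tests/pipelines/lumina/"])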

@PommesPeter

@PommesPeter The Lumina tests still fail. I think we need to update the Lumina tests now because we made updates to the model.

Yep, we have fixed the problem in the test class.


@vladmandic

Not sure what the status of the PR is, since a simple load is still failing?

pip install git+https://github.com/PommesPeter/diffusers@lumina

import torch
from diffusers import LuminaText2ImgPipeline

pipe = LuminaText2ImgPipeline.from_pretrained("Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16).cuda()

ValueError: Cannot load <class 'diffusers.models.transformers.lumina_nextdit2d.LuminaNextDiT2DModel'> from /mnt/models/Diffusers/models--Alpha-VLLM--Lumina-Next-SFT-diffusers/snapshots/f82702c1b6a9bac3db9155edad1fd8dbf088cdf6/transformer because the following keys are missing:
...

@PommesPeter

Not sure what the status of the PR is, since a simple load is still failing?

pip install git+https://github.com/PommesPeter/diffusers@lumina

import torch
from diffusers import LuminaText2ImgPipeline

pipe = LuminaText2ImgPipeline.from_pretrained("Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16).cuda()

ValueError: Cannot load <class 'diffusers.models.transformers.lumina_nextdit2d.LuminaNextDiT2DModel'> from /mnt/models/Diffusers/models--Alpha-VLLM--Lumina-Next-SFT-diffusers/snapshots/f82702c1b6a9bac3db9155edad1fd8dbf088cdf6/transformer because the following keys are missing:
...

Sorry, we had a network problem while pushing our newest model to Hugging Face. I'm re-pushing the newest Lumina model now.
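
While the checkpoint is being re-pushed, anyone hitting the same error can check what the cached snapshot actually contains and compare it against the "missing keys" list above. The sketch below assumes the weight filename; diffusers usually stores diffusion_pytorch_model.safetensors, and the file may also be sharded.

from safetensors import safe_open

# Snapshot path taken from the error message above; the weight filename is an assumption.
weights = "/mnt/models/Diffusers/models--Alpha-VLLM--Lumina-Next-SFT-diffusers/snapshots/f82702c1b6a9bac3db9155edad1fd8dbf088cdf6/transformer/diffusion_pytorch_model.safetensors"

with safe_open(weights, framework="pt", device="cpu") as f:
    # Print a few stored parameter names to compare against the expected keys.
    print(list(f.keys())[:20])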

@PommesPeter


Hi @vladmandic,
we have pushed our model to the Hugging Face repo. Could you re-pull the model repo and run the tests you want?
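
To pick up the re-pushed weights without manually clearing the Hub cache, the cached snapshot can be bypassed with force_download, which from_pretrained forwards to the Hub download; everything else mirrors the earlier snippet.

import torch
from diffusers import LuminaText2ImgPipeline

# force_download=True re-fetches the repo files instead of reusing the stale cached snapshot.
pipe = LuminaText2ImgPipeline.from_pretrained(
    "Alpha-VLLM/Lumina-Next-SFT-diffusers",
    torch_dtype=torch.bfloat16,
    force_download=True,
).to("cuda")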

@yiyixuxu

@PommesPeter
Can you check whether you need to update the slow tests, since the checkpoints have been updated a couple of times?
I will merge it tomorrow once the slow tests are updated.

@PommesPeter

@PommesPeter Can you check whether you need to update the slow tests, since the checkpoints have been updated a couple of times? I will merge it tomorrow once the slow tests are updated.

Okay, I will fix the problem.
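
For context, a slow test for a pipeline like this typically loads the real checkpoint, generates with a fixed seed, and compares a small slice of the output against hard-coded reference values. The sketch below only illustrates that shape; the prompt, step count, and expected_slice numbers are placeholders that would come from a verified reference run, not values from this PR.

import numpy as np
import torch
from diffusers import LuminaText2ImgPipeline

def test_lumina_next_sft_inference():
    pipe = LuminaText2ImgPipeline.from_pretrained(
        "Alpha-VLLM/Lumina-Next-SFT-diffusers", torch_dtype=torch.bfloat16
    ).to("cuda")
    generator = torch.Generator("cpu").manual_seed(0)

    image = pipe(
        prompt="a photo of an astronaut riding a horse",
        generator=generator,
        num_inference_steps=2,
        output_type="np",
    ).images[0]

    # Placeholder reference values -- the real test pins these from a trusted run.
    expected_slice = np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
    assert np.abs(image[0, :3, :3].flatten() - expected_slice).max() < 1e-2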


@PommesPeter

Wow! Thank you for reviewing our PR!

sayakpaul pushed a commit that referenced this pull request

Dec 23, 2024


Co-authored-by: zhuole1025 <zhuole1025@gmail.com>
Co-authored-by: YiYi Xu <yixu310@gmail.com>