Bria fibo by galbria · Pull Request #12545 · huggingface/diffusers
Conversation
What does this PR do?
Fixes # (issue)
Before submitting
- This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- Did you read the contributor guideline?
- Did you read our philosophy doc (important for complex PRs)?
- Was this discussed/approved via a GitHub issue or the forum? Please add a link to it if that's the case.
- Did you make sure to update the documentation with your changes? Here are the documentation guidelines, and here are tips on formatting docstrings.
- Did you write any new necessary tests?
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.
Thanks a lot for the PR. Excited for FIBO to make strides!
I have left a bunch of comments, most of which should be easily resolvable. If not, please let me know.
Additionally, I think:
- It'd be nice to include a code snippet for folks to test it out (@linoytsaban @asomoza).
- Remove the custom block implementations from the PR, host them on the Hub (just like this one), and guide users on how to use them alongside the pipeline.
    output_height, output_width, _ = image.shape
    assert (output_height, output_width) == (expected_height, expected_width)

    @unittest.skipIf(torch_device not in ["cuda", "xpu"], reason="float16 requires CUDA or XPU")
We can remove this test, I guess. If not, would you mind explaining why we had to override it here?
We used it to debug something; it's redundant and has been removed.
Seems like the test is still being kept here?
Added a few more comments. I think the docs should make it clear that users should absolutely use the structured prompt.
    output_height, output_width, _ = image.shape
    assert (output_height, output_width) == (expected_height, expected_width)

    @unittest.skipIf(torch_device not in ["cuda", "xpu"], reason="float16 requires CUDA or XPU")
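For context on the `skipIf` decorator quoted above: it gates a test on a runtime condition so CUDA/XPU-only checks are reported as skipped, not failed, on other hardware. A minimal, self-contained sketch (the `device` variable here is a hypothetical stand-in for diffusers' `torch_device`):

```python
import io
import unittest

# Hypothetical stand-in for diffusers' torch_device; on a CPU-only
# machine this is "cpu", so the test below is skipped rather than run.
device = "cpu"


class Float16Tests(unittest.TestCase):
    @unittest.skipIf(device not in ["cuda", "xpu"], reason="float16 requires CUDA or XPU")
    def test_float16_inference(self):
        # Would run real float16 checks when a CUDA/XPU device is present.
        self.fail("should never execute on CPU")


suite = unittest.defaultTestLoader.loadTestsFromTestCase(Float16Tests)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
print(len(result.skipped))  # -> 1: the test was skipped, not failed
```

Because skipped tests never execute their bodies, the `self.fail` above is harmless on CPU, which is exactly why such overrides are usually unnecessary unless the skip condition itself differs from the base class.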
Seems like the test is still being kept here?
- Updated BriaFiboAttnProcessor and BriaFiboAttention classes to reflect changes from Flux equivalents.
- Modified the _unpack_latents method in BriaFiboPipeline to improve clarity.
- Increased the default max_sequence_length to 3000 and added a new optional parameter do_patching.
- Cleaned up test_pipeline_bria_fibo.py by removing unused imports and skipping unsupported tests.
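For background on the `_pack_latents` / `_unpack_latents` methods mentioned above: Flux-style pipelines fold each 2×2 spatial patch of the latent into the channel dimension so the transformer sees a flat token sequence. A NumPy sketch of that round trip (shapes are illustrative assumptions, not taken from the PR):

```python
import numpy as np


def pack_latents(latents):
    # (B, C, H, W) -> (B, H//2 * W//2, C*4): each 2x2 patch becomes one token
    b, c, h, w = latents.shape
    x = latents.reshape(b, c, h // 2, 2, w // 2, 2)
    x = x.transpose(0, 2, 4, 1, 3, 5)  # (B, H//2, W//2, C, 2, 2)
    return x.reshape(b, (h // 2) * (w // 2), c * 4)


def unpack_latents(packed, h, w, c):
    # Inverse of pack_latents: tokens back to a (B, C, H, W) latent
    b = packed.shape[0]
    x = packed.reshape(b, h // 2, w // 2, c, 2, 2)
    x = x.transpose(0, 3, 1, 4, 2, 5)  # (B, C, H//2, 2, W//2, 2)
    return x.reshape(b, c, h, w)


latents = np.random.rand(1, 16, 8, 8)
packed = pack_latents(latents)
print(packed.shape)  # (1, 16, 64)
restored = unpack_latents(packed, 8, 8, 16)
print(np.allclose(latents, restored))  # True: pack/unpack is lossless
```

A `do_patching` flag like the one added here would presumably toggle this folding on or off, changing the sequence length the transformer receives by a factor of 4.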
Comment on lines 740 to 741
    latents = latents.unsqueeze(dim=2)
    latents = list(torch.unbind(latents, dim=0))
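For reference on the two quoted lines: `unsqueeze(dim=2)` inserts a singleton axis (turning a `(B, C, H, W)` latent into `(B, C, 1, H, W)`, the frames axis a video-style VAE expects), and `torch.unbind(dim=0)` splits the batch into a list of per-sample tensors. The equivalent semantics in NumPy, as a sketch with assumed shapes:

```python
import numpy as np

# Fake "latents" batch: (batch_size=2, channels=4, height=8, width=8)
latents = np.zeros((2, 4, 8, 8))

# unsqueeze(dim=2): insert a singleton depth/frames axis -> (2, 4, 1, 8, 8)
latents = np.expand_dims(latents, axis=2)
print(latents.shape)  # (2, 4, 1, 8, 8)

# torch.unbind(dim=0): split along the batch axis into per-sample arrays,
# each of shape (4, 1, 8, 8)
samples = [latents[i] for i in range(latents.shape[0])]
print(len(samples), samples[0].shape)  # 2 (4, 1, 8, 8)
```

If the decoder really consumes `(batch_size, channels, 1, height, width)`, the singleton axis suggests a video VAE being reused for single images, which is likely what the reviewer is asking about.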
@kfirbria Hmm that's unusual. Is the input shape to the decoder in this format (batch_size, channels, 1, height, width)?
- Updated class names from FIBO to BriaFibo for consistency across the module.
- Modified instances of FIBOEmbedND, FIBOTimesteps, TextProjection, and TimestepProjEmbeddings to reflect the new naming.
- Ensured all references in the BriaFiboTransformer2DModel are updated accordingly.
…oTransformer2DModel and BriaFiboPipeline classes to dummy objects for enhanced compatibility with torch and transformers.
… in pipeline module
- Added documentation comments indicating the source of copied code in BriaFiboTransformerBlock and _pack_latents methods.
- Corrected the import statement for BriaFiboPipeline in the pipelines module.
…ration from existing implementations
- Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to reflect that the code is inspired by other modules rather than copied.
- Enhanced clarity on the origins of the methods to maintain proper attribution.
…riaFibo classes
- Introduced a new documentation file for BriaFiboTransformer2DModel.
- Updated comments in BriaFiboAttnProcessor, BriaFiboAttention, and BriaFiboPipeline to clarify the origins of the code, indicating copied sources for better attribution.
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks a lot!
DN6 approved these changes Oct 28, 2025