Kandinsky 5 is finally in Diffusers! by leffff · Pull Request #12478 · huggingface/diffusers

Conversation

@leffff

What does this PR do?

This PR adds Kandinsky5T2VPipeline and Kandinsky5Transformer3DModel, as well as several layer classes needed for the Kandinsky 5.0 Lite T2V model.

@sayakpaul Please review

@sayakpaul

Could you please update the PR with test code and some example outputs?

@leffff

import torch
from diffusers import Kandinsky5T2VPipeline
from diffusers.utils import export_to_video

pipe = Kandinsky5T2VPipeline.from_pretrained(
    "ai-forever/Kandinsky-5.0-T2V-Lite-sft-5s-Diffusers",
    torch_dtype=torch.bfloat16,
)
pipe = pipe.to("cuda")

negative_prompt = [
    "Static, 2D cartoon, cartoon, 2d animation, paintings, images, worst quality, low quality, ugly, deformed, walking backwards",
]
prompt = [
    "A cat and a dog baking a cake together in a kitchen.",
]

output = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    height=512,
    width=768,
    num_frames=121,
    num_inference_steps=50,
    guidance_scale=5.0,
    num_videos_per_prompt=1,
    generator=torch.Generator(42),
)
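The snippet imports export_to_video but does not show the saving step. A minimal sketch, assuming the pipeline output exposes the generated frames as `output.frames[0]` like other Diffusers video pipelines (the exact attribute may differ in this PR):

    # Assumption: the output exposes frames as `output.frames[0]`, as in other
    # Diffusers video pipelines; adjust the attribute and fps if this pipeline differs.
    export_to_video(output.frames[0], "output.mp4", fps=24)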

output.10.mp4

prompt = [ "A monkey ridign a skateboard", ]

output.10.mp4

prompt = [ "Several giant wooly mammoths threading through the meadow", ]

output.10.mp4

@sayakpaul

@leffff

Yes of course!

A stylish woman struts confidently down a rain-drenched Tokyo street, where vibrant neon signs flicker and pulse with electric color. She wears a sleek black leather jacket over a flowing red dress, paired with polished black boots and a matching black purse. Her sunglasses reflect the glowing cityscape as she moves with a calm, assured demeanor, red lipstick adding a bold contrast to her look. The wet pavement mirrors the dazzling lights, doubling the intensity of the urban glow around her. Pedestrians bustle along the sidewalks, their silhouettes blending into the dynamic, cinematic atmosphere of the neon-lit metropolis.

output.10.mp4

A cinematic movie trailer unfolds with a 30-year-old space man traversing a vast salt desert beneath a brilliant blue sky. He wears a uniquely styled red wool knitted motorcycle helmet, adding an eccentric yet rugged charm to his spacefaring look. As he rides a retro-futuristic vehicle across the shimmering white terrain, the wind kicks up clouds of glittering salt, creating a surreal atmosphere. The scene is captured in a vivid, cinematic style, shot on 35mm film to enhance the nostalgic and dramatic grain. Explosions of color and dynamic camera movements highlight the space man's daring escape from a collapsing alien base in the distance.

output.11.mp4

@leffff

asomoza

Thanks, looks cool! Left some suggestions for unused imports.

@leffff @asomoza

Co-authored-by: Álvaro Somoza asomoza@users.noreply.github.com

@leffff @yiyixuxu

Co-authored-by: YiYi Xu yixu310@gmail.com

@leffff

@leffff

@yiyixuxu
Done! All your fixes have been applied. Ready to merge!

@asomoza

@leffff just want to let you know that I've been testing the 10s model and I'm really impressed with it. I like it a lot, congrats to the team. Can't wait for you to release the I2V one.

kangaroo.mp4

@leffff

@asomoza Great! Gonna add them in the next iteration!

@leffff

@yiyixuxu

@yiyixuxu @cbensimon

Co-authored-by: Charles charles@huggingface.co

@yiyixuxu

will merge once CI is green!

@yiyixuxu

@leffff

@yiyixuxu

@leffff
Looking forward to the follow-up PR for the 10s model!
We are very happy to help too - let me know if you need anything :)

@leffff

@sayakpaul

@leffff

@leffff

Just commenting to note that we now support many different attention backends. So, as long as we implement the attention class in this way, swapping the backend from SDPA ("native" in our terminology) to "flex", for example, should be very easy.

model.set_attention_backend("flex")

Yes, you are right. I tried doing

pipe.transformer.set_attention_backend("flex")

and it almost worked. You see, when I made separate processors, I did this:

class Kandinsky5NablaAttentionProcessor(nn.Module):
    """Custom attention processor for Nabla attention"""

    @torch.compile(mode="max-autotune-no-cudagraphs", dynamic=True)
    def __call__(
        self,
        attn,
        query,
        key,
        value,
        sparse_params=None,
        **kwargs,
    ):
        if sparse_params is None:
            raise ValueError("sparse_params is required for Nabla attention")

        # move the head dimension before the sequence dimension, as flex_attention expects
        query = query.transpose(1, 2).contiguous()
        key = key.transpose(1, 2).contiguous()
        value = value.transpose(1, 2).contiguous()

        # build the sparse block mask from the STA mask and threshold P
        block_mask = nablaT_v2(
            query,
            key,
            sparse_params["sta_mask"],
            thr=sparse_params["P"],
        )
        out = (
            flex_attention(query, key, value, block_mask=block_mask)
            .transpose(1, 2)
            .contiguous()
        )
        # merge heads back into the feature dimension
        out = out.flatten(-2, -1)
        return out
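For context, a minimal sketch of how such a processor might be wired in, assuming the transformer exposes the usual Diffusers `set_attn_processor` helper (the actual hook used in this PR may differ):

    # Hypothetical wiring, assuming the standard Diffusers attention-processor API;
    # nablaT_v2 and flex_attention must be importable for the processor to run.
    processor = Kandinsky5NablaAttentionProcessor()
    pipe.transformer.set_attn_processor(processor)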

@sayakpaul

and it almost worked.

What do you mean? It didn't work as expected or are we good? 👀

@leffff

It worked as expected, but that's not the whole story: flex requires additional compilation. Please see #12520
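As a rough illustration of that extra step (a sketch only, assuming stock PyTorch flex attention rather than whatever #12520 actually lands):

    import torch
    from torch.nn.attention.flex_attention import flex_attention

    # PyTorch recommends compiling flex_attention before calling it; this is the
    # kind of additional compilation step referred to above.
    flex_attention = torch.compile(flex_attention, dynamic=False)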

@sayakpaul