Dance Diffusion

Dance Diffusion is by Zach Evans. It is the first in a suite of generative audio tools for producers and musicians released by Harmonai.

Make sure to check out the Schedulers guide to learn how to explore the tradeoff between scheduler speed and quality, and see the reuse components across pipelines section to learn how to efficiently load the same components into multiple pipelines.
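As a minimal sketch of reusing components, assuming the public harmonai/maestro-150k checkpoint: DanceDiffusionPipeline inherits the components property from DiffusionPipeline, which here holds the pipeline's unet and scheduler, so a second pipeline can share them instead of loading its own copies.

from diffusers import DanceDiffusionPipeline

# Load the pipeline once; weights are downloaded and cached.
pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k")

# pipe.components is a dict of the loaded modules (unet, scheduler), so a
# second pipeline can share them without re-downloading or duplicating memory.
pipe_copy = DanceDiffusionPipeline(**pipe.components)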

DanceDiffusionPipeline

class diffusers.DanceDiffusionPipeline

( unet, scheduler )

Parameters

unet (UNet1DModel): A UNet1DModel to denoise the encoded audio.
scheduler (SchedulerMixin): A scheduler to be used in combination with unet to denoise the encoded audio.

Pipeline for audio generation.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
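A short sketch of those inherited methods, assuming the harmonai/maestro-150k checkpoint and a local output folder name:

from diffusers import DanceDiffusionPipeline

# Downloading: fetch and cache the checkpoint from the Hub.
pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k")

# Saving: write the pipeline (unet, scheduler, config) to a local folder.
pipe.save_pretrained("./maestro-150k-local")

# Running on a particular device: move the model to GPU.
pipe = pipe.to("cuda")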

__call__

( batch_size: int = 1, num_inference_steps: int = 100, generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, audio_length_in_s: Optional[float] = None, return_dict: bool = True ) → AudioPipelineOutput or tuple

Parameters

batch_size (int, optional, defaults to 1): The number of audio samples to generate.
num_inference_steps (int, optional, defaults to 100): The number of denoising steps. More denoising steps usually lead to higher-quality audio at the expense of slower inference.
generator (torch.Generator, optional): A torch.Generator to make generation deterministic.
audio_length_in_s (float, optional): The length of the generated audio sample in seconds. Defaults to the length implied by the UNet's configured sample size and sample rate.
return_dict (bool, optional, defaults to True): Whether to return an AudioPipelineOutput instead of a plain tuple.

Returns

AudioPipelineOutput or tuple

If return_dict is True, AudioPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated audio.

The call function to the pipeline for generation.
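The two return modes can be sketched as follows, assuming pipe is a loaded DanceDiffusionPipeline; the single-element tuple unpacking follows the return description above:

# Default: a structured AudioPipelineOutput object.
output = pipe(batch_size=1, num_inference_steps=50)
audios = output.audios  # NumPy array of generated audio

# return_dict=False: a plain tuple whose first element is the audio.
(audios,) = pipe(batch_size=1, num_inference_steps=50, return_dict=False)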

Example:

from diffusers import DiffusionPipeline
from scipy.io.wavfile import write

model_id = "harmonai/maestro-150k"
pipe = DiffusionPipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")

audios = pipe(audio_length_in_s=4.0).audios

# To save locally
for i, audio in enumerate(audios):
    write(f"maestro_test_{i}.wav", pipe.unet.sample_rate, audio.transpose())

# To display in Google Colab
import IPython.display as ipd

for audio in audios:
    display(ipd.Audio(audio, rate=pipe.unet.sample_rate))
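Passing a seeded torch.Generator through the generator argument documented above makes sampling reproducible. A minimal sketch, reusing the pipe from the example:

import torch

# Fixing the seed yields the same audio on every call.
generator = torch.Generator(device="cuda").manual_seed(0)
audios = pipe(audio_length_in_s=4.0, generator=generator).audios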

AudioPipelineOutput

class diffusers.AudioPipelineOutput

( audios: np.ndarray )

Parameters

audios (np.ndarray): List of denoised audio samples of a NumPy array of shape (batch_size, num_channels, sample_length).

Output class for audio pipelines.
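A sketch of consuming this output, assuming pipe is a loaded DanceDiffusionPipeline; the array shape is taken from the audios parameter description above:

output = pipe(batch_size=2, audio_length_in_s=4.0)

# output.audios is a NumPy array of shape (batch_size, num_channels, sample_length).
print(output.audios.shape)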
