Train a diffusion model

Unconditional image generation is a popular application of diffusion models that generates images that look like those in the dataset used for training. Typically, the best results are obtained from finetuning a pretrained model on a specific dataset. You can find many of these checkpoints on the Hub, but if you can’t find one you like, you can always train your own!

This tutorial will teach you how to train a UNet2DModel from scratch on a subset of the Smithsonian Butterflies dataset to generate your own 🦋 butterflies 🦋.

💡 This training tutorial is based on the Training with 🧨 Diffusers notebook. For additional details and context about diffusion models, such as how they work, check out the notebook!

Before you begin, make sure you have 🤗 Datasets installed to load and preprocess image datasets, and 🤗 Accelerate, to simplify training on any number of GPUs. The following command will also install TensorBoard to visualize training metrics (you can also use Weights & Biases to track your training).
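In a notebook, a command along these lines installs everything at once (the exact set of dependencies pulled in by the training extra can vary between releases):

# install 🤗 Diffusers together with its training dependencies
!pip install diffusers[training]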

We encourage you to share your model with the community, and in order to do that, you’ll need to log in to your Hugging Face account (create one here if you don’t already have one!). You can log in from a notebook and enter your token when prompted. Make sure your token has the write role.

from huggingface_hub import notebook_login

notebook_login()

Or log in from the terminal:
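huggingface-cli login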

Since the model checkpoints are quite large, install Git-LFS to version these large files:

!sudo apt -qq install git-lfs
!git config --global credential.helper store

Training configuration

For convenience, create a TrainingConfig class containing the training hyperparameters (feel free to adjust them):

from dataclasses import dataclass

@dataclass
class TrainingConfig:
    image_size = 128  # the generated image resolution
    train_batch_size = 16
    eval_batch_size = 16  # how many images to sample during evaluation
    num_epochs = 50
    gradient_accumulation_steps = 1
    learning_rate = 1e-4
    lr_warmup_steps = 500
    save_image_epochs = 10
    save_model_epochs = 30
    mixed_precision = "fp16"  # "no" for float32, "fp16" for automatic mixed precision
    output_dir = "ddpm-butterflies-128"  # the model name locally and on the Hub

    push_to_hub = True  # whether to upload the saved model to the Hub
    hub_model_id = "your-username/ddpm-butterflies-128"  # placeholder: replace with your own username and repository name
    hub_private_repo = None
    overwrite_output_dir = True  # overwrite the old model when re-running the notebook
    seed = 0

config = TrainingConfig()

Load the dataset

You can easily load the Smithsonian Butterflies dataset with the 🤗 Datasets library:

from datasets import load_dataset

config.dataset_name = "huggan/smithsonian_butterflies_subset"
dataset = load_dataset(config.dataset_name, split="train")

💡 You can find additional datasets from the HugGan Community Event or you can use your own dataset by creating a local ImageFolder. Set config.dataset_name to the repository id of the dataset if it is from the HugGan Community Event, or imagefolder if you’re using your own images.
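For example, loading your own local folder of images would look something like this (the data_dir path below is just a placeholder):

from datasets import load_dataset

dataset = load_dataset("imagefolder", data_dir="path/to/your/images", split="train")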

🤗 Datasets uses the Image feature to automatically decode the image data and load it as a PIL.Image which we can visualize:

import matplotlib.pyplot as plt

fig, axs = plt.subplots(1, 4, figsize=(16, 4))
for i, image in enumerate(dataset[:4]["image"]):
    axs[i].imshow(image)
    axs[i].set_axis_off()
fig.show()

The images are all different sizes though, so you’ll need to preprocess them first:

from torchvision import transforms

preprocess = transforms.Compose(
    [
        transforms.Resize((config.image_size, config.image_size)),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
        transforms.Normalize([0.5], [0.5]),
    ]
)

Use 🤗 Datasets’ set_transform method to apply the preprocess function on the fly during training:

def transform(examples):
    images = [preprocess(image.convert("RGB")) for image in examples["image"]]
    return {"images": images}

dataset.set_transform(transform)

Feel free to visualize the images again to confirm that they’ve been resized. Now you’re ready to wrap the dataset in a DataLoader for training!

import torch

train_dataloader = torch.utils.data.DataLoader(dataset, batch_size=config.train_batch_size, shuffle=True)
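If you want a quick sanity check, pull a single batch from the dataloader; with the configuration above, the images tensor should have shape [16, 3, 128, 128] (batch size, channels, height, width):

batch = next(iter(train_dataloader))
print(batch["images"].shape)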

Create a UNet2DModel

Pretrained models in 🧨 Diffusers are easily created from their model class with the parameters you want. For example, to create a UNet2DModel:

from diffusers import UNet2DModel

model = UNet2DModel(
    sample_size=config.image_size,  # the target image resolution
    in_channels=3,  # the number of input channels, 3 for RGB images
    out_channels=3,  # the number of output channels
    layers_per_block=2,  # how many ResNet layers to use per UNet block
    block_out_channels=(128, 128, 256, 256, 512, 512),  # the number of output channels for each UNet block
    down_block_types=(
        "DownBlock2D",  # a regular ResNet downsampling block
        "DownBlock2D",
        "DownBlock2D",
        "DownBlock2D",
        "AttnDownBlock2D",  # a ResNet downsampling block with spatial self-attention
        "DownBlock2D",
    ),
    up_block_types=(
        "UpBlock2D",  # a regular ResNet upsampling block
        "AttnUpBlock2D",  # a ResNet upsampling block with spatial self-attention
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
        "UpBlock2D",
    ),
)

It is often a good idea to quickly check the sample image shape matches the model output shape:

sample_image = dataset[0]["images"].unsqueeze(0)
print("Input shape:", sample_image.shape)

Input shape: torch.Size([1, 3, 128, 128])

print("Output shape:", model(sample_image, timestep=0).sample.shape)

Output shape: torch.Size([1, 3, 128, 128])

Great! Next, you’ll need a scheduler to add some noise to the image.

Create a scheduler

The scheduler behaves differently depending on whether you’re using the model for training or inference. During inference, the scheduler generates an image from noise. During training, the scheduler takes a model output (or a sample) from a specific point in the diffusion process and applies noise to the image according to a noise schedule and an update rule.

Let’s take a look at the DDPMScheduler and use the add_noise method to add some random noise to the sample_image from before:

import torch
from PIL import Image
from diffusers import DDPMScheduler

noise_scheduler = DDPMScheduler(num_train_timesteps=1000)
noise = torch.randn(sample_image.shape)
timesteps = torch.LongTensor([50])
noisy_image = noise_scheduler.add_noise(sample_image, noise, timesteps)

Image.fromarray(((noisy_image.permute(0, 2, 3, 1) + 1.0) * 127.5).type(torch.uint8).numpy()[0])
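Under the hood, add_noise applies the closed-form DDPM forward process: the noisy sample is a weighted combination of the clean image and the Gaussian noise, with weights taken from the cumulative product of the schedule’s alphas. A rough sketch of that computation (for intuition only, not the library’s exact implementation) looks like this:

alphas_cumprod = noise_scheduler.alphas_cumprod  # cumulative product of (1 - beta_t) over all timesteps
alpha_prod_t = alphas_cumprod[timesteps]  # the weight for timestep 50
manual_noisy = alpha_prod_t.sqrt() * sample_image + (1 - alpha_prod_t).sqrt() * noise
print(torch.allclose(manual_noisy, noisy_image))  # expect True, up to floating point precision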

The training objective of the model is to predict the noise added to the image. The loss at this step can be calculated by:

import torch.nn.functional as F

noise_pred = model(noisy_image, timesteps).sample
loss = F.mse_loss(noise_pred, noise)

Train the model

By now, you have most of the pieces to start training the model and all that’s left is putting everything together.

First, you’ll need an optimizer and a learning rate scheduler:

from diffusers.optimization import get_cosine_schedule_with_warmup

optimizer = torch.optim.AdamW(model.parameters(), lr=config.learning_rate)
lr_scheduler = get_cosine_schedule_with_warmup(
    optimizer=optimizer,
    num_warmup_steps=config.lr_warmup_steps,
    num_training_steps=(len(train_dataloader) * config.num_epochs),
)

Then, you’ll need a way to evaluate the model. For evaluation, you can use the DDPMPipeline to generate a batch of sample images and save it as a grid:

from diffusers import DDPMPipeline
from diffusers.utils import make_image_grid
import os


def evaluate(config, epoch, pipeline):
    # Sample some images from random noise (this is the backward diffusion process).
    # The default pipeline output type is a list of PIL images.
    images = pipeline(
        batch_size=config.eval_batch_size,
        generator=torch.Generator(device="cpu").manual_seed(config.seed),  # use a separate generator to avoid rewinding the random state of the training loop
    ).images

    # Make a grid out of the images
    image_grid = make_image_grid(images, rows=4, cols=4)

    # Save the images
    test_dir = os.path.join(config.output_dir, "samples")
    os.makedirs(test_dir, exist_ok=True)
    image_grid.save(f"{test_dir}/{epoch:04d}.png")

Now you can wrap all these components together in a training loop with 🤗 Accelerate for easy TensorBoard logging, gradient accumulation, and mixed precision training. To upload the model to the Hub, the training loop also creates a repository from your repository name and information and then pushes the saved model to it.

💡 The training loop below may look intimidating and long, but it’ll be worth it later when you launch your training in just one line of code! If you can’t wait and want to start generating images, feel free to copy and run the code below. You can always come back and examine the training loop more closely later, like when you’re waiting for your model to finish training. 🤗

from accelerate import Accelerator
from huggingface_hub import create_repo, upload_folder
from tqdm.auto import tqdm
from pathlib import Path
import os


def train_loop(config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler):
    # Initialize accelerator and tensorboard logging
    accelerator = Accelerator(
        mixed_precision=config.mixed_precision,
        gradient_accumulation_steps=config.gradient_accumulation_steps,
        log_with="tensorboard",
        project_dir=os.path.join(config.output_dir, "logs"),
    )
    if accelerator.is_main_process:
        if config.output_dir is not None:
            os.makedirs(config.output_dir, exist_ok=True)
        if config.push_to_hub:
            repo_id = create_repo(
                repo_id=config.hub_model_id or Path(config.output_dir).name, exist_ok=True
            ).repo_id
        accelerator.init_trackers("train_example")

    # Prepare everything
    # There is no specific order to remember, you just need to unpack the
    # objects in the same order you gave them to the prepare method.
    model, optimizer, train_dataloader, lr_scheduler = accelerator.prepare(
        model, optimizer, train_dataloader, lr_scheduler
    )

    global_step = 0

    # Now you train the model
    for epoch in range(config.num_epochs):
        progress_bar = tqdm(total=len(train_dataloader), disable=not accelerator.is_local_main_process)
        progress_bar.set_description(f"Epoch {epoch}")

        for step, batch in enumerate(train_dataloader):
            clean_images = batch["images"]
            # Sample noise to add to the images
            noise = torch.randn(clean_images.shape, device=clean_images.device)
            bs = clean_images.shape[0]

            # Sample a random timestep for each image
            timesteps = torch.randint(
                0, noise_scheduler.config.num_train_timesteps, (bs,), device=clean_images.device,
                dtype=torch.int64
            )

            # Add noise to the clean images according to the noise magnitude at each timestep
            # (this is the forward diffusion process)
            noisy_images = noise_scheduler.add_noise(clean_images, noise, timesteps)

            with accelerator.accumulate(model):
                # Predict the noise residual
                noise_pred = model(noisy_images, timesteps, return_dict=False)[0]
                loss = F.mse_loss(noise_pred, noise)
                accelerator.backward(loss)

                if accelerator.sync_gradients:
                    accelerator.clip_grad_norm_(model.parameters(), 1.0)
                optimizer.step()
                lr_scheduler.step()
                optimizer.zero_grad()

            progress_bar.update(1)
            logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0], "step": global_step}
            progress_bar.set_postfix(**logs)
            accelerator.log(logs, step=global_step)
            global_step += 1

        # After each epoch, optionally sample some demo images with evaluate() and save the model
        if accelerator.is_main_process:
            pipeline = DDPMPipeline(unet=accelerator.unwrap_model(model), scheduler=noise_scheduler)

            if (epoch + 1) % config.save_image_epochs == 0 or epoch == config.num_epochs - 1:
                evaluate(config, epoch, pipeline)

            if (epoch + 1) % config.save_model_epochs == 0 or epoch == config.num_epochs - 1:
                if config.push_to_hub:
                    upload_folder(
                        repo_id=repo_id,
                        folder_path=config.output_dir,
                        commit_message=f"Epoch {epoch}",
                        ignore_patterns=["step_*", "epoch_*"],
                    )
                else:
                    pipeline.save_pretrained(config.output_dir)

Phew, that was quite a bit of code! But you’re finally ready to launch the training with 🤗 Accelerate’s notebook_launcher function. Pass the function the training loop, all the training arguments, and the number of processes (you can change this value to the number of GPUs available to you) to use for training:

from accelerate import notebook_launcher

args = (config, model, noise_scheduler, optimizer, train_dataloader, lr_scheduler)

notebook_launcher(train_loop, args, num_processes=1)
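For example, if two GPUs are visible to the process, the same loop could be launched across both of them:

notebook_launcher(train_loop, args, num_processes=2)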

Once training is complete, take a look at the final 🦋 images 🦋 generated by your diffusion model!

import glob

sample_images = sorted(glob.glob(f"{config.output_dir}/samples/*.png"))
Image.open(sample_images[-1])
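You can also reload the saved pipeline and sample fresh butterflies from it. The snippet below assumes the pipeline was saved locally to config.output_dir (as happens when push_to_hub is False); if you pushed the model to the Hub instead, pass your repository id to from_pretrained:

from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained(config.output_dir)  # or your Hub repository id
image = pipeline(batch_size=1).images[0]
image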

Next steps

Unconditional image generation is just one example of a task that can be trained. To explore other tasks and training techniques, visit the 🧨 Diffusers Training Examples page.
