
🎨 Fine-tuning text-to-image diffusion models for personalization and subject-driven generation

Presentation: Personalization of Diffusion Models with 🧨Diffusers

📚 Workshop description

During the workshop you will get familiar with different fine-tuning techniques for text-to-image models, and learn how to easily teach a diffusion model a concept of your choosing (a particular style, a pet, faces, etc.) with as few as 3 images depicting your concept.
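As a rough sketch of the kind of fine-tuning covered in the workshop, a DreamBooth run with the example script from the 🧨Diffusers repository might look like the following. Note that the model id, paths, prompt, and hyperparameters here are illustrative assumptions, not values provided by the workshop:

```shell
# Illustrative sketch only: fine-tune Stable Diffusion on a handful of
# concept images using the DreamBooth example script shipped with the
# diffusers repository. All paths and hyperparameters are assumptions.
# --instance_data_dir should point to a folder with your 3-5 concept images.
accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="runwayml/stable-diffusion-v1-5" \
  --instance_data_dir="./my_concept_images" \
  --instance_prompt="a photo of sks dog" \
  --output_dir="./dreambooth-output" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=5e-6 \
  --max_train_steps=400
```

After training, the weights saved in `./dreambooth-output` can be loaded with `StableDiffusionPipeline.from_pretrained` and prompted using the instance token (here the hypothetical `sks`) to generate images of your concept.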

🛠️ Requirements

Python >= 3.10, and some acquaintance with diffusion models and text-to-image models.

NOTE 💡 While we will briefly go over diffusion models and specifically Stable Diffusion, we will not go into detail, and we assume some familiarity with the diffusion process and the architecture of Stable Diffusion models.

TIP 💌 If you're not familiar with diffusion models but are interested in doing this workshop, check out this (free & open-source) introductory diffusion class 🤓

▶️ Usage

🎬 Video recording

Re-watch this YouTube stream

🤝 Credits

This workshop was set up by @pyladiesams and @linoytsaban