Diffusers documentation
Get started
  🧨 Diffusers
  Quicktour
  Effective and efficient diffusion
  Installation
Tutorials
  Overview
  Understanding pipelines, models and schedulers
  AutoPipeline
  Train a diffusion model
Load pipelines and adapters
  Load pipelines
  Load community pipelines and components
  Load schedulers and models
  Model files and layouts
  Push files to the Hub
Adapters
Generative tasks
  Unconditional image generation
  Text-to-image
  Image-to-image
  Inpainting
  Video generation
  Depth-to-image
Inference techniques
  Overview
  Create a server
  Batch inference
  Distributed inference
  Scheduler features
  Pipeline callbacks
  Reproducible pipelines
  Controlling image quality
  Prompt techniques
Advanced inference
Hybrid Inference
  Overview
  VAE Decode
  VAE Encode
  API Reference
Modular Diffusers
  Overview
  Modular Pipeline
  Components Manager
  Modular Diffusers States
  Pipeline Block
  Sequential Pipeline Blocks
  Loop Sequential Pipeline Blocks
  Auto Pipeline Blocks
  End-to-End Example
Specific pipeline examples
  ConsisID
  Stable Diffusion XL
  SDXL Turbo
  Kandinsky
  OmniGen
  PAG
  Latent Consistency Model
  Shap-E
  DiffEdit
  Trajectory Consistency Distillation-LoRA
  Stable Video Diffusion
  Marigold Computer Vision
Training
  Overview
  Create a dataset for training
  Adapt a model to a new task
Models
Methods
Quantization Methods
  Getting Started
  bitsandbytes
  gguf
  torchao
  quanto
Accelerate inference and reduce memory
  Accelerate inference
  Caching
  Reduce memory usage
  Compile and offloading quantized models
  Pruna
  xFormers
  Token merging
  DeepCache
  TGATE
  xDiT
  ParaAttention
Optimized model formats
  JAX/Flax
  ONNX
  OpenVINO
  Core ML
Optimized hardware
  Metal Performance Shaders (MPS)
  Intel Gaudi
  AWS Neuron
Conceptual Guides
  Philosophy
  Controlled generation
  How to contribute?
  Diffusers' Ethical Guidelines
  Evaluating Diffusion Models
Community Projects
API
Main Classes
Loaders
Models
Pipelines
Schedulers
Internal classes
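
For orientation, here is a minimal sketch of the basic workflow that the Quicktour and "Load pipelines" guides above walk through. It assumes a CUDA-capable GPU, and the checkpoint ID is only an example; any compatible text-to-image model ID from the Hub can be substituted.

    import torch
    from diffusers import DiffusionPipeline

    # Load a pretrained pipeline from the Hub (example checkpoint; swap in
    # any compatible text-to-image model ID).
    pipe = DiffusionPipeline.from_pretrained(
        "stable-diffusion-v1-5/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # assumes a CUDA GPU is available

    # Run text-to-image generation and save the first result.
    image = pipe("An astronaut riding a horse on Mars").images[0]
    image.save("astronaut.png")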