Visual artifacts when using DPM++ schedulers and SDXL without the refiner model · Issue #5433 · huggingface/diffusers
Describe the bug
All DPM++ schedulers show visual artifacts in images from the base model when denoising_end=1.0
(i.e. skipping the refiner). The effect is most pronounced with DPM++ 2M SDE,
configured with the `algorithm_type="sde-dpmsolver++"` flag from the docs.
The same artifacts do not appear when using SD1.5 with the same scheduler configuration.
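For context, denoising_end=1.0 tells the base SDXL pipeline to run the full denoising schedule itself instead of handing the final steps to the refiner. As a minimal sketch of the scheduler switch referred to above, following the pattern in the diffusers scheduler docs (the `pipe` variable here is illustrative):

```python
from diffusers import DPMSolverMultistepScheduler

# Swap in DPM++ 2M SDE while reusing the pipeline's existing scheduler
# config (betas, schedule, etc.); algorithm_type is the documented flag.
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++"
)
```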
Reproduction
Intended to run in a notebook (the `display` calls at the end require IPython).
```python
import torch
from typing import cast
from diffusers import (
    DPMSolverMultistepScheduler,
    StableDiffusionPipeline,
    StableDiffusionXLPipeline,
)

# Load the SDXL base model and SD1.5 for comparison.
sdxl_model = cast(StableDiffusionXLPipeline, StableDiffusionXLPipeline.from_pretrained(
    'stabilityai/stable-diffusion-xl-base-1.0',
    torch_dtype=torch.float16,
    use_safetensors=True,
    variant="fp16",
)).to('cuda')
sd_model = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
    revision="fp16",
).to('cuda')

# Identical DPM++ 2M SDE scheduler configuration for both pipelines.
common_config = {'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear'}
dpmpp_2m_sde = DPMSolverMultistepScheduler(**common_config, algorithm_type="sde-dpmsolver++")
sdxl_model.scheduler = dpmpp_2m_sde
sd_model.scheduler = dpmpp_2m_sde

sdxl_model.watermark = None
generator = torch.Generator(device='cuda')
generator.manual_seed(12345)

params = {
    "prompt": ['a cat'],
    "num_inference_steps": 50,
    "height": 1024,
    "width": 1024,
    "guidance_scale": 7,
}

# SDXL base only: denoising_end=1.0 runs the full schedule (no refiner).
sdxl_res = sdxl_model(**params, denoising_end=1.0, generator=generator)
sdxl_img = sdxl_res.images[0]

# Same seed and params through SD1.5.
generator.manual_seed(12345)
sd_res = sd_model(**params, generator=generator)
sd_img = sd_res.images[0]

display(sdxl_img)
display(sd_img)
```
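If running this outside a notebook, where `display` is unavailable, the outputs can be written to disk instead for comparison (file names here are arbitrary):

```python
# Save both outputs for side-by-side comparison outside IPython.
sdxl_img.save("sdxl_dpmpp_2m_sde.png")
sd_img.save("sd15_dpmpp_2m_sde.png")
```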
Logs
No response
System Info
- `diffusers` version: 0.21.4
- Platform: Linux-5.4.0-163-generic-x86_64-with-glibc2.31
- Python version: 3.11.5
- PyTorch version (GPU?): 2.1.0+cu121 (True)
- Huggingface_hub version: 0.17.1
- Transformers version: 4.34.0
- Accelerate version: 0.22.0
- xFormers version: not installed
- Using GPU in script?:
- Using distributed or parallel set-up in script?: