SD XL support by AUTOMATIC1111 · Pull Request #11757 · AUTOMATIC1111/stable-diffusion-webui (original) (raw)

@AUTOMATIC1111

Description

This branch has now been merged into dev. If you are on the sdxl branch, use git switch dev to switch and get the latest dev updates.
To get the dev branch in a new webui installation:

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
cd stable-diffusion-webui
git switch dev
webui-user.bat

Original image:
[image: orig]

Generated in webui:
[image: 00045-42]



@Dekker3D

It would be nice to have a separate "medvram" option for this, I think. When using SD 1.5 based checkpoints I don't need medvram, but for SDXL I'd need lowvram (if that works yet) because of my 10 GB of VRAM.

@fuchao01

Checked out b717eb7, but a black image is generated.

@fuchao01

[image]


@Erwin11

my 3060 laptop only has 6GB VRAM, so SD XL seems out of reach for me 😂

@RoyDingZF

It shouldn't require so much VRAM to use SDXL. I have RTX3070 8G and it works well in ComfyUI generating 1024X1024

@AUTOMATIC1111

revert SD2.1 back to use the original repo
add SDXL's force_zero_embeddings to negative prompt

@TomKranenburg

It shouldn't require so much VRAM to use SDXL. I have RTX3070 8G and it works well in ComfyUI generating 1024X1024

Amazing. With my lowly 1080 I thought I'd been priced out of this one.

@AUTOMATIC1111

During generation with --medvram it hovers at 7.1GB used and only jumps to ~12GB when finally making the image using the VAE. But I was also able to set the memory limit to 8GB using torch.cuda.set_per_process_memory_fraction and still generate the picture fine, with the sdp-no-mem optimization, so it seems it should work on an 8GB card.
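The memory-fraction experiment above can be sketched as follows; `vram_fraction` and `cap_vram` are hypothetical helper names for illustration, not webui code:

```python
def vram_fraction(limit_gb: float, total_bytes: int) -> float:
    # Fraction of the card's memory corresponding to a cap of `limit_gb` GiB,
    # clamped to 1.0 for cards smaller than the cap.
    return min(1.0, (limit_gb * 1024**3) / total_bytes)

def cap_vram(limit_gb: float, device: int = 0) -> float:
    # Apply the cap via torch.cuda.set_per_process_memory_fraction, as in the
    # comment above. torch is imported lazily so vram_fraction stays usable
    # without a GPU installed.
    import torch
    total = torch.cuda.get_device_properties(device).total_memory
    frac = vram_fraction(limit_gb, total)
    torch.cuda.set_per_process_memory_fraction(frac, device)
    return frac
```

On a 24GB 3090 an 8GB cap corresponds to a fraction of 1/3; allocations beyond the cap then raise an out-of-memory error instead of growing, which is what makes this useful for simulating a smaller card.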

@Lenowin777

It runs slow but OK on Comfy with 6gb vram, hopefully improvements will get it to that point on A1111, since I like A1111 quite a bit more for a variety of reasons.

@evanferguson28

question: where does refiner fit in this version?

@ghost

It does seem to work for me with these arguments: "--medvram --no-half-vae", though it is insanely slow compared to ComfyUI, and I am assuming the refiner doesn't work yet?


@ghost

Just to clarify and for context:

With ComfyUI I had 1.8 it/s average on my 2080, and with this, as of now, 1.10 it/s average.

@AUTOMATIC1111

shadowdoggie: what cross attention optimization are you using? I get 1.5 it/s on Doggettx, but about 5 it/s with xformers and sdp, for a 1024x1024 image on 3090.

Edit:
I'm sorry I seem to have mis-reported those. I don't know how I got those results, could be a combination of debugging mode and running on pictures of different sizes.

I get 2.5it/s for doggettx and 3.0it/s for sdp-no-mem and xformers generating the picture of the cosmonaut from the first post.


@art926

shadowdoggie, I don't think you're supposed to use it with lower than 1024x1024 resolution

@ghost

shadowdoggie: what cross attention optimization are you using? I get 1.5 it/s on Doggettx, but about 5 it/s with xformers and sdp, for a 1024x1024 image on 3090.

Hmm, I don't see an option for xformers; how would I utilize that feature?
[image]

@FurkanGozukara

shadowdoggie: what cross attention optimization are you using? I get 1.5 it/s on Doggettx, but about 5 it/s with xformers and sdp, for a 1024x1024 image on 3090.

This is great speed.

ComfyUI is about 3 times slower than this on my RTX 3090 Ti.

@ghost

shadowdoggie, I don't think you're supposed to use it with lower than 1024x1024 resolution

I changed that later; not sure how you could have seen that comment of mine, because I deleted it.

@jasoncow007

Generating an image with SDXL fails in the new AUTOMATIC1111 1.5 (1.5 and 2.1 models work fine), and the error says "TypeError: must be real number, not NoneType". I thought it was associated with generative-models.

@jasoncow007

Can you please post the complete Python traceback?

#12038


@seedlord

those links are 404. it's not out yet.

They are live.

@nlienard

On the SDXL branch: trying to load the 1.0 model. Whereas it was working fine on my RTX 3060 12GB with 0.9, I got a memory issue while trying to load 1.0.
Args are: --xformers --no-half-vae --medvram
On the dev branch, it works well.

@wzgrx

imports: 3.1s, setup codeformer: 0.2s, list SD models: 0.2s, load scripts: 9.2s, initialize extra networks: 0.1s, create ui: 2.4s, gradio launch: 3.5s, app_started_callback: 1.3s).
*** Failed reading extension data from Git repository (enhanced-img2img)
Traceback (most recent call last):
File "G:\stable-diffusion-webui\modules\extensions.py", line 79, in do_read_info_from_repo
commit = repo.head.commit
File "G:\stable-diffusion-webui\venv\lib\site-packages\git\refs\symbolic.py", line 226, in _get_commit
obj = self._get_object()
File "G:\stable-diffusion-webui\venv\lib\site-packages\git\refs\symbolic.py", line 219, in _get_object
return Object.new_from_sha(self.repo, hex_to_bin(self.dereference_recursive(self.repo, self.path)))
File "G:\stable-diffusion-webui\venv\lib\site-packages\git\objects\base.py", line 94, in new_from_sha
oinfo = repo.odb.info(sha1)
File "G:\stable-diffusion-webui\venv\lib\site-packages\git\db.py", line 40, in info
hexsha, typename, size = self._git.get_object_header(bin_to_hex(binsha))
File "G:\stable-diffusion-webui\modules\gitpython_hack.py", line 18, in get_object_header
ret = subprocess.check_output(
File "C:\Users\a2212\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 421, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
File "C:\Users\a2212\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 505, in run
stdout, stderr = process.communicate(input, timeout=timeout)
File "C:\Users\a2212\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1154, in communicate
stdout, stderr = self._communicate(input, endtime, timeout)
File "C:\Users\a2212\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 1546, in _communicate
raise TimeoutExpired(self.args, orig_timeout)
subprocess.TimeoutExpired: Command '['git', 'cat-file', '--batch-check']' timed out after 2 seconds


*** The same TimeoutExpired traceback (git cat-file --batch-check timing out after 2 seconds) repeats for the novelai-2-local-prompt, openOutpaint-webUI-extension, and prompt-fusion-extension extensions.
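For context, the timeout comes from webui's gitpython_hack module shelling out to git with a hard 2-second limit. Roughly, and with the failure swallowed rather than re-raised (`object_header` is a hypothetical simplified helper, not the actual module):

```python
import subprocess

def object_header(repo_dir, sha, timeout=2.0):
    # Ask `git cat-file --batch-check` for one object's header, mirroring
    # what modules/gitpython_hack.py does. Return None instead of raising
    # when git is slow (e.g. cold cache on a spinning disk), git is missing,
    # or the directory is not a repository.
    try:
        out = subprocess.check_output(
            ["git", "cat-file", "--batch-check"],
            cwd=repo_dir,
            input=(sha + "\n").encode(),
            timeout=timeout,
        )
    except (subprocess.TimeoutExpired, subprocess.CalledProcessError, FileNotFoundError):
        return None
    return out.decode().split()
```

The real code lets the TimeoutExpired propagate, which is why a single slow repository aborts reading that extension's metadata; tolerating the timeout as above is one possible mitigation.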


@wzgrx

RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper_CUDA__index_select)
Time taken: 0.0 sec.

@MoonRide303

Old models still work fine with webui v1.5.1, but attempts to generate anything with SDXL (command line "--medvram --no-half-vae") end up with this:

---
*** Error completing request
*** Arguments: ('task(c87ks4kyjvgv80z)', 'whatever', '', [], 20, 0, False, False, 1, 1, 7, 0.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x0000020F91E765C0>, 0, False, False, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, 'LoRA', 'None', 1, 1, None, 'Refresh models', <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EC07177F0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EE718D3C0>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EC0715780>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020F91DB9870>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EC0834460>, <scripts.controlnet_ui.controlnet_ui_group.UiControlNetUnit object at 0x0000020EC0CC78B0>, False, False, False, False, '1:1,1:2,1:2', '0:0,0:0,0:1', '0.2,0.8,0.8', 20, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, 'Not set', True, True, '', '', '', '', '', 1.3, 'Not set', 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', 1.3, 'Not set', False, 'None', None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, None, None, False, 50, 10.0, 30.0, True, 0.0, 'Lanczos', 1, 0, 0, 75, 0.0001, 0.0, False, True, False, False) {}
    Traceback (most recent call last):
      File "D:\tools\Stable-Diffusion-web-UI\modules\call_queue.py", line 58, in f
        res = list(func(*args, **kwargs))
      File "D:\tools\Stable-Diffusion-web-UI\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "D:\tools\Stable-Diffusion-web-UI\modules\txt2img.py", line 62, in txt2img
        processed = processing.process_images(p)
      File "D:\tools\Stable-Diffusion-web-UI\modules\processing.py", line 677, in process_images
        res = process_images_inner(p)
      File "D:\tools\Stable-Diffusion-web-UI\extensions\sd-webui-controlnet\scripts\batch_hijack.py", line 42, in processing_process_images_hijack
        return getattr(processing, '__controlnet_original_process_images_inner')(p, *args, **kwargs)
      File "D:\tools\Stable-Diffusion-web-UI\modules\processing.py", line 783, in process_images_inner
        p.setup_conds()
      File "D:\tools\Stable-Diffusion-web-UI\modules\processing.py", line 1191, in setup_conds
        super().setup_conds()
      File "D:\tools\Stable-Diffusion-web-UI\modules\processing.py", line 364, in setup_conds
        self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
      File "D:\tools\Stable-Diffusion-web-UI\modules\processing.py", line 353, in get_conds_with_caching
        cache[1] = function(shared.sd_model, required_prompts, steps)
      File "D:\tools\Stable-Diffusion-web-UI\extensions\prompt-fusion-extension\lib_prompt_fusion\hijacker.py", line 15, in wrapper
        return function(*args, **kwargs, original_function=self.__original_functions[attribute])
      File "D:\tools\Stable-Diffusion-web-UI\extensions\prompt-fusion-extension\scripts\promptlang.py", line 38, in _hijacked_get_learned_conditioning
        flattened_conds = original_function(model, flattened_prompts, total_steps)
      File "D:\tools\Stable-Diffusion-web-UI\modules\prompt_parser.py", line 163, in get_learned_conditioning
        conds = model.get_learned_conditioning(texts)
      File "D:\tools\Stable-Diffusion-web-UI\modules\sd_models_xl.py", line 24, in get_learned_conditioning
        "original_size_as_tuple": torch.tensor([height, width], **devices_args).repeat(len(batch), 1),
    TypeError: must be real number, not NoneType

---

GPU: RTX 4080.
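The failing line builds `torch.tensor([height, width])`, so the error means one of the two dimensions reached `sd_models_xl.get_learned_conditioning` as `None` (here, dropped by a hijacked call chain). A minimal sketch of a defensive fallback; `size_conditioning` is a hypothetical helper, not webui code:

```python
def size_conditioning(height, width, default=1024):
    # SDXL conditions on original_size_as_tuple; if the dimensions are lost
    # along the way, torch.tensor([height, width]) raises
    # "TypeError: must be real number, not NoneType". Falling back to the
    # model's native 1024 avoids the crash, at the cost of slightly wrong
    # size conditioning.
    height = default if height is None else height
    width = default if width is None else width
    return (height, width)
```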

@Kadah

Disable any extension that hooks or hijacks generation when using SDXL until they are updated.

I found these so far that will cause generation to fail when using SDXL:

@eniora

SDXL 1024x1024 takes just over a minute for me on a mere 1070 8GB, so I'm not sure why people keep saying A1111 is slow; for some reason ComfyUI is slower for me (a minute and a half on Comfy versus a minute and 20 seconds on A1111), both using xformers. It's worth mentioning that on A1111 the --medvram flag is a must for 8GB-or-lower cards when using SDXL (otherwise generating 1024x1024 can take 15 minutes). @AUTOMATIC1111, can --medvram be enforced for low-VRAM (8GB or less) cards, at least only when SDXL is loaded, so people stop complaining about A1111 being slow with SDXL? I think Comfy does this automatically, which is why you don't see people complaining about it being super slow.

I just wish the refiner process could be semi-automated on A1111. For me personally it's not a big deal, because I don't really find the refiner that great, TBH; sometimes it makes the image worse while only improving small parts of it. And I think in the future, when SDXL is heavily finetuned and some LoRAs are around, the refiner won't really be needed anyway.

[image: Screenshot 2023-07-27 182139]

@MoonRide303

@Kadah Thx for the hint - I've disabled prompt-fusion-extension, and it started working.

@MoonRide303

@eniora I wanted to check out the refiner model, so I learned and played a bit with ComfyUI today. The proper setup (sampler, steps, denoise strength) might vary image to image, but I find it pretty useful and able to nicely refine output from the base model (from subtle changes to more noticeable style changes; you can also try different or refined prompts for it). A subtle starting setup you can try is euler_ancestral, 2 steps, denoise 0.1. It looks like this:
[image]

If I want the refiner to have a bigger impact, I increase both denoise and steps for it (denoise 0.25 with 5 steps, denoise 0.5 with 10 steps, etc.). Interesting thing I've just noticed: the refiner model is able not just to add details, but also to do things like blur the background to make the image look more like a portrait (without being asked for it in the prompt), like this:
[image]

@AUTOMATIC1111 It would be really nice to be able to use the refiner model similarly in your UI.

@Kadah

Link to refiner request: #11919

I think I'd like to see the refiner implemented similarly to HRF, UI-wise, with an option to at least save the pre-refiner output (like the existing option to save pre-HRF outputs).

@VladimirNCh

SDXL 1024x1024 is taking just over a minute for me on a mere 1070 8GB [...] I just wish the refiner process can be semi-automated on A1111 [...]

(trimmed quote of @eniora's comment above)

I run SDXL_0.9 on a Quadro K620 with 2GB. I manage one 512x712 generation; after that, webui_user needs to be restarted because of a constant low-memory error. Generation time is more than 15 minutes.

COMMANDLINE_ARGS= --opt-sub-quad-attention --lowvram --always-batch-cond-uncond --no-half-vae

@chdlc

Old models still work fine with webui v1.5.1, but attempts to generate anything with SDXL (command line "--medvram --no-half-vae") end up with this: [...] TypeError: must be real number, not NoneType [...] GPU: RTX 4080.

(trimmed quote of @MoonRide303's comment above)

Looks like it's the neutral prompt extension; just found the solution on Reddit.

@ClashSAN

I run SDXL_0.9 on a Quadro K620 with 2GB, I manage to do one 512x712 generation, after that webui_user needs to be restarted as there is a constant low memory error. Generation time more than 15 minutes

@VladimirNCh for larger sizes:
Try using the model with this vae: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix

With that VAE you can drop --no-half-vae, which gives ~66% more pixels.

--opt-sub-quad-attention --lowvram

OR

--opt-sdp-no-mem-attention --lowvram

@chdlc

@VladimirNCh for larger sizes: Try using the model with this vae: https://huggingface.co/madebyollin/sdxl-vae-fp16-fix [...]

(trimmed quote of @ClashSAN's comment above)

Thanks for sharing, I can generate now on a GTX1350 with 4GB 😅. It's pretty slow at launch, but at least it works now...

@ClashSAN

@chdelacr, what is your maximum size?

@remystic

Has anyone been able to run the SDXL model on a Mac M1? If so, can anyone help me with the settings? It generates very random things for me.

@MoonRide303

@remystic If it already generates something, then the first thing to check would be resolution. If you go with the old defaults (512x512) it generates garbage, but it should start generating proper output after changing it to 1024x1024 (or any other compatible resolution; see Appendix I of the SDXL paper). Aside from that, you can check out the Mac guide for SDXL from Hugging Face (based on diffusers).
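That resolution advice can be checked programmatically; `sdxl_friendly` is a made-up helper implementing the rule of thumb (roughly one megapixel, sides divisible by 64), not the official bucket list from the paper's Appendix I:

```python
def sdxl_friendly(width: int, height: int) -> bool:
    # SDXL's training buckets sit near one megapixel with side lengths
    # that are multiples of 64 (1024x1024, 1152x896, 1344x768, ...).
    # The old 512x512 default falls far outside this range, which is
    # why it tends to produce garbage.
    pixels = width * height
    return (width % 64 == 0
            and height % 64 == 0
            and 0.8 * 1024**2 <= pixels <= 1.25 * 1024**2)
```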

@ARDEACT

I cannot load the VAE as a separate file (a VAE file in the folder); I get an error. Without it, SDXL loads just fine.

@eniora

@ARDEACT make sure to use sdxl_vae.safetensors and not diffusion_pytorch_model.safetensors
If you're using sdxl_vae.safetensors and still get an error, then we need to see that error to try to help you.

@markrmiller

Seems kind of strange, but I can't get anything out of a trained SDXL model. I've tried multiple models trained with sd-scripts. Put them in Comfy and use the keyword, and I get the subject. Put them in automatic and use the keyword, and it's the same generic scene you'd get from the base model with nothing trained on that keyword.

@w-e-w

Repository owner locked as resolved and limited conversation to collaborators

Aug 2, 2023
