
[Bug]: Reproducible 'NoneType is not iterable' when switching checkpoints for Hires. fix (XL->XL): works once, then fails with the error from then on #505

Open
Dawgmastah opened this issue Mar 6, 2024 · 6 comments
Labels
bug (Confirmed report of something that isn't working)

Comments

@Dawgmastah

Dawgmastah commented Mar 6, 2024

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

Image generation fails from the second attempt onward when generating XL images with a different checkpoint set for Hires. fix.

Steps to reproduce the problem

To reproduce, load an XL model, then set a separate XL model for Hires. fix.
(Both checkpoints can use their own VAE, or the VAE can be overridden; the result is the same.)

Generate at 1024x1024 with a Hires. fix upscale of 1.5x.
(If it matters, I'm using different LoRAs for the base inference and the Hires. fix pass.)
The first image generation works fine, including extra steps like ADetailer.
From the second image on, it keeps failing until the UI is restarted.
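
For a scripted reproduction, something along these lines should exercise the same two-pass flow. This is only a minimal sketch: it assumes the UI was launched with --api and that Forge exposes the same /sdapi/v1/txt2img fields as upstream webui (enable_hr, hr_scale, hr_checkpoint_name); the prompt and checkpoint name are placeholders.

    # Hypothetical reproduction via the txt2img API (not part of the original report).
    # Assumes the UI is running locally with --api; "hr_checkpoint_name" selects the
    # separate XL checkpoint used for the Hires. fix pass (placeholder value below).
    import requests

    URL = "http://127.0.0.1:7860/sdapi/v1/txt2img"

    payload = {
        "prompt": "test prompt",
        "width": 1024,
        "height": 1024,
        "steps": 25,
        "enable_hr": True,
        "hr_scale": 1.5,
        "denoising_strength": 0.5,
        "hr_checkpoint_name": "kohakuXLDelta_rev1",  # placeholder separate XL checkpoint
    }

    # The first request completes; the failure described above appears on the second.
    for i in range(2):
        r = requests.post(URL, json=payload, timeout=600)
        print(f"generation {i + 1}: HTTP {r.status_code}")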

What should have happened?

Continue generating without issue

What browsers do you use to access the UI?

Google Chrome

Sysinfo

sysinfo-2024-02-07-15-44.json

Console logs

[LORA] Loaded X:\STABLEDIFFUSION\AUTOMATIC11111\models\Lora\Lora\SDXLPonyStyle\persona_xl_pd_continue_4-000001.safetensors for SDXL-CLIP with 264 keys at weight 1.0 (skipped 0 keys)
To load target model SDXLClipModel
Begin to load 1 model
Reuse 1 loaded models
Moving model(s) has taken 0.30 seconds
To load target model SDXL
Begin to load 1 model
Reuse 1 loaded models
Moving model(s) has taken 0.26 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00,  3.89it/s]
To load target model AutoencoderKL████████████████████                                 | 25/50 [00:07<00:05,  4.17it/s]
Begin to load 1 model
Moving model(s) has taken 0.04 seconds
Loading weights [45c05e1d94] from X:\STABLEDIFFUSION\stable-diffusion-webui-forge\models\Stable-diffusion\SDXL\kohakuXLDelta_rev1.safetensors
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.logit_scale', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
loaded straight to GPU
To load target model SDXL
Begin to load 1 model
Moving model(s) has taken 0.02 seconds
Loading VAE weights specified in settings: X:\STABLEDIFFUSION\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
No Image data blocks found.
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.41 seconds
Model loaded in 5.8s (unload existing model: 0.4s, forge load real models: 3.5s, load VAE: 0.2s, load textual inversion embeddings: 1.1s, calculate empty prompt: 0.5s).
Cleanup minimal inference memory.
tiled upscale: 100%|███████████████████████████████████████████████████████████████████| 36/36 [00:04<00:00,  8.92it/s]
To load target model AutoencoderKL
Begin to load 1 model
Moving model(s) has taken 0.03 seconds
[LORA] Loaded X:\STABLEDIFFUSION\AUTOMATIC11111\models\Lora\Lora\SDXL_Style\persona_xl_delta_rev1_2-000001.safetensors for SDXL-UNet with 788 keys at weight 1.0 (skipped 0 keys)
[LORA] Loaded X:\STABLEDIFFUSION\AUTOMATIC11111\models\Lora\Lora\SDXL_Style\persona_xl_delta_rev1_2-000001.safetensors for SDXL-CLIP with 264 keys at weight 1.0 (skipped 0 keys)
To load target model SDXLClipModel
Begin to load 1 model
Reuse 1 loaded models
Moving model(s) has taken 0.08 seconds
To load target model SDXL
Begin to load 1 model
Reuse 1 loaded models
Moving model(s) has taken 0.27 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:16<00:00,  1.49it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 50/50 [00:35<00:00,  1.49it/s]
0: 640x640 1 face, 5.0ms
Speed: 56.1ms preprocess, 5.0ms inference, 407.8ms postprocess per image at shape (1, 3, 640, 640)
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.09 seconds
To load target model SDXL
Begin to load 1 model
Moving model(s) has taken 0.25 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 11/11 [00:02<00:00,  4.17it/s]
Total progress: 100%|██████████████████████████████████████████████████████████████████| 50/50 [00:43<00:00,  1.15it/s]
Loading weights [821aa5537f] from X:\STABLEDIFFUSION\stable-diffusion-webui-forge\models\Stable-diffusion\SDXL\autismmixSDXL_autismmixPony.safetensors
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
loaded straight to GPU
To load target model SDXL
Begin to load 1 model
Moving model(s) has taken 0.03 seconds
Loading VAE weights specified in settings: X:\STABLEDIFFUSION\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
No Image data blocks found.
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.43 seconds
Model loaded in 5.8s (unload existing model: 0.3s, forge load real models: 3.6s, load VAE: 0.2s, load textual inversion embeddings: 1.1s, calculate empty prompt: 0.5s).
[LORA] Loaded X:\STABLEDIFFUSION\AUTOMATIC11111\models\Lora\Lora\SDXLPonyStyle\persona_xl_pd_continue_4-000001.safetensors for SDXL-UNet with 788 keys at weight 1.0 (skipped 0 keys)
[LORA] Loaded X:\STABLEDIFFUSION\AUTOMATIC11111\models\Lora\Lora\SDXLPonyStyle\persona_xl_pd_continue_4-000001.safetensors for SDXL-CLIP with 264 keys at weight 1.0 (skipped 0 keys)
To load target model SDXL
Begin to load 1 model
Reuse 1 loaded models
Moving model(s) has taken 0.25 seconds
100%|██████████████████████████████████████████████████████████████████████████████████| 25/25 [00:06<00:00,  4.13it/s]
To load target model AutoencoderKL████████████████████                                 | 25/50 [00:06<00:06,  4.10it/s]
Begin to load 1 model
Moving model(s) has taken 0.18 seconds
Loading weights [45c05e1d94] from X:\STABLEDIFFUSION\stable-diffusion-webui-forge\models\Stable-diffusion\SDXL\kohakuXLDelta_rev1.safetensors
model_type EPS
UNet ADM Dimension 2816
Using pytorch attention in VAE
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
Using pytorch attention in VAE
extra {'cond_stage_model.clip_g.logit_scale', 'cond_stage_model.clip_l.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids'}
loaded straight to GPU
To load target model SDXL
Begin to load 1 model
Moving model(s) has taken 0.03 seconds
Loading VAE weights specified in settings: X:\STABLEDIFFUSION\stable-diffusion-webui-forge\models\VAE\sdxl_vae.safetensors
No Image data blocks found.
To load target model SDXLClipModel
Begin to load 1 model
Moving model(s) has taken 0.54 seconds
Model loaded in 6.0s (unload existing model: 0.4s, forge load real models: 3.5s, load VAE: 0.2s, load textual inversion embeddings: 1.1s, calculate empty prompt: 0.7s).
Cleanup minimal inference memory.
tiled upscale: 100%|███████████████████████████████████████████████████████████████████| 36/36 [00:03<00:00,  9.52it/s]
To load target model AutoencoderKL
Begin to load 1 model
Moving model(s) has taken 0.05 seconds
[LORA] Loaded X:\STABLEDIFFUSION\AUTOMATIC11111\models\Lora\Lora\SDXL_Style\persona_xl_delta_rev1_2-000001.safetensors for SDXL-UNet with 788 keys at weight 1.0 (skipped 0 keys)
[LORA] Loaded X:\STABLEDIFFUSION\AUTOMATIC11111\models\Lora\Lora\SDXL_Style\persona_xl_delta_rev1_2-000001.safetensors for SDXL-CLIP with 264 keys at weight 1.0 (skipped 0 keys)
To load target model SDXL
Begin to load 1 model
ERROR diffusion_model.middle_block.1.transformer_blocks.7.ff.net.0.proj.weight CUDA out of memory. Tried to allocate 50.00 MiB. GPU 0 has a total capacty of 23.99 GiB of which 0 bytes is free. Of the allocated memory 22.51 GiB is allocated by PyTorch, and 684.34 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
ERROR diffusion_model.middle_block.1.transformer_blocks.8.ff.net.0.proj.weight CUDA out of memory. Tried to allocate 50.00 MiB. GPU 0 has a total capacty of 23.99 GiB of which 0 bytes is free. Of the allocated memory 22.53 GiB is allocated by PyTorch, and 667.46 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Traceback (most recent call last):
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 37, in loop
    task.work()
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\modules_forge\main_thread.py", line 26, in work
    self.result = self.func(*self.args, **self.kwargs)
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\modules\txt2img.py", line 111, in txt2img_function
    processed = processing.process_images(p)
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\modules\processing.py", line 752, in process_images
    res = process_images_inner(p)
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\modules\processing.py", line 922, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\modules\processing.py", line 1291, in sample
    return self.sample_hr_pass(samples, decoded_samples, seeds, subseeds, subseed_strength, prompts)
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\modules\processing.py", line 1388, in sample_hr_pass
    samples = self.sampler.sample_img2img(self, samples, noise, self.hr_c, self.hr_uc, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\modules\sd_samplers_kdiffusion.py", line 145, in sample_img2img
    sampling_prepare(self.model_wrap.inner_model.forge_objects.unet, x=x)
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\modules_forge\forge_sampler.py", line 105, in sampling_prepare
    model_management.load_models_gpu(
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\ldm_patched\modules\model_management.py", line 494, in load_models_gpu
    loaded_model.model_load(async_kept_memory)
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\ldm_patched\modules\model_management.py", line 332, in model_load
    raise e
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\ldm_patched\modules\model_management.py", line 328, in model_load
    self.real_model = self.model.patch_model(device_to=patch_model_to) #TODO: do something with loras and offloading to CPU
  File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\ldm_patched\modules\model_patcher.py", line 216, in patch_model
    out_weight = self.calculate_weight(self.patches[key], temp_weight, key).to(weight.dtype)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 26.00 MiB. GPU 0 has a total capacty of 23.99 GiB of which 0 bytes is free. Of the allocated memory 22.53 GiB is allocated by PyTorch, and 666.16 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF


CUDA out of memory. Tried to allocate 26.00 MiB. GPU 0 has a total capacty of 23.99 GiB of which 0 bytes is free. Of the allocated memory 22.53 GiB is allocated by PyTorch, and 666.16 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
*** Error completing request.
...
(tons of prompt arguments omitted)
...
    Traceback (most recent call last):
      File "X:\STABLEDIFFUSION\stable-diffusion-webui-forge\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
    TypeError: 'NoneType' object is not iterable
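
For what it's worth, the final TypeError looks like a downstream symptom rather than the root cause: the real failure is the CUDA out-of-memory error raised while the LoRA patches are applied to the hires checkpoint, after which the txt2img task produces no result and the wrapper in modules/call_queue.py fails when it tries to iterate that None. A simplified sketch of that masking behaviour (not the actual webui code):

    # Simplified sketch (not the actual webui code) of how an upstream error
    # ends up reported as "'NoneType' object is not iterable".
    def txt2img_task():
        try:
            raise RuntimeError("CUDA out of memory")  # stands in for the real OOM
        except RuntimeError as err:
            print(err)
            return None  # no result object is produced

    def wrapped_call(func):
        return list(func())  # TypeError: 'NoneType' object is not iterable

    wrapped_call(txt2img_task)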

Additional information

No response

@Dawgmastah
Author

Rolled back to commit b59deaa.

The problem is not present there.
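
For reference, a minimal sketch of that rollback (assuming a git install of webui-forge, run from the stable-diffusion-webui-forge folder with git on PATH):

    # Hypothetical helper to check out the known-good commit mentioned above.
    import subprocess

    subprocess.run(["git", "checkout", "b59deaa"], check=True)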

@fgtm2023

fgtm2023 commented Mar 8, 2024

I got the same error (TypeError: 'NoneType' object is not iterable) with my AMD RX Vega 56 GPU on Ubuntu, and fixed it by reverting to the previous commit.

@CCpt5

CCpt5 commented Mar 13, 2024

I have this problem as well but haven't narrowed down what causes it for me, only that it happens often enough to be frustrating.

@MysticDaedra

It doesn't matter whether I change models or not; the error always appears after a single working generation. The only current solution is to restart SD Forge, after which it again works for... one generation.

@catboxanon added the bug (Confirmed report of something that isn't working) label on Mar 19, 2024
@retronomi

retronomi commented Mar 19, 2024

I also have this issue; it happens after each generation, but only with SDXL models.

@lebakerino

I get this NoneType error with all sorts of things: sometimes changing the prompt, turning on an extension, etc. I often have to restart the webui or the browser. It's very irritating, especially as it doesn't seem to be linked to one obvious cause I can think of; maybe it's some buggy extension, but as I say, it goes away after a restart.
