
Slow generation speed #40

Open
healthyfat opened this issue Jun 15, 2024 · 5 comments

Comments

@healthyfat

I am having a problem with very slow generation speed when using AutoCFG. I wonder whether this might have anything to do with the warning below, or whether my 6 GB of VRAM is simply too little for this node?

Requested to load SDXL
Loading 1 new model
0%| | 0/24 [00:00<?, ?it/s]

C:\Programs\ComfyUI\python_embeded\Lib\site-packages\torchsde\_brownian\brownian_interval.py:608: UserWarning: Should have tb<=t1 but got tb=14.614644050598145 and t1=14.614643.
warnings.warn(f"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.")

25%|████████████████████▊ | 6/24 [06:50<20:29, 68.31s/it]
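Side note for anyone hitting the same warning: a `tb`/`t1` mismatch this small is the typical symptom of a time value passing through a 32-bit float round-trip while the interval end is compared at a different precision, so it is cosmetic rather than a speed problem. A minimal sketch of that effect (the `t1` value is taken from the log above; attributing the warning to a float32 round-trip is an assumption, not a confirmed diagnosis of torchsde's internals):

```python
import struct

# End of the integration interval, as printed in the warning (t1).
t_end = 14.614643

# Round-trip the same value through 32-bit precision, as happens when a
# schedule is stored in a float32 tensor and read back as a Python float.
tb = struct.unpack("f", struct.pack("f", t_end))[0]

# The round-trip lands on the nearest representable float32, which here
# is slightly *above* t_end, so a "Should have tb<=t1" check fires.
print(f"tb={tb!r}, t1={t_end!r}, tb <= t1: {tb <= t_end}")
```

The difference is on the order of 1e-7, which matches the log above (14.614644... vs 14.614643).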

@Extraltodeus
Owner

Hello! Unfortunately I can't really tell from this alone. Your message is about something else, and the iteration speed, while indeed slow, could come from anything.

@healthyfat
Author

You're right, that warning is not due to AutoCFG. I will just mention that after I cancel the process and start it again, it runs normally, as in the example below.

Loading 1 new model
0%| | 0/24 [00:00<?, ?it/s]

C:\Programs\ComfyUI\python_embeded\Lib\site-packages\torchsde\_brownian\brownian_interval.py:608:
UserWarning: Should have tb<=t1 but got tb=14.614644050598145 and t1=14.614643.
warnings.warn(f"Should have {tb_name}<=t1 but got {tb_name}={tb} and t1={self._end}.")

4%|███▍ | 1/24 [01:38<37:34, 98.04s/it]
Processing interrupted
Prompt executed in 189.60 seconds

got prompt
100%|██████████████████████████████████████████████████████████████████████████████████| 24/24 [00:52<00:00, 2.17s/it]
Requested to load AutoencoderKL
Loading 1 new model
Prompt executed in 74.84 seconds

@Extraltodeus
Owner

Sometimes while using Comfy, the first time a given set of parameters is used I get a slowdown too, for some reason. But I'm not sure about your setup.

@healthyfat
Author

Setup? Do you mean my hardware or Comfy? I use a GeForce RTX 2060 with 6 GB of VRAM.

@Extraltodeus
Owner

ComfyUI can automatically switch into low-VRAM mode.

Try starting the UI with the --normalvram argument.
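For example, with the Windows embedded-Python layout visible in the logs above (the exact paths are an assumption about your install; adjust them for your setup):

```shell
REM Launch ComfyUI, overriding the automatic low-VRAM fallback
cd C:\Programs\ComfyUI
.\python_embeded\python.exe -s ComfyUI\main.py --normalvram
```

If this makes things worse (out-of-memory errors), drop the flag again and let ComfyUI manage VRAM automatically.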
