When the free_gpu_mem option is enabled, the inpaint function does not move the model back onto the GPU before sampling, so inpainting fails with a cpu/cuda device-mismatch error.
Screenshots
No response
Additional context
```
Traceback (most recent call last):
  File "f:\novel\invokeai\ldm\generate.py", line 492, in prompt2image
    results = generator.generate(
  File "f:\novel\invokeai\ldm\invoke\generator\base.py", line 98, in generate
    image = make_image(x_T)
  File "F:\Anaconda3\envs\invokeai\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "f:\novel\invokeai\ldm\invoke\generator\inpaint.py", line 295, in make_image
    samples = sampler.decode(
  File "F:\Anaconda3\envs\invokeai\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "f:\novel\invokeai\ldm\models\diffusion\sampler.py", line 365, in decode
    outs = self.p_sample(
  File "F:\Anaconda3\envs\invokeai\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "f:\novel\invokeai\ldm\models\diffusion\ddim.py", line 58, in p_sample
    e_t = self.invokeai_diffuser.do_diffusion_step(
  File "f:\novel\invokeai\ldm\models\diffusion\shared_invokeai_diffusion.py", line 107, in do_diffusion_step
    unconditioned_next_x, conditioned_next_x = self.apply_standard_conditioning(x, sigma, unconditioning, conditioning)
  File "f:\novel\invokeai\ldm\models\diffusion\shared_invokeai_diffusion.py", line 123, in apply_standard_conditioning
    unconditioned_next_x, conditioned_next_x = self.model_forward_callback(x_twice, sigma_twice,
  File "f:\novel\invokeai\ldm\models\diffusion\ddim.py", line 13, in <lambda>
    model_forward_callback = lambda x, sigma, cond: self.model.apply_model(x, sigma, cond))
  File "f:\novel\invokeai\ldm\models\diffusion\ddpm.py", line 1441, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "F:\Anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "f:\novel\invokeai\ldm\models\diffusion\ddpm.py", line 2167, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "F:\Anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "f:\novel\invokeai\ldm\modules\diffusionmodules\openaimodel.py", line 798, in forward
    emb = self.time_embed(t_emb)
  File "F:\Anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\Anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\container.py", line 139, in forward
    input = module(input)
  File "F:\Anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "F:\Anaconda3\envs\invokeai\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
    return F.linear(input, self.weight, self.bias)
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument mat1 in method wrapper_addmm)
```
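The traceback shows the time-embedding Linear layer still holding CPU weights while the latents are on cuda:0, which is consistent with free_gpu_mem having offloaded the model after a previous run and the inpaint path never reloading it. Below is a toy sketch of the missing step, with plain strings standing in for torch devices (all names here are illustrative, not InvokeAI's actual API; in the real code the reload would be a `model.to(device)` call on the torch module before sampling starts):

```python
class OffloadableModel:
    """Toy stand-in for a diffusion model whose weights can be
    offloaded to the CPU to free GPU memory."""

    def __init__(self, device: str = "cuda:0"):
        self.device = device

    def offload(self) -> None:
        # What free_gpu_mem effectively does after a generation finishes.
        self.device = "cpu"

    def to(self, device: str) -> "OffloadableModel":
        # In real code this is torch's nn.Module.to(device).
        self.device = device
        return self


def ensure_on_device(model: OffloadableModel, device: str) -> OffloadableModel:
    """Reload the model onto the target device before sampling.

    This is the step the inpaint path appears to skip when
    free_gpu_mem has previously moved the weights to the CPU.
    """
    if model.device != device:
        model.to(device)
    return model


# The failing sequence: offload after one run, then start an inpaint run.
model = OffloadableModel()
model.offload()                    # free_gpu_mem moved the weights to CPU
ensure_on_device(model, "cuda:0")  # the hypothetical missing step
print(model.device)                # → cuda:0
```

Without the `ensure_on_device` step the toy model stays on "cpu", mirroring the mixed cpu/cuda state that triggers the RuntimeError above.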
Contact Details
No response
Is there an existing issue for this?
OS
Linux
GPU
cuda
VRAM
No response