
having trouble getting cuda installed with this #27

Open

gsgoldma opened this issue Feb 1, 2023 · 0 comments

gsgoldma commented Feb 1, 2023

I've been using Python 3.10.6. Is that incompatible with this repo?

─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ D:\paint\runner.py:71 in │
│ │
│ 68 │ color_context = settings["color_context"] │
│ 69 │ input_prompt = settings["input_prompt"] │
│ 70 │ │
│ ❱ 71 │ img = paint_with_words( │
│ 72 │ │ color_context=color_context, │
│ 73 │ │ color_map_image=color_map_image, │
│ 74 │ │ input_prompt=input_prompt, │
│ │
│ D:\Python\Python310\lib\site-packages\torch\autograd\grad_mode.py:27 in decorate_context │
│ │
│ 24 │ │ @functools.wraps(func) │
│ 25 │ │ def decorate_context(*args, **kwargs): │
│ 26 │ │ │ with self.clone(): │
│ ❱ 27 │ │ │ │ return func(*args, **kwargs) │
│ 28 │ │ return cast(F, decorate_context) │
│ 29 │ │
│ 30 │ def _wrap_generator(self, func): │
│ │
│ D:\Python\Python310\lib\site-packages\torch\amp\autocast_mode.py:14 in decorate_autocast │
│ │
│ 11 │ @functools.wraps(func) │
│ 12 │ def decorate_autocast(*args, **kwargs): │
│ 13 │ │ with autocast_instance: │
│ ❱ 14 │ │ │ return func(*args, **kwargs) │
│ 15 │ decorate_autocast.__script_unsupported = '@autocast() decorator is not supported in │
│ 16 │ return decorate_autocast │
│ 17 │
│ │
│ D:\paint\paint_with_words\paint_with_words.py:255 in paint_with_words │
│ │
│ 252 ): │
│ 253 │ │
│ 254 │ vae, unet, text_encoder, tokenizer, scheduler = ( │
│ ❱ 255 │ │ pww_load_tools( │
│ 256 │ │ │ device, │
│ 257 │ │ │ scheduler_type, │
│ 258 │ │ │ local_model_path=local_model_path, │
│ │
│ D:\paint\paint_with_words\paint_with_words.py:142 in pww_load_tools │
│ │
│ 139 │ │ local_files_only=local_path_only, │
│ 140 │ ) │
│ 141 │ │
│ ❱ 142 │ vae.to(device), unet.to(device), text_encoder.to(device) │
│ 143 │ │
│ 144 │ for _module in unet.modules(): │
│ 145 │ │ if _module.__class__.__name__ == "CrossAttention": │
│ │
│ D:\Python\Python310\lib\site-packages\torch\nn\modules\module.py:987 in to │
│ │
│ 984 │ │ │ │ │ │ │ non_blocking, memory_format=convert_to_format) │
│ 985 │ │ │ return t.to(device, dtype if t.is_floating_point() or t.is_complex() else No │
│ 986 │ │ │
│ ❱ 987 │ │ return self._apply(convert) │
│ 988 │ │
│ 989 │ def register_backward_hook( │
│ 990 │ │ self, hook: Callable[['Module', _grad_t, _grad_t], Union[None, Tensor]] │
│ │
│ D:\Python\Python310\lib\site-packages\torch\nn\modules\module.py:639 in _apply │
│ │
│ 636 │ │
│ 637 │ def _apply(self, fn): │
│ 638 │ │ for module in self.children(): │
│ ❱ 639 │ │ │ module._apply(fn) │
│ 640 │ │ │
│ 641 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 642 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ D:\Python\Python310\lib\site-packages\torch\nn\modules\module.py:639 in _apply │
│ │
│ 636 │ │
│ 637 │ def _apply(self, fn): │
│ 638 │ │ for module in self.children(): │
│ ❱ 639 │ │ │ module._apply(fn) │
│ 640 │ │ │
│ 641 │ │ def compute_should_use_set_data(tensor, tensor_applied): │
│ 642 │ │ │ if torch._has_compatible_shallow_copy_type(tensor, tensor_applied): │
│ │
│ D:\Python\Python310\lib\site-packages\torch\nn\modules\module.py:662 in _apply │
│ │
│ 659 │ │ │ # track autograd history of param_applied, so we have to use │
│ 660 │ │ │ # with torch.no_grad(): │
│ 661 │ │ │ with torch.no_grad(): │
│ ❱ 662 │ │ │ │ param_applied = fn(param) │
│ 663 │ │ │ should_use_set_data = compute_should_use_set_data(param, param_applied) │
│ 664 │ │ │ if should_use_set_data: │
│ 665 │ │ │ │ param.data = param_applied │
│ │
│ D:\Python\Python310\lib\site-packages\torch\nn\modules\module.py:985 in convert │
│ │
│ 982 │ │ │ if convert_to_format is not None and t.dim() in (4, 5): │
│ 983 │ │ │ │ return t.to(device, dtype if t.is_floating_point() or t.is_complex() els │
│ 984 │ │ │ │ │ │ │ non_blocking, memory_format=convert_to_format) │
│ ❱ 985 │ │ │ return t.to(device, dtype if t.is_floating_point() or t.is_complex() else No │
│ 986 │ │ │
│ 987 │ │ return self._apply(convert) │
│ 988 │
│ │
│ D:\Python\Python310\lib\site-packages\torch\cuda\__init__.py:221 in _lazy_init │
│ │
│ 218 │ │ │ │ "Cannot re-initialize CUDA in forked subprocess. To use CUDA with " │
│ 219 │ │ │ │ "multiprocessing, you must use the 'spawn' start method") │
│ 220 │ │ if not hasattr(torch._C, '_cuda_getDeviceCount'): │
│ ❱ 221 │ │ │ raise AssertionError("Torch not compiled with CUDA enabled") │
│ 222 │ │ if _cudart is None: │
│ 223 │ │ │ raise AssertionError( │
│ 224 │ │ │ │ "libcudart functions unavailable. It looks like you have a broken build? │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AssertionError: Torch not compiled with CUDA enabled
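
The final assertion means the installed PyTorch wheel is a CPU-only build rather than a CUDA-enabled one. A minimal way to confirm which build is installed (a generic diagnostic sketch, not part of this repo) is:

```python
import torch

# A CPU-only wheel reports no CUDA version and no available device,
# which is exactly what raises "Torch not compiled with CUDA enabled".
print(torch.__version__)          # e.g. "1.13.1+cpu" vs "1.13.1+cu117"
print(torch.version.cuda)         # None on a CPU-only build
print(torch.cuda.is_available())  # False on a CPU-only build
```

If this reports a CPU-only build, reinstalling torch with a CUDA-enabled wheel (using the install command generated by the selector on pytorch.org) should clear the assertion.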
