
M1 MacOS Install - Cuda Error when trying to run Karlo #6

Open
jwooldridge234 opened this issue Jan 18, 2023 · 2 comments
Labels
enhancement New feature or request

Comments


jwooldridge234 commented Jan 18, 2023

Hi there!

Getting the following error after following the macOS/Linux install instructions in the README:

```
Traceback (most recent call last):
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 565, in _run_script
    exec(code, module.__dict__)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/app.py", line 143, in <module>
    main()
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/app.py", line 104, in main
    images = generate(
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/models/generate.py", line 83, in generate
    pipe = make_pipeline_generator(cpu=cpu)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/streamlit/runtime/legacy_caching/caching.py", line 629, in wrapped_func
    return get_or_create_cached_value()
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/streamlit/runtime/legacy_caching/caching.py", line 611, in get_or_create_cached_value
    return_value = non_optional_func(*args, **kwargs)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/models/generate.py", line 42, in make_pipeline_generator
    pipe = pipe.to("cuda")
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/diffusers/pipeline_utils.py", line 270, in to
    module.to(torch_device)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 989, in to
    return self._apply(convert)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 641, in _apply
    module._apply(fn)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 664, in _apply
    param_applied = fn(param)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/nn/modules/module.py", line 987, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
  File "/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/torch/cuda/__init__.py", line 221, in _lazy_init
    raise AssertionError("Torch not compiled with CUDA enabled")
AssertionError: Torch not compiled with CUDA enabled
```

System specs:

```
DataType SPHardwareDataType

Software:

  System Software Overview:

    System Version: macOS 13.0.1 (22A400)
    Kernel Version: Darwin 22.1.0
    Boot Volume: Macintosh HD
    Boot Mode: Normal
    Computer Name: Jack’s MacBook Pro
    User Name: Jack Wooldridge (jackwooldridge)
    Secure Virtual Memory: Enabled
    System Integrity Protection: Enabled

Hardware:

  Hardware Overview:

    Model Name: MacBook Pro
    Model Identifier: MacBookPro18,3
    Model Number: MKGP3LL/A
    Chip: Apple M1 Pro
    Total Number of Cores: 8 (6 performance and 2 efficiency)
    Memory: 16 GB
    System Firmware Version: 8419.41.10
    OS Loader Version: 8419.41.10
```

I tried updating the generator file to run on MPS, but this causes fatal errors and crashes the application.

It's entirely possible I'm missing something glaringly obvious, but I can't think of anything at the moment.

@kpthedev (Owner)

Unfortunately I don't have an M1 Mac to test, but I believe you just have to change all the "cuda" device references in generate.py to "mps". That would be on lines 42, 70, 84, 105.

I actually just updated the repo and removed another CUDA reference, so make sure you pull the latest commit. Hope that helps!
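As a rough sketch of that idea (not the repo's actual code), the hard-coded device strings could be replaced with a small fallback helper. The `pick_device` name and the capability-flag parameters are hypothetical, chosen so the selection logic is testable without `torch` installed; in real use the flags would come from `torch.cuda.is_available()` and `torch.backends.mps.is_available()`:

```python
def pick_device(cuda_available: bool, mps_available: bool) -> str:
    """Choose the best available torch device string.

    Prefers CUDA, then Apple's MPS backend, then plain CPU.
    """
    if cuda_available:
        return "cuda"
    if mps_available:
        return "mps"
    return "cpu"


# On an M1 Mac without CUDA this would select "mps":
# device = pick_device(torch.cuda.is_available(),
#                      torch.backends.mps.is_available())
# pipe = pipe.to(device)
```

That way the same generate.py works on CUDA boxes and Apple Silicon without editing device strings by hand.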

@jwooldridge234 (Author)

Unfortunately, it still breaks. Honestly, I think this might be an issue with Karlo on MPS, as I remember having this problem on the command line too. I replaced "cuda" with "mps" in the generator file (and removed "torch.cuda.empty_cache()", since MPS doesn't have an equivalent), but it threw the same error either way:

```
Fetching 20 files: 100%|█████████████████████| 20/20 [00:00<00:00, 17494.49it/s]
/Users/jackwooldridge/StableDiffusion/stable-karlo/.env/lib/python3.9/site-packages/diffusers/pipelines/unclip/pipeline_unclip.py:144: UserWarning: The operator 'aten::repeat_interleave.self_int' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/mps/MPSFallback.mm:11.)
  text_embeddings = text_embeddings.repeat_interleave(num_images_per_prompt, dim=0)
100%|███████████████████████████████████████████| 25/25 [00:13<00:00, 1.86it/s]
Assertion failed: (vals.isSplat()), function getConstantValue, file CommonRuntimeUtils.h, line 58.
zsh: abort      streamlit run app.py
(.env) (base) jackwooldridge@Jacks-MBP stable-karlo % /Users/jackwooldridge/miniconda3/lib/python3.9/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 1 leaked semaphore objects to clean up at shutdown
  warnings.warn('resource_tracker: There appear to be %d
```

I remember managing to get the diffusers version running on the CPU (by switching the dtype away from half/float16), but I couldn't get it to render anything but garbled noise. No pressure to fix this, as I don't think it's an easy lift.
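The dtype observation matches common practice with diffusion pipelines: float16 kernels are largely unimplemented on CPU, so CPU runs generally need full float32 (though, as noted above, that alone didn't produce usable output here). A hypothetical helper sketching that rule:

```python
def pick_dtype_name(device: str) -> str:
    """Half precision is only practical on accelerator backends;
    CPU inference should fall back to full float32."""
    return "float16" if device in ("cuda", "mps") else "float32"


# e.g. pipeline loading might map this name to a torch dtype:
# dtype = getattr(torch, pick_dtype_name(device))
```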

@kpthedev kpthedev added the enhancement New feature or request label Jan 22, 2023