
Issue installing on macOS M2 #15

Closed · enzyme69 opened this issue Nov 13, 2023 · 14 comments

@enzyme69

I kept getting this error when installing requirements:

pip install -r requirements.txt
Looking in indexes: https://pypi.org/simple, https://download.pytorch.org/whl/cu121
Collecting diffusers==0.23.0 (from -r requirements.txt (line 1))
  Using cached diffusers-0.23.0-py3-none-any.whl.metadata (17 kB)
Collecting transformers==4.34.1 (from -r requirements.txt (line 2))
  Using cached transformers-4.34.1-py3-none-any.whl.metadata (121 kB)
Collecting gradio==3.50.2 (from -r requirements.txt (line 3))
  Using cached gradio-3.50.2-py3-none-any.whl.metadata (17 kB)
Requirement already satisfied: torch==2.1.0 in ./venv/lib/python3.10/site-packages (from -r requirements.txt (line 5)) (2.1.0)
Collecting fastapi==0.104.0 (from -r requirements.txt (line 6))
  Using cached fastapi-0.104.0-py3-none-any.whl.metadata (24 kB)
Collecting uvicorn==0.23.2 (from -r requirements.txt (line 7))
  Using cached uvicorn-0.23.2-py3-none-any.whl.metadata (6.2 kB)
Collecting Pillow==10.1.0 (from -r requirements.txt (line 8))
  Using cached Pillow-10.1.0-cp310-cp310-macosx_11_0_arm64.whl.metadata (9.5 kB)
Collecting accelerate==0.24.0 (from -r requirements.txt (line 9))
  Using cached accelerate-0.24.0-py3-none-any.whl.metadata (18 kB)
Collecting compel==2.0.2 (from -r requirements.txt (line 10))
  Using cached compel-2.0.2-py3-none-any.whl.metadata (12 kB)
Collecting controlnet-aux==0.0.7 (from -r requirements.txt (line 11))
  Using cached controlnet_aux-0.0.7.tar.gz (202 kB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... done
  Preparing metadata (pyproject.toml) ... done
Collecting peft==0.6.0 (from -r requirements.txt (line 12))
  Using cached peft-0.6.0-py3-none-any.whl.metadata (23 kB)
Collecting xformers (from -r requirements.txt (line 13))
  Using cached xformers-0.0.22.post7.tar.gz (3.8 MB)
  Installing build dependencies ... done
  Getting requirements to build wheel ... error
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [17 lines of output]
      Traceback (most recent call last):
        File "/Users/jimmygunawan/Documents/LCMREALTIME/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/Users/jimmygunawan/Documents/LCMREALTIME/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
        File "/Users/jimmygunawan/Documents/LCMREALTIME/Real-Time-Latent-Consistency-Model/venv/lib/python3.10/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
        File "/private/var/folders/dd/6tfdfc6x5pz37mrm2msqyc9r0000gn/T/pip-build-env-7xkq2oba/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=['wheel'])
        File "/private/var/folders/dd/6tfdfc6x5pz37mrm2msqyc9r0000gn/T/pip-build-env-7xkq2oba/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 325, in _get_build_requires
          self.run_setup()
        File "/private/var/folders/dd/6tfdfc6x5pz37mrm2msqyc9r0000gn/T/pip-build-env-7xkq2oba/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 507, in run_setup
          super(_BuildMetaLegacyBackend, self).run_setup(setup_script=setup_script)
        File "/private/var/folders/dd/6tfdfc6x5pz37mrm2msqyc9r0000gn/T/pip-build-env-7xkq2oba/overlay/lib/python3.10/site-packages/setuptools/build_meta.py", line 341, in run_setup
          exec(code, locals())
        File "<string>", line 23, in <module>
      ModuleNotFoundError: No module named 'torch'
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
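
For context: the failure is in building xformers from source. pip falls back to the sdist here (there is no usable prebuilt wheel for this xformers version on macOS arm64), and the package's setup script imports torch inside pip's isolated build environment, where torch is not installed, even though torch 2.1.0 is already in the venv. If you do want xformers, one possible workaround is to retry with pip's --no-build-isolation flag so the build can see the venv's torch; first confirm torch is importable there, e.g.:

import sys, torch
print(sys.executable)     # should point into ./venv
print(torch.__version__)  # should print 2.1.0

On Apple Silicon, though, xformers is mostly useful with CUDA, so dropping it from the requirements (as the maintainer does later in this thread) is the simpler fix.
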
@ZhelenZ

ZhelenZ commented Nov 13, 2023

Same error on macOS M2.

@enzyme69
Author

Yes, I am on an M2 Mac. However, try also using Pinokio to do the install. That seems successful, but in my case my Mac Mini has no camera, so I am using an iPhone, and it is failing to recognize the camera.

So there are a few issues:

  1. The install fails if I do it through Terminal or Warp. The install does seem to work when using Pinokio, and it runs the Gradio app, but there is no webcam stream.
  2. The iPhone is not recognized as a camera, and I cannot stream via the OBS method either.

@enzyme69
Author

I solved the webcam issue:
#10 (comment)

However, the install issue still persists, so I am using the Pinokio method.

@radames
Owner

radames commented Nov 13, 2023

Hi @enzyme69, I just fixed it by removing xformers from the requirements on macOS; please try it again:
ee4d659
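
For anyone maintaining their own fork of the requirements, one common way to express a platform-conditional dependency like this (shown as an illustration, not necessarily the exact change in ee4d659) is a PEP 508 environment marker:

xformers; sys_platform != "darwin"

With that marker, pip skips xformers on macOS and still installs it on Linux/Windows.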

@enzyme69
Author

I still get this issue with the self-install:

Process SpawnProcess-1:
Traceback (most recent call last):
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/Users/jimmygunawan/Documents/LCMREALTIME/Real-Time-Latent-Consistency-Model/app-controlnet.py", line 16, in <module>
    from diffusers import AutoencoderTiny, ControlNetModel
ImportError: cannot import name 'AutoencoderTiny' from 'diffusers' (/Users/jimmygunawan/.local/lib/python3.10/site-packages/diffusers/__init__.py)

Comparing it with the copy running under the Pinokio install, which actually works.

@radames
Owner

radames commented Nov 13, 2023

Can you please double-check the diffusers version installed locally? For example: [screenshot]
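
A minimal check along those lines (the original screenshot is not reproduced; this is an equivalent sketch):

import diffusers
print(diffusers.__version__)  # the requirements pin 0.23.0
print(diffusers.__file__)     # shows which site-packages directory it is loaded from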

@enzyme69
Author

I get this:
ImportError: cannot import name 'AutoencoderTiny' from 'diffusers' (/Users/jimmygunawan/venv/lib/python3.9/site-packages/diffusers/__init__.py)

@enzyme69
Author

diffusers.__version__
'0.16.1'

@radames
Owner

radames commented Nov 14, 2023

Oh @enzyme69, that's a very old diffusers version; please make sure you're on 0.23.0!

@enzyme69
Author

@radames Aha, I'll try updating, but this is what I got from installing your requirements.

@radames
Owner

radames commented Nov 14, 2023

It looks like you're using conda, right?

@enzyme69
Author

enzyme69 commented Nov 14, 2023

I might be using conda, but I also created a venv.

I previously had my Python environments mixed up (I am not a coder, but I follow the instructions):

ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
imaginairy 12.0.2 requires torch<2.0.0,>=1.13.1, but you have torch 2.1.0 which is incompatible.
Successfully installed Pillow-10.1.0 accelerate-0.24.0 aiofiles-23.2.1 altair-5.1.2 annotated-types-0.6.0 anyio-3
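
A mixed setup like this is the usual reason a version check and the actual import disagree: the tracebacks in this thread load diffusers from /Users/jimmygunawan/.local/lib/python3.10/site-packages, which is pip's per-user site directory. Packages installed there are visible to the conda Python (user site is normally disabled inside a plain venv) and typically take precedence over the same package installed in the environment itself. A quick diagnostic sketch to see what the running interpreter actually picks up:

import site, sys
print(sys.executable)              # which python binary is running
print(sys.prefix)                  # the venv / conda prefix in use
print(site.ENABLE_USER_SITE)       # whether user site-packages are added to sys.path
print(site.getusersitepackages())  # e.g. ~/.local/lib/python3.10/site-packages

If an old diffusers turns up there, uninstalling it with the same interpreter (for example python3.10 -m pip uninstall diffusers, outside any venv) is one way to clear the conflict.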

@enzyme69
Author

Running it once again, I am still getting the same issue.

I thought I had resolved the versioning. Weird.

Process SpawnProcess-1:
Traceback (most recent call last):
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/importer.py", line 24, in import_from_string
    raise exc from None
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/Users/jimmygunawan/miniconda3/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/Users/jimmygunawan/Documents/LCMREALTIME/Real-Time-Latent-Consistency-Model/app-img2img.py", line 16, in <module>
    from diffusers import AutoPipelineForImage2Image, AutoencoderTiny
ImportError: cannot import name 'AutoencoderTiny' from 'diffusers' (/Users/jimmygunawan/.local/lib/python3.10/site-packages/diffusers/__init__.py)

@enzyme69
Author

I used Warp AI and did some troubleshooting on my conda setup, which seemed to be affecting my env:

conda update conda
conda install diffusers
conda list --revisions

Now it seems to be working!
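
For anyone landing here later, a quick way to confirm the active environment now has a new enough diffusers (a minimal check, assuming the diffusers==0.23.0 pin from requirements.txt above):

from importlib.metadata import version
print(version("diffusers"))  # should report 0.23.0 or newer
# the imports that were failing earlier in the thread
from diffusers import AutoencoderTiny, AutoPipelineForImage2Image
print("imports OK")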
