
How I got this repo (entirely) running with pip (no conda) on M1 Mac #302

Closed
stillmatic opened this issue Sep 1, 2022 · 23 comments

@stillmatic

stillmatic commented Sep 1, 2022

Update: everything is working great

  1. Follow the instructions in https://replicate.com/blog/run-stable-diffusion-on-m1-mac to set up your environment, but of course clone this repo instead.
  2. Use Python 3.10, not the suggested 3.9 (there is a bug with TypeAlias, as discussed in this thread).
  3. Apply chore: update requirements.txt to run on m1 w/pip #337 or check out my fork directly to install the updates.
  4. pip install -r requirements.txt; pip install -e .
  5. PYTORCH_ENABLE_MPS_FALLBACK=1 python scripts/dream.py --full_precision -A plms --web

You should see a warning >> cuda not available, using device mps - this is good! On a fully loaded M1 Max, this runs at about 1.5 it/s with k_euler or 3.91 it/s with PLMS.
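
Before launching dream.py, a quick way to confirm that PyTorch can actually see the MPS backend is a short check like the one below (a minimal sketch, assuming the nightly PyTorch build from the steps above is installed):

```python
import torch

# Confirm the MPS backend is compiled into this PyTorch build and usable here.
print("MPS built:    ", torch.backends.mps.is_built())
print("MPS available:", torch.backends.mps.is_available())

if torch.backends.mps.is_available():
    # Allocate a tiny tensor on the GPU to make sure the device really works.
    x = torch.ones(3, device="mps")
    print("Tensor on MPS:", x)
```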


I got various errors attempting to run the main repo as-is (commit 3ee82d8) with pip install instead of conda install; this laptop doesn't have conda on it at all.

The workaround is to set up a Python 3.10 env and update requirements.txt:

-numpy==1.19.2
+numpy==1.23.1
+--pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
 omegaconf==2.1.1
-opencv-python==4.1.2.30
+opencv-python==4.6.0.66

You also need to run pip install -e . so that ldm can be found.

This enables me to run scripts and get output:

python ./scripts/orig_scripts/txt2img.py --prompt "ocean" --ddim_steps 5 --n_samples 1 --n_iter 1

[attached output image: grid-0000]

✨ ✨ ✨ ✨ (output isn't great, but it works!)

However, dream.py has trouble running, even with the --full_precision flag. It shows errors like:

RuntimeError: expected scalar type BFloat16 but found Float

Searching the issues, I see this is possibly due to hardcoded assumptions about CPU vs. GPU (see #44 (comment)): somewhere internally the code expects half precision on GPU and full precision on CPU. Since the other scripts work but dream.py doesn't, is there somewhere in dream.py with similar hardcoded assumptions that we need to update? -- this is fixed in #319
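
For illustration, the kind of device/precision selection such a fix boils down to looks roughly like this (a sketch with a hypothetical helper name, not the actual code from #319):

```python
import torch

def choose_device_and_dtype(full_precision: bool = False):
    """Pick a torch device and dtype: prefer CUDA, then MPS, then CPU.

    Hypothetical helper for illustration only. Half precision is assumed
    safe only on CUDA, so MPS and CPU use float32 regardless of the flag.
    """
    if torch.cuda.is_available():
        device = torch.device("cuda")
    elif torch.backends.mps.is_available():
        device = torch.device("mps")
    else:
        device = torch.device("cpu")

    use_half = (device.type == "cuda") and not full_precision
    dtype = torch.float16 if use_half else torch.float32
    return device, dtype
```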


Everything runs great now on M1, though I have to set the env var PYTORCH_ENABLE_MPS_FALLBACK=1. So, running

PYTORCH_ENABLE_MPS_FALLBACK=1 python scripts/dream.py --full_precision -A plms --web

works great!

@ricardobeat

The repo was working before the 'toffaletti-dream-m1' branch was merged; now I also get errors in master:

cannot import name 'TypeAlias' from 'typing'

TypeAlias was only added to the typing module in Python 3.10. It looks like this branch expects Python 3.10, but the conda environment has not been upgraded to match.

@stillmatic
Author

@ricardobeat would updating that import to use typing_extensions help? It should backport TypeAlias.

Though, yes, it is generally quite confusing whether you should use 3.9 or 3.10. I tried 3.9 and ran into trouble installing opencv-python, but things are mostly working with 3.10 now.

@jordanfbrown

I ran into the TypeAlias issue on Python 3.9 and importing it from typing_extensions got me past that issue:

from typing import Optional, Callable
from typing_extensions import TypeAlias
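
A version-guarded variant avoids taking a hard dependency on typing_extensions on Python 3.10+ (a sketch of an alternative, not what the thread's fix or the eventual PR does):

```python
import sys

if sys.version_info >= (3, 10):
    # TypeAlias is part of the standard library from Python 3.10 onward.
    from typing import Optional, Callable, TypeAlias
else:
    # Backport for Python 3.9 and earlier.
    from typing import Optional, Callable
    from typing_extensions import TypeAlias
```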

@yousifa

yousifa commented Sep 1, 2022

I ran into the TypeAlias issue on Python 3.9 and importing it from typing_extensions got me past that issue:

from typing import Optional, Callable
from typing_extensions import TypeAlias

Same. Will submit PR for this.

@donut

donut commented Sep 1, 2022

from typing import Optional, Callable
from typing_extensions import TypeAlias

@jordanfbrown Where should I put this? I tried it at various places in scripts/dream.py to no avail.

@stillmatic
Author

For the TypeAlias issue - could any of you post a fuller stack trace? I don't actually see TypeAlias in the main repo at all (and I am on Python 3.10, running txt2img.py without issue).

stillmatic changed the title from "some notes on pip installing on M1 Mac (80% working?)" to "How I got this repo (mostly) running with pip (no conda) on M1 Mac" on Sep 1, 2022
@yousifa

yousifa commented Sep 1, 2022

It's in the k-diffusion dependency, in "src/k-diffusion/k_diffusion/sampling.py".

@jordanfbrown

As @yousifa mentioned, it's coming from this line: https://github.com/Birch-san/k-diffusion/blob/mps/k_diffusion/sampling.py#L10

Full stack trace on Python 3.9:

Traceback (most recent call last):
  File "/Users/jordan/stable-diffusion/scripts/dream.py", line 553, in <module>
    main()
  File "/Users/jordan/stable-diffusion/scripts/dream.py", line 41, in main
    from ldm.simplet2i import T2I
  File "/Users/jordan/stable-diffusion/ldm/simplet2i.py", line 29, in <module>
    from ldm.models.diffusion.ksampler import KSampler
  File "/Users/jordan/stable-diffusion/ldm/models/diffusion/ksampler.py", line 2, in <module>
    import k_diffusion as K
  File "/Users/jordan/stable-diffusion/src/k-diffusion/k_diffusion/__init__.py", line 1, in <module>
    from . import augmentation, config, evaluation, external, gns, layers, models, sampling, utils
  File "/Users/jordan/stable-diffusion/src/k-diffusion/k_diffusion/external.py", line 6, in <module>
    from . import sampling, utils
  File "/Users/jordan/stable-diffusion/src/k-diffusion/k_diffusion/sampling.py", line 10, in <module>
    from typing import Optional, Callable, TypeAlias
ImportError: cannot import name 'TypeAlias' from 'typing' (/Users/jordan/opt/miniconda3/envs/ldm/lib/python3.9/typing.py)

@yousifa

yousifa commented Sep 1, 2022

It seems that this typing import is being added when installing the dependency, as it is not in sampling.py in the k-diffusion repo either.

@stillmatic
Author

stillmatic commented Sep 1, 2022

opened PR in the upstream repo: Birch-san/k-diffusion#1

could also try replacing the dep in environment-mac.yaml with my fork

- -e git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k-diffusion
+ -e git+https://github.com/stillmatic/k-diffusion.git@patch-1#egg=k-diffusion

@yousifa

yousifa commented Sep 1, 2022

Maybe we should add a patch to be applied after dependency install in this repo, and pin k-diffusion; it's at head right now.

@irrg

irrg commented Sep 2, 2022

opened PR in the upstream repo: Birch-san/k-diffusion#1

could also try replacing the dep in environment-mac.yaml with my fork

- -e git+https://github.com/Birch-san/k-diffusion.git@mps#egg=k-diffusion
+ -e git+https://github.com/stillmatic/k-diffusion.git@patch-1#egg=k-diffusion

@stillmatic with this edit, I just get an infinite spinner on Installing pip dependencies:

my PIP block in environment-mac.yaml looks like this:

  - pip:
    - invisible-watermark
    - test-tube
    - tokenizers
    - torch-fidelity
    - -e git+https://github.com/huggingface/diffusers.git@v0.2.4#egg=diffusers
    - -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
    - -e git+https://github.com/openai/CLIP.git@main#egg=clip
    - -e git+https://github.com/stillmatic/k-diffusion.git@patch-1#egg=k-diffusion    
    - -e .

PEBKAC? Did I mess up the edit?

@yousifa

yousifa commented Sep 2, 2022

@stillmatic with this edit, I just get an infinite spinner on Installing pip dependencies:

PEBKAC? Did I mess up the edit?

Just tried it. Worked for me. Maybe you messed up the formatting in the file.

@jeffomatic

with this edit, I just get an infinite spinner on Installing pip dependencies:

I'm observing this behavior as well.

@yousifa

yousifa commented Sep 2, 2022

with this edit, I just get an infinite spinner on Installing pip dependencies:

I'm observing this behavior as well.

Have you tried adding the changes in the file after installation?

@jeffomatic

Have you tried adding the changes in the file after installation?

Not sure if I follow? The change in question is in a list of pip dependencies, so it wouldn't be effective unless it was applied before attempting to install those dependencies.

@yousifa

yousifa commented Sep 2, 2022

Have you tried adding the changes in the file after installation?

Not sure if I follow? The change in question is in a list of pip dependencies, so it wouldn't be effective unless it was applied before attempting to install those dependencies.

After a normal installation, go to the file "src/k-diffusion/k_diffusion/sampling.py".

change:
from typing import Optional, Callable, TypeAlias

to:

from typing import Optional, Callable
from typing_extensions import TypeAlias

@jeffomatic

I don't understand conda or pip well enough to identify the root cause, but there are some reports of conda getting stuck on installing pip dependencies. @stillmatic's diff looks pretty innocuous, but I guess there could be some issue with introducing typing_extensions as a new dependency.

At any rate, the workaround I came up with is as follows:

  1. Remove the k-diffusion dependency from environment-mac.yaml
  2. Run the following as documented:
    CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yaml
    conda activate ldm
    
  3. Run pip install git+https://github.com/stillmatic/k-diffusion.git@patch-1#egg=k-diffusion

@magnusviri
Contributor

@yousifa when conda env create hangs, you need to use export PIP_EXISTS_ACTION=w or just remove src/k-diffusion. It's discussed here.

@irrg

irrg commented Sep 2, 2022

  3. Run pip install git+https://github.com/stillmatic/k-diffusion.git@patch-1#egg=k-diffusion

@jeffomatic getting 'no matches found' on step 3. Maybe that's the root of my issue?

BTW, to all, wanted to mention I appreciate all the lively helpful feedback in this thread. A+.

@jeffomatic

jeffomatic commented Sep 2, 2022

Here's what's working for me right now:

  1. Apply this diff to environment-mac.yaml.

  2. When creating the conda environment, prefix with PIP_EXISTS_ACTION=w, as @magnusviri and others have noted:

    PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yaml
    

@irrg Probably a moot point given the above, but I wasn't able to repro the "no matches found" issue you ran into. Here's what it looks like from my terminal:

% pip install git+https://github.com/stillmatic/k-diffusion.git@patch-1#egg=k-diffusion
Collecting k-diffusion
  Cloning https://github.com/stillmatic/k-diffusion.git (to revision patch-1) to /private/var/folders/th/whnv_pds4l59z7zswqwm53j40000gn/T/pip-install-2gtzagrz/k-diffusion_f5535acccbfb436788d8bc6fdd0b6eb8
  Running command git clone --filter=blob:none --quiet https://github.com/stillmatic/k-diffusion.git /private/var/folders/th/whnv_pds4l59z7zswqwm53j40000gn/T/pip-install-2gtzagrz/k-diffusion_f5535acccbfb436788d8bc6fdd0b6eb8
  Running command git checkout -b patch-1 --track origin/patch-1
  Switched to a new branch 'patch-1'
  Branch 'patch-1' set up to track remote branch 'patch-1' from 'origin'.
  Resolved https://github.com/stillmatic/k-diffusion.git to commit bd00ffefb6e7212806e1653fc2a60a35618e918d
  ...

@FrenchBen

@stillmatic some of the steps have been replicated and work quite well on an M1 Mac:

https://replicate.com/blog/run-stable-diffusion-on-m1-mac

@stillmatic
Author

@FrenchBen yep - that's a great guide, and I followed it to get set up on their fork; this is what I had to do to similarly get this fork running.

magnusviri referenced this issue Sep 3, 2022
I'm using stable-diffusion on a 2022 M2 MacBook Air with 24 GB of unified memory.
I see this taking about 2.0s/it.

I've moved many deps from pip to conda-forge, to take advantage of the
precompiled binaries. Some notes for Mac users, since I've seen a lot of
confusion about this:

One doesn't need the `apple` channel to run this on a Mac-- that's only
used by `tensorflow-deps`, required for running tensorflow-metal. For
that, I have an example environment.yml here:

https://developer.apple.com/forums/thread/711792?answerId=723276022#723276022

However, the `CONDA_SUBDIR=osx-arm64` environment variable *is* needed to
ensure that you do not run any Intel-specific packages such as `mkl`,
which will fail with [cryptic errors](CompVis/stable-diffusion#25 (comment))
on the ARM architecture and cause the environment to break.

I've also added a comment in the env file about 3.10 not working yet.
When it becomes possible to update, those commands run on an osx-arm64
machine should work to determine the new version set.

Here's what a successful run of dream.py should look like:

```
$ python scripts/dream.py --full_precision
* Initializing, be patient...

Loading model from models/ldm/stable-diffusion-v1/model.ckpt
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
making attention of type 'vanilla' with 512 in_channels
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
making attention of type 'vanilla' with 512 in_channels
Using slower but more accurate full-precision math (--full_precision)
>> Setting Sampler to k_lms
model loaded in 6.12s

* Initialization done! Awaiting your command (-h for help, 'q' to quit)
dream> "an astronaut riding a horse"
Generating:   0%|                                                                                                                                                                         | 0/1 [00:00<?, ?it/s]/Users/corajr/Documents/lstein/ldm/modules/embedding_manager.py:152: UserWarning: The operator 'aten::nonzero' is not currently supported on the MPS backend and will fall back to run on the CPU. This may have performance implications. (Triggered internally at /Users/runner/work/_temp/anaconda/conda-bld/pytorch_1662016319283/work/aten/src/ATen/mps/MPSFallback.mm:11.)
  placeholder_idx = torch.where(
100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 50/50 [01:37<00:00,  1.95s/it]
Generating: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [01:38<00:00, 98.55s/it]
Usage stats:
   1 image(s) generated in 98.60s
   Max VRAM used for this generation: 0.00G
Outputs:
outputs/img-samples/000001.1525943180.png: "an astronaut riding a horse" -s50 -W512 -H512 -C7.5 -Ak_lms -F -S1525943180
```
stillmatic changed the title from "How I got this repo (mostly) running with pip (no conda) on M1 Mac" to "How I got this repo (entirely) running with pip (no conda) on M1 Mac" on Sep 3, 2022
lstein closed this as completed on Sep 17, 2022