
Conversation

@santisbon
Contributor

Includes documentation on how to build the Docker image and run it on a Mac with an Apple silicon chip (M1/M2).
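For readers skimming the thread, the workflow the docs cover boils down to something like the sketch below (the image tag, output path, and Dockerfile location are placeholders for illustration, not necessarily what the docs use):

```sh
# Build the image on an Apple silicon Mac (Docker resolves linux/arm64 by default on M1/M2).
docker build -t stable-diffusion .

# Run it interactively; mount a host folder so generated images land outside the container.
docker run --rm -it -v ~/sd-outputs:/outputs stable-diffusion
```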

havardgulldahl and others added 22 commits September 6, 2022 07:07
Like probably many others, I have a lot of different virtualenvs, one for each project. Most of them are handled by `pyenv`.
After installing according to these instructions I had issues with `pyenv` and `miniconda` fighting over the `$PATH` of my system.
But then I stumbled upon this nice solution on SO: https://stackoverflow.com/a/73139031, upon which I have based my suggested changes.

It runs perfectly on my M1 setup, with the anaconda setup as a virtual environment handled by pyenv. 

Feel free to incorporate these instructions as you see fit. 

Thanks a million for all your hard work.
Co-authored-by: Henry van Megen <hvanmegen@gmail.com>
Fix: `anaconda3-latest` does not work, specify the correct virtualenv, add missing init.
…tion. (invoke-ai#482)

Tested on 8GB eGPU nvidia setup so YMMV.
512x512 output, max VRAM stays same.
@santisbon santisbon changed the title Support for creating a Docker image and running it on Mac Support for creating a Docker image and running it on Apple silicon Sep 10, 2022
@i3oc9i

i3oc9i commented Sep 10, 2022

I have a question. AFAIK Docker on macOS runs inside a virtual machine, which means less memory is available than on the host. So what is the benefit of running dream inside Docker?

@santisbon
Contributor Author

That's correct, Docker Desktop for Mac runs containers through a VM. The benefits I find are:

  • Being able to quickly spin up an instance of Stable Diffusion and see it in action without much manual configuration/troubleshooting effort.
  • Laying the foundation for setting it up in other container-based scenarios like a microservice.
  • Keeping it from being Mac-only or arm64-only; future contributions could add instructions for amd64 images/requirements.

@i3oc9i

i3oc9i commented Sep 10, 2022

Sorry, but I still have some doubts about the benefits.

1/
conda, pyenv, and other similar stuff are useful for creating separate environments, and you can easily spin up different environments whenever you need.

For example, you can use a script like the following to create new environments using conda.

Let's say you copy the following lines into a new-dream.sh file.

```sh
# default env name; can be overridden as shown below
ENV=${ENV:-dream}
conda env remove -n ${ENV}

rm -rf ./${ENV}
git clone https://github.com/lstein/stable-diffusion.git ${ENV}
cd ${ENV}

# replace 'development' with another branch name, or comment this out for the main branch
git checkout development

conda clean --yes --all
mkdir -p models/ldm/stable-diffusion-v1
cd models/ldm/stable-diffusion-v1

# change this to the path where you keep the model
ln -s ~/Code/Ai/Stuffs/models/sd-v1-4.ckpt model.ckpt

cd ../../..

PIP_EXISTS_ACTION=w CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yaml -n ${ENV}

conda activate ${ENV}
```

then run it with:

```sh
ENV=<env-name> source new-dream.sh
```

2/
A microservice is not a real use case I guess, because of the interactive nature of the prompt.
Also the resulting Docker image will be very large, almost 2GB, because you preload the models in the Dockerfile.

@santisbon
Contributor Author

The microservice approach comes in handy for the --web functionality if you want to set it up as a web server or an API. Since it already includes a simplified API, it is a promising area for future development.

Right now it starts to decouple the application from storage by keeping the largest .ckpt in a Docker volume instead of inside the container, and future enhancements can do the same for other model files. This also sets us on the path to taking advantage of other storage drivers like S3 (or other cloud storage) and continuing to decouple storage and compute. I'd like to address any other concerns you might have, and I'd be happy to include this in the documentation if it helps clarify the advantages/reasoning for users.
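To make the volume idea concrete, here's a rough sketch (the image name, mount path, and published port are assumptions for illustration, not the exact values in this PR):

```sh
# Keep the large .ckpt in a named volume instead of baking it into the image/container.
docker volume create sd-models

# Start the container with the volume mounted and the --web UI/API port published.
docker run --rm -it \
  -v sd-models:/data/models \
  -p 9090:9090 \
  stable-diffusion --web
```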

@i3oc9i

i3oc9i commented Sep 10, 2022

OK, I see the point, but keep in mind that for now the web server is only meant to run on the local machine, and a lot of things would need to be addressed to have it running in the cloud, for example user sessions, login, a clean user data store, upload/download of images, ...

@santisbon
Contributor Author

Definitely. This just lays the foundation to be able to do all that: building an API gateway to put in front of it for authentication, rate limiting, etc., scaling Stable Diffusion elastically to meet demand, reading and writing from cloud storage, and so on. I see this as the first step. What do you think?

@santisbon
Contributor Author

@tildebyte quick question: in the changes you're making to get rid of conda, how are you handling arm64 machines that currently use the conda environment config and/or the nomkl mutex metapackage from conda-forge? The extracted contents of the nomkl archive don't look like something you can install with pip.

I need to do the same, and if this is something that has already been solved I'd like to avoid reinventing the wheel. If there's no way to do this, we may need to use conda in this specific scenario.

@tildebyte
Contributor

tildebyte commented Sep 15, 2022

@santisbon;

how are you handling ... nomkl mutex metapackage

Thanks, good catch; I'm not 😂😭

EDIT: taking a quick look... at least in this repo the only affected package should be numpy, and numpy defaults to OpenBLAS, so I don't understand why that package is even in there.

@santisbon
Contributor Author

It might work on macOS on arm64 with just the restrictions conda puts in place with its env variable and config, even without nomkl (I haven't tested it). It does work on Linux on arm64, but without conda we may be out of options.
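For reference, the conda-side restrictions I mean look roughly like this (a sketch; the env name is arbitrary, and environment-mac.yaml is the project's Mac environment file):

```sh
# Force conda to resolve osx-arm64 packages when creating the environment...
CONDA_SUBDIR=osx-arm64 conda env create -f environment-mac.yaml -n dream
conda activate dream

# ...and persist that choice inside the environment so later installs stay arm64.
conda config --env --set subdir osx-arm64
```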

@i3oc9i

i3oc9i commented Sep 15, 2022

Probably you don't need to install anaconda in the Docker image, because a Docker container is already a separate environment, but maybe I'm missing something?

@santisbon
Contributor Author

santisbon commented Sep 15, 2022

Probably you don't need to install anaconda in the Docker image, because a Docker container is already a separate environment, but maybe I'm missing something?

The container (or any local environment) running on an arm64 chipset like Apple M2 or a Linux/Windows ARM laptop needs a way to make sure the dependencies that are pulled in are the ones for the correct architecture. Conda can do this, and nomkl helps too. nomkl is a conda package (actually a mutex metapackage) from the conda-forge channel.
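In an environment file, its use looks something like this fragment (illustrative only, not the project's actual environment-mac.yaml):

```yaml
channels:
  - conda-forge
dependencies:
  - nomkl    # mutex metapackage: keeps MKL-linked builds (e.g. of numpy) out of the env
  - numpy
```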

@tildebyte
Contributor

needs a way to make sure dependencies that are pulled in are the ones for the correct architecture

pip should do this by itself, but regardless, it can be done simply by setting up the requirements.txt correctly.

This is also a little bit of a straw person. The vast majority of the package requirements for this project are pure source (arch-independent). As I mentioned, in this project, the ONLY thing which nomkl does is prevent installing numpy linked against Intel's MKL library - which is already the default (i.e. pip installs the OpenBLAS cross-platform version by default).
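If a requirement ever did need to differ by architecture, a plain requirements.txt can express that with PEP 508 environment markers; a hypothetical fragment (the package names are placeholders, not this project's pins):

```text
numpy                                                 # pip picks the wheel for the host arch (OpenBLAS build)
some-accelerator-lib; platform_machine == "x86_64"    # only installed on Intel/AMD machines
```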

@santisbon
Contributor Author

@i3oc9i see more about this project's use of the conda subdir and nomkl in the Mac instructions and the Mac requirements file. There are probably already in-depth, non-Docker-related discussions on those that you can find.

@tildebyte
Contributor

@santisbon;

Probably you don't need to install anaconda in the Docker image

I would like you to test this out. I really do not believe that the only way to make this work in a Docker container on ARM is by complexifying it with conda; pip should be more than sufficient.

@santisbon
Contributor Author

I would like you to test this out.

I'm already testing it, but no luck so far. I'll share more when I have more info.

hipsterusername and others added 7 commits September 16, 2022 17:31
* updating with Adobe instructions & assets

* Assets for Adobe guide

* correcting paths
Set temporary instructions to use the branch that can currently be
containerized.
* Added linux to the workflows

- rename workflow files

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>

* fixes: run on merge to 'main', 'dev'; 

- reduce dev merge test cases to 1 (1 takes 11 minutes 😯)
- fix model cache name

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>

* add test prompts to workflows

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>

Signed-off-by: Ben Alkov <ben.alkov@gmail.com>
Co-authored-by: James Reynolds <magnsuviri@me.com>
@santisbon santisbon mentioned this pull request Sep 17, 2022
Collaborator

@lstein lstein left a comment


LGTM

@lstein lstein merged commit 3bc4050 into invoke-ai:development Sep 20, 2022