## Linux

Please follow the main README.md of this repo.

## Windows

Windows users may stumble over installing the package `triton`. As an alternative, you can run on CPU without `xformers` and `triton` installed.

To use CUDA, please refer to issue#24 to try to solve the `triton` installation problem.
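If `triton` cannot be installed, falling back to CPU at runtime is straightforward. The helper below is a minimal sketch (not code from this repo) that picks a device string and degrades gracefully when `torch` or CUDA is unavailable:

```python
import importlib.util


def pick_device() -> str:
    """Pick a torch device string, falling back to "cpu".

    Illustrative helper, not part of the repo: it assumes torch may
    or may not be installed in the current environment.
    """
    if importlib.util.find_spec("torch") is None:
        return "cpu"
    import torch

    # Use CUDA only when the driver and toolkit are actually available.
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"


print(pick_device())
```

The returned string can then be passed to the inference script's `--device` argument.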

## MacOS

You can follow the steps below to run on the CPU or the MPS device.

1. Install torch (Preview/Nightly version).

   ```shell
   # MPS acceleration is available on MacOS 12.3+
   pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/cpu
   ```

   Check the official PyTorch documentation for more details.
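Once installed, you can confirm that the build exposes the MPS backend. This check is a small sketch (the function name is illustrative) that also handles the case where `torch` is missing:

```python
import importlib.util


def mps_status() -> str:
    """Report MPS availability; degrades gracefully if torch is absent.

    Illustrative helper, not part of the repo.
    """
    if importlib.util.find_spec("torch") is None:
        return "torch-not-installed"
    import torch

    # torch.backends.mps exists on recent builds; guard for older ones.
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and torch.backends.mps.is_available():
        return "mps-available"
    return "mps-unavailable"


print(mps_status())
```

If this reports that MPS is unavailable, you can still proceed with `--device cpu`.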

2. The packages `triton` and `xformers` are not needed, since they only work with CUDA. Remove them from the requirements.

   Your requirements.txt should look like:

   ```
   # requirements.txt
   pytorch_lightning==1.4.2
   einops
   open-clip-torch
   omegaconf
   torchmetrics==0.6.0
   opencv-python-headless
   scipy
   matplotlib
   lpips
   gradio
   chardet
   transformers
   facexlib
   ```

   Then install the dependencies:

   ```shell
   pip install -r requirements.txt
   ```
3. Run the inference script and specify `--device cpu` or `--device mps`. Using MPS can accelerate your inference.

   You can specify `--tiled` and related arguments to avoid OOM.
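Putting the flags together, an invocation might look like the sketch below. The script name `inference.py` is an assumption; check the repo's README for the actual entry point. The sketch is guarded so it is safe to copy even outside the repo root:

```shell
# Hypothetical invocation (script name is an assumption; see the repo README).
if [ -f inference.py ]; then
  python inference.py --device mps --tiled
else
  echo "inference.py not found; run this from the repo root"
fi
```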