diff --git a/README.md b/README.md
index f38ea6cd..cc8e617b 100644
--- a/README.md
+++ b/README.md
@@ -23,9 +23,13 @@ FastFold provides a **high-performance implementation of Evoformer** with the fo
 
 You will need Python 3.8 or later and [NVIDIA CUDA](https://developer.nvidia.com/cuda-downloads) 11.1 or above when you are installing from source.
 
+```shell
+git clone https://github.com/hpcaitech/FastFold
+cd FastFold
+```
 We highly recommend installing an Anaconda or Miniconda environment and install PyTorch with conda:
 
-```
+```shell
conda env create --name=fastfold -f environment.yml
conda activate fastfold
bash scripts/patch_openmm.sh
@@ -34,8 +38,6 @@ bash scripts/patch_openmm.sh
 You can get the FastFold source and install it with setuptools:
 
 ```shell
-git clone https://github.com/hpcaitech/FastFold
-cd FastFold
 python setup.py install
 ```
 
@@ -56,6 +58,14 @@ from fastfold.distributed import init_dap
 init_dap(args.dap_size)
 ```
 
+### Download the dataset
+
+You can download the dataset used to train FastFold with the script `download_all_data.sh`:
+
+```shell
+./scripts/download_all_data.sh data/
+```
+
 ### Inference
 
 You can use FastFold with `inject_fastnn`. This will replace the evoformer from OpenFold with the high performance evoformer from FastFold.
diff --git a/environment.yml b/environment.yml
index bdb980ed..a3addd15 100644
--- a/environment.yml
+++ b/environment.yml
@@ -16,7 +16,9 @@ dependencies:
   - typing-extensions==3.10.0.2
   - einops
   - colossalai
-  - pytorch::pytorch=1.11.0
+  - --find-links https://download.pytorch.org/whl/cu113/torch_stable.html torch==1.12.1+cu113
+  - --find-links https://download.pytorch.org/whl/cu113/torch_stable.html torchaudio==0.12.1+cu113
+  - --find-links https://download.pytorch.org/whl/cu113/torch_stable.html torchvision==0.13.1+cu113
   - conda-forge::python=3.8
   - conda-forge::setuptools=59.5.0
   - conda-forge::pip
diff --git a/requirements.txt b/requirements.txt
index da2f8471..57b3ef08 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -1,2 +1,6 @@
+--find-links https://download.pytorch.org/whl/cu113/torch_stable.html
+torch==1.12.1+cu113
+torchaudio==0.12.1+cu113
+torchvision==0.13.1+cu113
 einops
-colossalai
\ No newline at end of file
+colossalai==0.1.8