FastFold

Optimizing Protein Structure Prediction Model Training and Inference on GPU Clusters

FastFold provides a high-performance implementation of Evoformer with the following characteristics.

  1. Excellent kernel performance on GPU platforms
  2. Support for Dynamic Axial Parallelism (DAP)
    • Breaks the memory limit of a single GPU and reduces overall training time
    • DAP can significantly speed up inference and make ultra-long-sequence inference possible
  3. Ease of use
    • Huge performance gains with only a few lines of code changed
    • You don't need to care about how the parallelism is implemented

Installation

To install from source, you will need Python 3.8 or later and NVIDIA CUDA 11.1 or above.

We highly recommend creating an Anaconda or Miniconda environment and installing PyTorch with conda:

conda env create --name=fastfold -f environment.yml
conda activate fastfold
bash scripts/patch_openmm.sh

You can get the FastFold source and install it with setuptools:

git clone https://github.com/hpcaitech/FastFold
cd FastFold
python setup.py install
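
After installation, a quick import check (a minimal sanity test, not part of the official instructions) confirms that the package can be loaded:

python -c "from fastfold.model.fastnn import Evoformer; print('FastFold OK')"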

Usage

You can use Evoformer as an nn.Module in your project after importing it with from fastfold.model.fastnn import Evoformer:

from fastfold.model.fastnn import Evoformer
evoformer_layer = Evoformer()
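
Because Evoformer is a regular nn.Module, it composes with the rest of PyTorch as usual. A minimal sketch (the bare Evoformer() call mirrors the snippet above; whether constructor arguments are required depends on the FastFold version):

import torch.nn as nn
from fastfold.model.fastnn import Evoformer

# stack several Evoformer layers like any other PyTorch modules
evoformer_blocks = nn.ModuleList([Evoformer() for _ in range(4)])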

If you want to use Dynamic Axial Parallelism, add a single initialization call with fastfold.distributed.init_dap:

from fastfold.distributed import init_dap

init_dap(args.dap_size)
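
init_dap initializes Dynamic Axial Parallelism and is typically called once per process, before any FastFold modules are constructed. A minimal sketch of a DAP-enabled entry point (the argument parsing and the --dap_size flag name are illustrative, not part of the FastFold API; launch the script with torchrun, as in the inference example below):

import argparse
from fastfold.distributed import init_dap

parser = argparse.ArgumentParser()
parser.add_argument("--dap_size", type=int, default=2)  # illustrative flag, mirrors args.dap_size above
args = parser.parse_args()

# initialize Dynamic Axial Parallelism before building the model
init_dap(args.dap_size)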

Inference

You can enable FastFold in an existing OpenFold model with inject_fastnn, which replaces OpenFold's Evoformer with the high-performance Evoformer from FastFold. In the snippet below, AlphaFold, config, and import_jax_weights_ come from an OpenFold installation:

from openfold.model.model import AlphaFold
from openfold.utils.import_weights import import_jax_weights_
from fastfold.utils import inject_fastnn

model = AlphaFold(config)
import_jax_weights_(model, args.param_path, version=args.model_name)

# replace OpenFold's Evoformer with FastFold's fast implementation
model = inject_fastnn(model)
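
After injection, the model is still an ordinary PyTorch module, so the usual inference pattern applies. A minimal sketch (batch is a placeholder for the preprocessed feature dict produced by the OpenFold data pipeline):

import torch

model = model.eval().cuda()
with torch.no_grad():
    out = model(batch)  # batch: placeholder for preprocessed input features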

For Dynamic Axial Parallelism, you can refer to ./inference.py. Here is an example of parallel inference on 2 GPUs:

torchrun --nproc_per_node=2 inference.py target.fasta data/pdb_mmcif/mmcif_files/ \
    --output_dir ./ \
    --uniref90_database_path data/uniref90/uniref90.fasta \
    --mgnify_database_path data/mgnify/mgy_clusters_2018_12.fa \
    --pdb70_database_path data/pdb70/pdb70 \
    --uniclust30_database_path data/uniclust30/uniclust30_2018_08/uniclust30_2018_08 \
    --bfd_database_path data/bfd/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt \
    --jackhmmer_binary_path `which jackhmmer` \
    --hhblits_binary_path `which hhblits` \
    --hhsearch_binary_path `which hhsearch` \
    --kalign_binary_path `which kalign`

Performance Benchmark

We have included a performance benchmark script in ./benchmark. You can benchmark the performance of Evoformer using different settings.

cd ./benchmark
torchrun --nproc_per_node=1 perf.py --msa-length 128 --res-length 256

Benchmark Dynamic Axial Parallelism with 2 GPUs:

cd ./benchmark
torchrun --nproc_per_node=2 perf.py --msa-length 128 --res-length 256 --dap-size 2

If you want to benchmark against OpenFold, install OpenFold first and run with the --openfold option:

torchrun --nproc_per_node=1 perf.py --msa-length 128 --res-length 256 --openfold

Cite us

If you use FastFold in your research publication, please cite the following paper.

@misc{cheng2022fastfold,
      title={FastFold: Reducing AlphaFold Training Time from 11 Days to 67 Hours}, 
      author={Shenggan Cheng and Ruidong Wu and Zhongming Yu and Binrui Li and Xiwen Zhang and Jian Peng and Yang You},
      year={2022},
      eprint={2203.00854},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}