
FlexFlow


FlexFlow is a deep learning framework that accelerates distributed DNN training by automatically searching for efficient parallelization strategies. FlexFlow provides a drop-in replacement for PyTorch and TensorFlow Keras: running an existing PyTorch or Keras program on FlexFlow requires changing only a few lines of code.

Install FlexFlow

To install FlexFlow from source code, please read the instructions. If you would like to try FlexFlow quickly, we also provide pre-built Docker packages (flexflow-cuda with a CUDA backend, flexflow-hip_rocm with a HIP-ROCm backend) with all dependencies pre-installed, together with Dockerfiles if you wish to build the containers manually. Note that the pre-built CUDA containers are currently only fully compatible with host machines that have CUDA 11.7 installed. You can also use conda to install the FlexFlow Python package (coming soon).

PyTorch Support

Users can also use FlexFlow to optimize the parallelization performance of existing PyTorch models in two steps. First, export a PyTorch model to the FlexFlow model format using flexflow.torch.fx.torch_to_flexflow:

import torch
import flexflow.torch.fx as fx

# MyPyTorchModule is any user-defined torch.nn.Module (see the sketch below)
model = MyPyTorchModule()
# Export the model to the FlexFlow model format
fx.torch_to_flexflow(model, "mymodel.ff")
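
In the snippet above, MyPyTorchModule stands for any torch.nn.Module you already have. As a purely illustrative placeholder (the class name, layers, and sizes below are not part of FlexFlow), it could be as simple as:

import torch.nn as nn

class MyPyTorchModule(nn.Module):
    def __init__(self):
        super().__init__()
        # A small classifier for 32x32 RGB images; sizes are arbitrary
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 32 * 32, 128),
            nn.ReLU(),
            nn.Linear(128, 10),
        )

    def forward(self, x):
        return self.net(x)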

Second, a FlexFlow program can directly import a previously saved PyTorch model and autotune its parallelization performance for a given parallel machine:

from flexflow.pytorch.model import PyTorchModel
from flexflow.keras.datasets import cifar10  # dataset helper shipped with FlexFlow

def top_level_task():
    # ffmodel (an FFModel) and input_tensor are assumed to be created earlier
    # in the task; see the sketch after this snippet for one way to do that
    torch_model = PyTorchModel("mymodel.ff")
    output_tensor = torch_model.apply(ffmodel, input_tensor)
    ## Model compilation
    ffmodel.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    ## Model training
    # load_data() returns train and test splits (Keras-style)
    (x_train, y_train), _ = cifar10.load_data()
    ffmodel.fit(x_train, y_train, epochs=30)
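
For reference, ffmodel and input_tensor in the snippet above must be created earlier in top_level_task. The following is a minimal sketch based on the patterns in FlexFlow's Python examples; the names FFConfig, FFModel, create_tensor, and DataType are drawn from those examples and may differ across versions:

from flexflow.core import FFConfig, FFModel, DataType

ffconfig = FFConfig()        # picks up runtime flags such as the batch size
ffmodel = FFModel(ffconfig)
# A 4-D input tensor shaped (batch, channels, height, width) for CIFAR-10
input_tensor = ffmodel.create_tensor(
    [ffconfig.batch_size, 3, 32, 32], DataType.DT_FLOAT)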

More FlexFlow PyTorch examples: see the pytorch examples folder.

TensorFlow Keras and ONNX Support

FlexFlow prioritizes PyTorch compatibility, but also includes frontends for TensorFlow Keras and ONNX models.
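
As an illustration, the Keras frontend is designed to mirror the tf.keras API, so a FlexFlow Keras program might look like the sketch below. The layer and module names are assumed to match the tf.keras names the frontend emulates; consult the Keras examples folder for the exact API:

from flexflow.keras.models import Sequential
from flexflow.keras.layers import Dense

# Build a small MLP; the Sequential API mirrors tf.keras
model = Sequential()
model.add(Dense(128, input_shape=(784,), activation='relu'))
model.add(Dense(10, activation='softmax'))
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(...) then follows the same pattern as tf.keras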

C++ Interface

For users who prefer to program in C/C++, FlexFlow supports a C++ programming interface that is equivalent to its Python APIs.

More FlexFlow C++ examples: see the C++ examples folder.

Command-Line Flags

In addition to setting runtime configurations in a FlexFlow Python/C++ program, the FlexFlow runtime also accepts command-line arguments for various runtime parameters:

FlexFlow training flags:

  • -e or --epochs: number of total epochs to run (default: 1)
  • -b or --batch-size: global batch size in each iteration (default: 64)
  • -p or --print-freq: print frequency (default: 10)
  • -d or --dataset: path to the training dataset. If not set, synthetic data is used to conduct training.

Legion runtime flags:

  • -ll:gpu: number of GPU processors to use on each node (default: 0)
  • -ll:fsize: size of device memory on each GPU (in MB)
  • -ll:zsize: size of zero-copy memory (pinned DRAM with direct GPU access) on each node (in MB). This is used for prefetching training images from disk.
  • -ll:cpu: number of data loading workers (default: 4)
  • -ll:util: number of utility threads to create per process (default: 1)
  • -ll:bgwork: number of background worker threads to create per process (default: 1)

Performance auto-tuning flags:

  • --search-budget or --budget: the number of iterations for the MCMC search (default: 0)
  • --search-alpha or --alpha: a hyper-parameter for the search procedure (default: 0.05)
  • --export-strategy or --export: path to export the best discovered strategy (default: None)
  • --import-strategy or --import: path to import a previously saved strategy (default: None)
  • --enable-parameter-parallel: allow FlexFlow to explore parameter parallelism for performance auto-tuning. (By default FlexFlow only considers data and model parallelism.)
  • --enable-attribute-parallel: allow FlexFlow to explore attribute parallelism for performance auto-tuning. (By default FlexFlow only considers data and model parallelism.)

For more performance-tuning flags, see performance autotuning.
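
Putting these together, a typical launch through the flexflow_python launcher might look like the following (mnist.py and best_strategy.ff are hypothetical names, and the exact launcher invocation may differ by build):

flexflow_python mnist.py -ll:gpu 4 -ll:fsize 12000 -ll:zsize 8192 -b 64 -e 10 --budget 100 --export best_strategy.ff

This trains for 10 epochs with a global batch size of 64 on 4 GPUs per node, reserves 12000 MB of device memory per GPU and 8192 MB of zero-copy memory per node, runs 100 iterations of MCMC search, and exports the best discovered strategy to best_strategy.ff.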

Contributing

Please let us know if you encounter any bugs or have any suggestions by submitting an issue.

We welcome all contributions to FlexFlow, from bug fixes to new features and extensions.

Please subscribe to the FlexFlow users mailing list for updates.

The Team

FlexFlow is developed and maintained by teams at CMU, Facebook, Los Alamos National Lab, MIT, and Stanford (alphabetically).

License

FlexFlow is licensed under the Apache License 2.0.
