Fabric for Deep Learning (FfDL, pronounced "fiddle") is a Deep Learning platform offering TensorFlow, Caffe, PyTorch, etc. as a Service on Kubernetes.


Read this in other languages: Chinese (中文).


Fabric for Deep Learning (FfDL)

Latest: PyTorch 1.0 and ONNX support now in FfDL

This repository contains the core services of the FfDL (Fabric for Deep Learning) platform. FfDL is an operating system "fabric" for Deep Learning. It is a collaboration platform for:

  • Framework-independent training of Deep Learning models on distributed hardware
  • Open Deep Learning APIs
  • Hosting Deep Learning models in a user's private or public cloud

*FfDL architecture diagram*

To learn more about the architectural details, please read the design document. If you are looking for demos, slides, collateral, blogs, webinars, and other materials related to FfDL, you can find them here.

Prerequisites

Usage Scenarios

  • If you are getting started and want to set up your own FfDL deployment, please follow the steps below.
  • If you have an FfDL deployment up and running, you can jump to the FfDL User Guide to use FfDL for training your deep learning models.
  • If you want to leverage Jupyter notebooks to launch training on your FfDL cluster, please follow these instructions.
  • If you have FfDL configured to use GPUs and want to train with them, follow the steps here.
  • To invoke the Adversarial Robustness Toolbox to find vulnerabilities in your models, follow the instructions here.
  • To deploy your trained models, follow the integration guide with Seldon.
  • If you have used FfDL to train your models and want to use a GPU-enabled, public-cloud-hosted service for further training and serving, please follow the instructions here to train and serve your models using the Watson Studio Deep Learning service.

Steps

  1. Quick Start
  2. Test
  3. Monitoring
  4. Development
  5. Clean Up
  6. Troubleshooting
  7. References

1. Quick Start

There are multiple paths for installing FfDL into an existing Kubernetes cluster. Below are the steps for a quick install. If you want to follow more detailed step-by-step instructions, please visit the detailed installation guide.

If you are using a bash shell, you can modify the necessary environment variables in env.txt and export all of them using the following commands:

source env.txt
export $(cut -d= -f1 env.txt)
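The second command works because `cut -d= -f1` strips everything after the `=` on each line of env.txt, leaving just the variable names for `export` to mark. A minimal sketch of the pattern, using a throwaway file with placeholder variables (the real list lives in env.txt):

```shell
# Write a demo env file (illustrative only; not the full env.txt).
cat > /tmp/env-demo.txt <<'EOF'
VM_TYPE=dind
PUBLIC_IP=localhost
EOF

# `source` sets the variables in the current shell only.
source /tmp/env-demo.txt

# `cut -d= -f1` keeps the part before `=` on each line (the names),
# and `export` marks those names so child processes inherit them.
export $(cut -d= -f1 /tmp/env-demo.txt)

# A child process (such as `make`) now sees the variables:
sh -c 'echo "$VM_TYPE"'   # prints: dind
```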

1.1 Installation using Kubeadm-DIND

If you have Kubeadm-DIND installed on your machine, use these commands to deploy the FfDL platform:

export VM_TYPE=dind
export PUBLIC_IP=localhost
export SHARED_VOLUME_STORAGE_CLASS="";
export NAMESPACE=default # If your namespace does not exist yet, please create the namespace `kubectl create namespace $NAMESPACE` before running the make commands below

make deploy-plugin
make quickstart-deploy

1.2 Installation using Kubernetes Cluster

To install FfDL on any standard Kubernetes cluster, make sure kubectl points to the right namespace, then deploy the platform services:

Note: For PUBLIC_IP, use one of your cluster's public IPs that can reach your cluster's NodePorts. On IBM Cloud, you can get your public IP with bx cs workers <cluster_name>.

export VM_TYPE=none
export PUBLIC_IP=<Cluster Public IP>
export NAMESPACE=default # If your namespace does not exist yet, please create the namespace `kubectl create namespace $NAMESPACE` before running the make commands below

# Change the storage class to what's available on your Cloud Kubernetes Cluster.
export SHARED_VOLUME_STORAGE_CLASS="ibmc-file-gold";

make deploy-plugin
make quickstart-deploy

2. Test

To submit a simple example training job that is included in this repo (see etc/examples folder):

make test-push-data-s3
make test-job-submit
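The submitted job is described by a training manifest. The sketch below shows the general shape of such a manifest with illustrative values only; the field names and values here are loosely based on the bundled examples and may differ between FfDL versions, so consult the etc/examples folder for the authoritative format:

```yaml
name: example_training_job          # illustrative job name
description: Sketch of an FfDL training manifest
version: "1.0"

# Resource requests for the training run.
gpus: 0
cpus: 0.5
memory: 1Gb

# Object storage buckets holding the training data and results
# (container names and credentials here are placeholders).
data_stores:
  - id: example-object-store
    type: mount_cos
    training_data:
      container: training_data_bucket
    training_results:
      container: trained_model_bucket
    connection:
      auth_url: http://s3.default.svc.cluster.local
      user_name: test
      password: test

# Framework image and entrypoint for the job.
framework:
  name: tensorflow
  version: "1.5.0"
  command: python3 train.py
```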

3. Monitoring

The platform ships with a simple Grafana monitoring dashboard. The URL is printed out when running the deploy make target.

4. Development

Please refer to the developer guide for more details.

5. Clean Up

If you want to remove FfDL from your cluster, simply use the following command.

helm delete $(helm list | grep ffdl | awk '{print $1}' | head -n 1)
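The inner pipeline extracts the release name from `helm list`: `grep ffdl` keeps the matching line, `awk '{print $1}'` takes the first column (the release name), and `head -n 1` guards against multiple matches. A sketch of the extraction against mocked output (the release names below are made up):

```shell
# Mocked `helm list` output (release names are hypothetical).
helm_list='NAME            REVISION  STATUS    CHART
ffdl-core       1         DEPLOYED  ffdl-0.1.0
other-release   2         DEPLOYED  nginx-1.0.0'

# Keep the ffdl line, take column 1, and stop after the first match.
release=$(printf '%s\n' "$helm_list" | grep ffdl | awk '{print $1}' | head -n 1)
echo "$release"   # prints: ffdl-core
```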

If you want to remove the storage driver and pvc from your cluster, run:

kubectl delete pvc static-volume-1
helm delete $(helm list | grep ibmcloud-object-storage-plugin | awk '{print $1}' | head -n 1)

For Kubeadm-DIND, you also need to kill your forwarded ports. Note that the command below kills every process whose open network connections are attributed to kubectl, i.e., all kubectl port-forwards.

kill $(lsof -i | grep kubectl | awk '{printf $2 " " }')
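The pipeline collects the PIDs of every kubectl process listed by `lsof -i`: `awk '{printf $2 " "}'` joins column 2 (the PID) of each matching line into one space-separated list, which becomes the argument list for `kill`. The extraction on mocked `lsof` output (the PIDs below are made up):

```shell
# Mocked `lsof -i` output (PIDs and commands are hypothetical).
lsof_out='COMMAND   PID USER   FD   TYPE
kubectl  1234 dev    7u   IPv4
kubectl  5678 dev    8u   IPv4
ssh      9012 dev    3u   IPv4'

# grep keeps the kubectl lines; awk joins column 2 (the PID) into a
# single space-separated list suitable as arguments to `kill`.
pids=$(printf '%s\n' "$lsof_out" | grep kubectl | awk '{printf $2 " "}')
echo "$pids"   # prints: 1234 5678
```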

6. Troubleshooting

  • FfDL has only been tested under macOS and Linux.
  • If glide install fails with an error complaining about non-existing paths (e.g., "Without src, cannot continue"), make sure to follow the standard Go directory layout (see the Prerequisites section).
  • To remove FfDL from your cluster, simply run make undeploy.
  • When using the FfDL CLI to train a model, make sure your directory path doesn't end with a slash (/).
  • If your job is stuck in the pending state, you can try to redeploy the plugin with helm install storage-plugin --set dind=true,cloud=false for Kubeadm-DIND, or helm install storage-plugin for a general Kubernetes cluster. Also double-check your training job manifest file to make sure you have the correct object storage credentials.

7. References

Based on IBM Research work in Deep Learning.