Deploy FfDL-Trained Models with Seldon

Seldon provides a deployment platform that exposes machine learning models via REST or gRPC endpoints. Runtime graphs of models, routers (e.g., A/B tests, multi-armed bandits), transformers (e.g., feature normalization), and combiners (e.g., ensemblers) can be described as a custom Kubernetes resource in JSON or YAML and then deployed, scaled, and managed.

Any FfDL model whose runtime inference can be packaged as a Docker container can be managed by Seldon.
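
As a minimal sketch of what such a container holds, Seldon's Python wrapper conventionally expects a class that exposes a predict method which the Seldon service container calls for each request. The class name, the model file path, and the PyTorch loading code below are illustrative assumptions, not part of this repository:

```python
# MnistClassifier.py -- illustrative Seldon Python model wrapper (a sketch,
# assuming Seldon's Python wrapper conventions). Class name and model path
# are hypothetical; adapt them to your own FfDL-trained model.
import torch


class MnistClassifier(object):
    def __init__(self):
        # Load the FfDL-trained PyTorch model baked into the Docker image.
        # "model.pt" is an assumed path inside the container.
        self.model = torch.load("model.pt", map_location="cpu")
        self.model.eval()

    def predict(self, X, feature_names):
        # X arrives as a numpy array of inputs; return class probabilities.
        with torch.no_grad():
            logits = self.model(torch.from_numpy(X).float())
            return torch.softmax(logits, dim=1).numpy()
```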

Install Seldon

To install Seldon on your Kubernetes cluster alongside FfDL, see here.

Deployment Steps

To deploy your models on Seldon, you need to:

  1. Wrap your runtime inference components as Docker containers that follow the Seldon APIs
  2. Describe the runtime graph as a custom Kubernetes SeldonDeployment resource
  3. Apply the graph via the Kubernetes API, e.g. using kubectl (a sketch of steps 2 and 3 follows this list)
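
The sketch below describes a single-model runtime graph as a SeldonDeployment resource and applies it through the Kubernetes API using the official Kubernetes Python client, the programmatic equivalent of writing the resource as YAML and running kubectl apply -f. The deployment name, container image, namespace, and API version are assumptions for illustration; check the resource schema of the Seldon version installed on your cluster.

```python
# deploy_graph.py -- illustrative sketch: describe a one-node runtime graph
# as a SeldonDeployment custom resource and create it via the Kubernetes API.
# Names, image, namespace, and apiVersion are assumptions, not prescribed
# by this repository.
from kubernetes import client, config

seldon_deployment = {
    "apiVersion": "machinelearning.seldon.io/v1alpha2",
    "kind": "SeldonDeployment",
    "metadata": {"name": "mnist-classifier"},
    "spec": {
        "name": "mnist-classifier",
        "predictors": [
            {
                "name": "default",
                "replicas": 1,
                # The runtime graph: a single MODEL node exposed over REST.
                "graph": {
                    "name": "classifier",
                    "type": "MODEL",
                    "endpoint": {"type": "REST"},
                },
                "componentSpecs": [
                    {
                        "spec": {
                            "containers": [
                                {
                                    "name": "classifier",
                                    # Docker image built from the wrapped model.
                                    "image": "my-registry/mnist-classifier:latest",
                                }
                            ]
                        }
                    }
                ],
            }
        ],
    },
}

config.load_kube_config()  # use the current kubectl context
client.CustomObjectsApi().create_namespaced_custom_object(
    group="machinelearning.seldon.io",
    version="v1alpha2",
    namespace="default",
    plural="seldondeployments",
    body=seldon_deployment,
)
```

The same resource can equally be written as a YAML file and applied with kubectl.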

Examples