
Chainer compiler: experimental toolchain to compile and run Chainer models


This is an experimental toolchain intended to be used with Chainer. The project aims to achieve a number of related goals, such as

  • Make Chainer models written in Python deployable without a Python runtime
  • Execute Chainer models efficiently by applying optimization techniques
  • Integrate Chainer with other systems or domain-specific chips
  • Serve as a playground for trying out algorithms for neural network frameworks

without sacrificing the flexibility and coverage of Chainer.

To achieve these goals, this toolchain

  • Translates Python ASTs to an extended ONNX format. Because this is a compiler rather than an execution tracer, it can export Python code that contains control flow (e.g., an LSTM with attention written with Python loops); see the model sketch after this list
  • Modifies the graph for optimization, auto-differentiation, etc., and then generates deployable code; see the graph-rewrite sketch after this list
  • Runs the exported code with ChainerX's C++ API. Currently, the only supported backend is a simple virtual machine built on top of ChainerX.
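
The following is a minimal sketch of the kind of model the first bullet refers to: a Chainer model whose forward pass contains real Python control flow (a loop over time steps). Everything here, including the StepRNN class and its names, is made up for illustration and is not code from this repository; the actual export frontends appear to live in the ch2o and elichika directories.

```python
# A hypothetical Chainer model with a Python-level loop in its forward pass.
# An execution tracer would unroll this loop for one fixed length, while an
# AST-to-ONNX compiler can export the loop itself.
import chainer
import chainer.functions as F
import chainer.links as L
import numpy as np


class StepRNN(chainer.Chain):
    """Applies a linear cell over a variable number of time steps."""

    def __init__(self, n_units):
        super().__init__()
        with self.init_scope():
            self.cell = L.Linear(n_units, n_units)

    def forward(self, xs):
        # Python-level loop over the time axis: the kind of control flow
        # this toolchain aims to preserve in the exported graph.
        h = xs[0]
        for x in xs[1:]:
            h = F.tanh(self.cell(h) + x)
        return h


model = StepRNN(8)
xs = [np.zeros((1, 8), dtype=np.float32) for _ in range(5)]
print(model.forward(xs).shape)  # (1, 8)
```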

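For the second bullet, the sketch below only illustrates the general idea of an ONNX graph rewrite; it uses the standard onnx Python helpers and a deliberately trivial pass (removing Identity nodes), and is not taken from this repository's own optimization or auto-differentiation passes.

```python
# A deliberately trivial ONNX graph rewrite (removing Identity nodes), shown
# only to illustrate what "modifying the graph" can mean.  This is NOT this
# repository's pass code.
import copy

import onnx


def strip_identity_nodes(model: onnx.ModelProto) -> onnx.ModelProto:
    graph = model.graph
    graph_outputs = {o.name for o in graph.output}
    # Map each removable Identity node's output name back to its input name.
    rename = {
        n.output[0]: n.input[0]
        for n in graph.node
        if n.op_type == "Identity" and n.output[0] not in graph_outputs
    }
    # Copy the surviving nodes and rewire their inputs past the removed nodes.
    kept = [copy.deepcopy(n) for n in graph.node if n.output[0] not in rename]
    for node in kept:
        for i, name in enumerate(node.input):
            while name in rename:  # follow chains of removed Identity nodes
                name = rename[name]
            node.input[i] = name
    del graph.node[:]  # graph.node is a protobuf repeated field
    graph.node.extend(kept)
    onnx.checker.check_model(model)
    return model
```
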
This project is still at an early stage and is not yet intended for end users. Interfaces can change quickly and some features may be abandoned. That said, we would appreciate it if you tried it out and gave us feedback. Also, importantly, we are hiring! If you are interested in working on deep learning frameworks, please consider applying to Preferred Networks.

Documentation