Add higher-order derivative support of Chainer to the comparison table #3477

Merged
merged 5 commits into from Mar 30, 2018
Changes from 3 commits
11 changes: 6 additions & 5 deletions docs/source/comparison.rst
@@ -19,14 +19,14 @@ This table compares Chainer with other actively developed deep learning frameworks
 ,"RNNs","full","full","full","full","partial","full","full","full","full","full","full","partial","partial","partial","partial"
 ,"Reverse-mode autograd","Y","Y","Y","Y",,"`torch-autograd <https://github.com/twitter/torch-autograd>`_","Y","Y","Y",,"Y","`ngraph <https://github.com/NervanaSystems/ngraph>`_","Y",,"with closures"
 ,"Forward-mode autograd",,,"`tensorflow-forward-ad <https://github.com/renmengye/tensorflow-forward-ad>`_","Y",,,,,,,,,,,
-,"Higher-order grads",,"Y","Y","Y",,,,,,,,,"Y",,
+,"Higher-order grads","Y [4]_","Y","Y","Y",,,,,,,,,"Y",,
 ,"Variable-length loops","native","native","while_loop","scan","RNNs only","native","2017","native","RNNs only","none","dynamic axis","none","native","none","native"
 ,"Different architectures per batch","native","native","`fold <https://github.com/tensorflow/fold>`_",,,"torch-autograd","`MinPy <https://github.com/dmlc/MinPy>`_","native",,,,,"native",,"native"
-"Performance","cuDNN support","full","full","partial","partial","full","full","full","partial","full","partial","full","N/A [4]_",,"partial",
+"Performance","cuDNN support","full","full","partial","partial","full","full","full","partial","full","partial","full","N/A [5]_",,"partial",
 ,"CPU/GPU generic backend","Y","Y",,,,"Y","Y","Y","Y","Y","Y","Y","Y",,"Y"
 ,"Multi-GPU data parallelism","Y","Y","Y","Y","Y","Y","Y",,"Y","Y","Y","Y","Y","Y",
 ,"Multi-GPU model parallelism","Y","Y","Y","Y","Y","Y","Y",,"Y",,"Y","Y",,,
-,"Multiprocessing [5]_","full","partial",,,,,,"full",,,,,,,
+,"Multiprocessing [6]_","full","partial",,,,,,"full",,,,,,,
 ,"Distributed training","`ChainerMN <https://github.com/chainer/chainermn>`_","THD","Y",,2017,"`torch-distlearn <https://github.com/twitter/torch-distlearn>`_","Y",,"Y","Spark","Y","Y",,,
 "Misc","Runtime debugging","debug mode, typechecking, pdb","pdb","tfdbg",,,,"Monitor","pdb",,"Java debuggers","cntk.debugging",,"Gallium.jl","gdb","pdb"
 ,"Trainer abstraction","native","`tnt <https://github.com/pytorch/tnt>`_",,"`Blocks <https://github.com/mila-udem/blocks>`_, `Lasagne <https://github.com/Lasagne/Lasagne>`_, `Keras <https://github.com/fchollet/keras>`_","native","`torchnet <https://github.com/torchnet/torchnet>`_",,,"native","native","native","native",,,"native"
@@ -37,8 +37,9 @@ This table compares Chainer with other actively developed deep learning frameworks
 .. [1] Define-by-run is in development as of June 2017 and tracked in `dmlc/mxnet#5705 <https://github.com/dmlc/mxnet/pull/5705>`_. It is also possible using the much slower MinPy extension.
 .. [2] Symbolic autograd is in development as of June 2017 and tracked in `deeplearning4j/nd4j#1750 <https://github.com/deeplearning4j/nd4j/pull/1750>`_.
 .. [3] Symbolic autograd is available only with ngraph backend (experimental).
-.. [4] Nervana provides kernels that are meant to compete with cuDNN.
-.. [5] Multiprocessing provides a significant performance improvement only for frameworks that use Python at runtime.
+.. [4] Some functions do not support higher-order differentiation. See `chainer/chainer#2970 <https://github.com/chainer/chainer/pull/2970>`_.
Reviewer comment (Member): Could you replace this with #4449 ?
+.. [5] Nervana provides kernels that are meant to compete with cuDNN.
+.. [6] Multiprocessing provides a significant performance improvement only for frameworks that use Python at runtime.

Benchmarks
----------