Demonstrations and core sub-routines associated with "softened gradient" based learning algorithms.

Robust gradient descent via back-propagation: A Chainer-based tutorial

This small repository provides a working example of a straightforward way to implement "robust gradient descent" learning algorithms for almost any neural network architecture using Chainer.

The core demonstration used in this tutorial is a numerical experiment evaluating the utility of robust gradient descent methods applied to neural networks, under the possibility of arbitrary outliers. This demo is included in the Jupyter notebook file demo.ipynb.
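To fix ideas before opening the notebook, here is a minimal NumPy sketch of one common way to robustify gradient descent against arbitrary outliers: replace the sample mean of the per-example gradients with a coordinate-wise median-of-means. This is an illustration of the general principle only, not the repository's algorithm; the toy data, the choice of estimator, and all parameter values below are our own assumptions.

```python
import numpy as np

def mom_gradient(per_example_grads, k=10, rng=None):
    """Coordinate-wise median-of-means over per-example gradients.

    Split the n gradients into k random blocks, average within each
    block, then take the coordinate-wise median of the block means.
    Robust as long as fewer than k/2 blocks are contaminated.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    idx = rng.permutation(per_example_grads.shape[0])
    blocks = np.array_split(per_example_grads[idx], k)
    block_means = np.stack([b.mean(axis=0) for b in blocks])
    return np.median(block_means, axis=0)

# Toy linear regression with a few gross outliers in the responses.
rng = np.random.default_rng(1)
n, d = 200, 3
X = rng.normal(size=(n, d))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=n)
y[:3] += 100.0  # arbitrary outliers

w = np.zeros(d)
for _ in range(200):
    residuals = X @ w - y            # shape (n,)
    grads = residuals[:, None] * X   # per-example squared-loss gradients
    w -= 0.1 * mom_gradient(grads, k=10, rng=rng)

print(np.round(w, 2))  # should land near w_true despite the outliers
```

A plain average of these gradients would be pulled far off target by the three corrupted responses; the median-of-means aggregate largely ignores them. The papers referenced below analyze more refined robust mean estimators, but the plumbing (per-example gradients in, a single robust update direction out) is the same.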

In addition to the software in this library, we provide a step-by-step tutorial which attempts to bridge the gap between the code and the concepts.

The learning algorithm that we use as an example here is analyzed in detail in some of our research papers.


The above demo was tested using Python 3.6 and Chainer 5.3.0. The required software can be assembled conveniently using conda. Assuming the user has conda installed, run the following:

$ conda update -n base conda
$ conda create -n chainrob python=3.6 scipy scikit-learn chainer jupyter pip matplotlib
$ conda activate chainrob
(chainrob) $ pip install Cython
(chainrob) $ pip install --ignore-installed --upgrade chainer
(chainrob) $ pip install environment_kernels

Additionally, when working with graph visualizations in Chainer, the output is written in a standardized graph description format called "DOT", with extension .dot. To work with files of this format, the graphviz utility is extremely useful. First install it using

$ sudo apt install graphviz

and then, to actually get to work, execute the following commands

$ conda activate chainrob
(chainrob) $ jupyter notebook

and subsequently select demo.ipynb from the list of files shown in-browser. With that, all should be good to go.
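For reference, DOT is just plain text describing nodes and edges. Below is a tiny hand-written file of the same general shape that Chainer's graph export (the chainer.computational_graph module) produces; the node names here are invented for illustration and do not come from the demo.

```python
# A miniature, hand-written DOT file (node names are made up):
# a single linear layer taking an input x and a weight W.
dot_src = """digraph g {
  x [label="input"];
  W [label="Linear/W"];
  h [label="LinearFunction"];
  x -> h;
  W -> h;
}
"""

with open("graph.dot", "w") as f:
    f.write(dot_src)

# Once graphviz is installed, render it from the shell with:
#   $ dot -Tpng graph.dot -o graph.png
```

The same `dot -Tpng` command works on the larger .dot files generated by Chainer.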

Author and maintainer:
Matthew J. Holland (Osaka University, ISIR)
