
added more to the tutorials on mlp
Ragav Venkatesan committed Feb 10, 2017
1 parent e8f8f63 commit a873c3d
Showing 4 changed files with 41 additions and 11 deletions.
13 changes: 9 additions & 4 deletions docs/source/index.rst
@@ -101,9 +101,9 @@ To install in a quick fashion without many dependencies, run the following command
pip install git+git://github.com/ragavvenkatesan/yann.git
If there was an error with installing ``skdata``, you might want to install ``numpy`` and ``scipy``
- independenyly first and then run the above command. Note that this installer, does not enable a lot
- of options of the toolbox for which you need to go through the complete install described at
- :ref:`setup`.
+ independently first and then run the above command. Note that this installer does not enable many
+ of the toolbox's options, for which you need to go through the complete install described at the
+ :ref:`setup` page.

The start and the end of the Yann toolbox is the :mod:`network` module. The ``network`` object
from :mod:`yann.network` is where all the magic happens. Start by importing :mod:`network` and creating a
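(The fold below hides the rest of this passage; a minimal sketch of this first step, assuming the
toolbox's standard import path, might be:)

.. code-block:: python

    # import the network class and instantiate an empty network object,
    # to which layers and modules are then added
    from yann.network import network

    net = network()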
@@ -171,7 +171,12 @@ constructed we can see that the ``net`` object has ``layers`` populated.
The keys of the dictionary such as ``'1'``, ``'0'`` and ``'2'`` are the ``id`` of the layer. We
could have created a layer with a custom id by supplying an ``id`` argument to the ``add_layer``
- method.
+ method. To get a better idea of what the network looks like, you can use the ``pretty_print``
+ method in yann.
+
+ .. code-block:: python
+
+     net.pretty_print()
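For instance, a layer with a custom ``id`` could have been added as below (a sketch; the remaining
arguments mirror the calls shown in the tutorials, and the id string is an arbitrary choice):

.. code-block:: python

    net.add_layer ( type = "dot_product",
                    origin = "input",
                    id = "my_hidden_layer",   # custom id instead of an auto-assigned '1'
                    num_neurons = 800,
                    activation = 'relu')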
Now our network is finally ready to be trained. Before training, we need to build an
:mod:`optimizer` and other tools, but for now let us use the default ones. Once all of this is done,
18 changes: 18 additions & 0 deletions docs/source/pantry/tutorials/matt2yann.rst
@@ -0,0 +1,18 @@
.. _mat2yann:

Cooking a MATLAB dataset for Yann.
==================================

By virtue of being here, it is assumed that you have gone through the :ref:`quick_start`.

.. Todo::

Code is done, but the text still needs to be written.

The full code for this tutorial with additional commentary can be found in the file
``pantry.tutorials.mat2yann.py``. If you have the toolbox cloned or downloaded, or just the
tutorials downloaded, run the code as:
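(Presumably by invoking the tutorial script directly; a sketch, assuming you run it from the root
of the repository:)

.. code-block:: bash

    python pantry/tutorials/mat2yann.py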

.. automodule:: pantry.tutorials.matt2yann
:members:

20 changes: 13 additions & 7 deletions docs/source/pantry/tutorials/mlp.rst
@@ -23,11 +23,14 @@ of fully connected hidden layers. Hidden layers can be created using layer ``type``
origin ="input",
id = "dot_product_1",
num_neurons = 800,
activation ='relu')
regularize = True,
activation ='relu')
net.add_layer (type = "dot_product",
origin ="dot_product_1",
id = "dot_product_2",
num_neurons = 800,
regularize = True,
activation ='relu')
Notice the parameters passed. ``num_neurons`` is the number of nodes in the layer. Notice also
@@ -51,6 +54,7 @@ implemented. Let us now add a classifier and an objective layer to this.
origin = "softmax",
)
Again notice that we have supplied a lot more arguments than before. Refer to the API for more details.
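The calls themselves are mostly folded above; a sketch of the full classifier and objective pair,
consistent with the visible tail (``num_classes = 10`` and the ``id`` values are illustrative
assumptions):

.. code-block:: python

    # a classifier layer that maps the last hidden layer to class probabilities
    net.add_layer ( type = "classifier",
                    id = "softmax",
                    origin = "dot_product_2",
                    num_classes = 10,
                    activation = 'softmax')

    # an objective layer that supplies the error to be optimized
    net.add_layer ( type = "objective",
                    id = "obj",
                    origin = "softmax")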
Let us create our own optimizer module this time instead of using the yann default. For any
``module`` in yann, the initialization can be done using the ``add_module`` method. The
``add_module`` method typically takes an input ``type``, which in this case is ``optimizer``, and a set
@@ -70,7 +74,8 @@ options. A typical ``optimizer setup`` is:
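The ``optimizer_params`` dictionary itself sits in the folded lines above; a sketch consistent
with the description that follows (Polyak momentum, RmsProp, :math:`L_1` and :math:`L_2`
regularization); key names and values here are illustrative assumptions:

.. code-block:: python

    optimizer_params = {
            "momentum_type"   : 'polyak',
            "momentum_params" : (0.9, 0.95, 30),   # assumed: (start, end, epoch to saturate at)
            "regularization"  : (0.0001, 0.0002),  # assumed: (l1 coefficient, l2 coefficient)
            "optimizer_type"  : 'rmsprop',
            "id"              : 'main'
                       }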
net.add_module ( type = 'optimizer', params = optimizer_params )
We have now successfully added Polyak momentum with RmsProp back propagation with some :math:`L_1`
- and :math:`L2` norms. This optimizer will therefore solve the following error:
+ and :math:`L_2` co-efficients that will be applied to the layers for which we passed the argument
+ ``regularize = True``. This optimizer will therefore solve the following error:

.. math::
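(The body of the equation is folded in this view. A standard form for such a regularized
objective, assuming :math:`l_1` and :math:`l_2` are the co-efficients supplied through
``regularization``, would be:)

.. math::

    e(w) = l(w) + l_1 \sum_i |w_i| + l_2 \sum_i w_i^2

where :math:`l(w)` is the network's unregularized error, such as a negative log-likelihood.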
@@ -99,18 +104,19 @@ ith layer of the network. Once we are done, we can cook, train and test as usual
show_progress = True,
early_terminate = True)
+ The ``learning_rate`` supplied here is a tuple. The first element indicates the annealing of a
+ linear rate, the second is the initial learning rate of the first era, and the third value is the
+ learning rate of the second era. Accordingly, ``epochs`` takes a tuple with the number of epochs
+ for each era.
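The ``cook`` and ``train`` calls are folded above; a sketch of how they might look with the
arguments discussed here, assuming the optimizer id ``'main'`` from the earlier sketch (values are
illustrative):

.. code-block:: python

    # cook the network: connect data, optimizer and objective into a training graph
    net.cook( optimizer = 'main' )

    # two eras of twenty epochs each; three learning rates: an annealing
    # parameter, the first-era rate and the second-era rate
    net.train( epochs = (20, 20),
               validate_after_epochs = 2,
               learning_rates = (0.05, 0.01, 0.001),
               show_progress = True,
               early_terminate = True)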

This time, let us not let it run all forty epochs; let us cancel in the middle after some epochs
by hitting ^c. Once it stops, let us immediately test and demonstrate that the ``net`` retains the
parameters as updated as possible. Once done, let us run ``net.test()``.

Some new arguments are introduced here and they are for the most part easy to understand in context.
``epochs`` represents a ``tuple`` which is the number of epochs of training and the number of epochs of
fine tuning after that. There could be several of these stages of finer tuning. Yann uses the
- term 'era' to represent each set of epochs running with one learning rate.
-
- ``learning_rates`` indicates the leanring rates. The fist element of this learning rate is a
- annealing parameter. ``learning_rates`` is naturally of length that is one higher than ``epochs``.
- ``show_progress`` will print a progress bar for each epoch. ``validate_after_epochs`` will perform
+ term 'era' to represent each set of epochs running with one learning rate. ``show_progress`` will
+ print a progress bar for each epoch. ``validate_after_epochs`` will perform
validation after that many epochs on a separate validation dataset. The full code for this tutorial
with additional commentary can be found in the file ``pantry.tutorials.mlp.py``. If you have the
toolbox cloned or downloaded, or just the tutorials downloaded, run the code as,
1 change: 1 addition & 0 deletions docs/source/tutorial.rst
@@ -19,6 +19,7 @@ tutorial just in case though.
pantry/tutorials/autoencoder
pantry/tutorials/lenet
pantry/tutorials/gan
+ pantry/tutorials/matt2yann

.. Todo::

