updated docs
Yurii Shevchuk committed Dec 2, 2018
1 parent 1eb8ead commit 8e98f94
Showing 5 changed files with 32 additions and 18 deletions.
5 changes: 3 additions & 2 deletions site/2016/11/12/mnist_classification.rst
@@ -1,7 +1,7 @@
.. _mnist-classification:

Image classification, MNIST digits
==================================

.. raw:: html

@@ -62,6 +62,7 @@ Notice the way division and subtraction are specified. In this way, we make upda

.. code-block:: python

    >>> import numpy as np
    >>> A = np.random.random((100, 10))
    >>> id(A)  # numbers will be different between runs
    4486892960
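
As a hedged illustration of the in-place idea (this snippet is not part of the original article), an in-place operation modifies the existing array object, so its ``id`` stays the same:

.. code-block:: python

    >>> import numpy as np
    >>> A = np.random.random((100, 10))
    >>> before = id(A)
    >>> A -= A / 2       # modifies A in place instead of rebinding the name
    >>> id(A) == before  # still the same array object
    True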
7 changes: 7 additions & 0 deletions site/docs/algorithms/sklearn-compatibility.rst
@@ -53,3 +53,10 @@ It's possible to use NeuPy in scikit-learn pipelines.
    pipeline.fit(x_train, y_train, backpropagation__epochs=1000)
    y_predict = pipeline.predict(x_test)

Issues
------

Not all features from the scikit-learn library can be used with NeuPy. Networks and training algorithms cannot be copied in a simple way, so any function or class from scikit-learn that depends on the ``clone`` function will fail. For example, a function like `cross_val_score <https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html>`_ will not work with NeuPy classes.

Also, copying a neural network might not be enough, because the weights of the network would be copied as well. Cross-validation on the copied network would not show the true performance, because the network was already pre-trained before it was copied.
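
A hedged sketch of a manual alternative (the network architecture, data and number of folds here are made up for illustration): instead of relying on ``clone``, build a brand new network for every fold, so that no pre-trained weights leak between folds.

.. code-block:: python

    import numpy as np
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import KFold

    from neupy import algorithms, layers

    def make_optimizer():
        # A fresh network with newly initialized weights on every call.
        network = layers.join(
            layers.Input(24),
            layers.Relu(12),
            layers.Softmax(10),
        )
        return algorithms.Momentum(network, step=0.1)

    # Random stand-in data, only to keep the sketch self-contained.
    x = np.random.random((200, 24))
    y = np.eye(10)[np.random.randint(0, 10, 200)]

    scores = []
    for train_index, test_index in KFold(n_splits=3).split(x):
        optimizer = make_optimizer()
        optimizer.train(x[train_index], y[train_index], epochs=10)
        y_predicted = optimizer.predict(x[test_index])
        scores.append(accuracy_score(
            y[test_index].argmax(axis=1),
            y_predicted.argmax(axis=1),
        ))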
2 changes: 2 additions & 0 deletions site/docs/algorithms/train-on-gpu.rst
@@ -2,3 +2,5 @@ Train network on GPU
====================

NeuPy is based on the `Tensorflow <https://tensorflow.org/>`_ framework, which means that you can easily train neural networks with constructible architectures on a GPU.

Training on a GPU doesn't require any modifications to your code. If you want to use a specific GPU among the available options, you just need to use the ``tf.device`` context manager. See the official TensorFlow documentation `here <https://www.tensorflow.org/api_docs/python/tf/device>`_.
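
A minimal sketch of how that might look, assuming the small network from the quick start guide and a machine with at least one GPU (the device string ``'/device:GPU:0'`` refers to the first GPU):

.. code-block:: python

    import numpy as np
    import tensorflow as tf
    from neupy import algorithms, layers

    # Random stand-in data, only to keep the sketch self-contained.
    x_train = np.random.random((100, 24))
    y_train = np.eye(10)[np.random.randint(0, 10, 100)]

    # Operations created inside the context manager are pinned to the first GPU.
    with tf.device('/device:GPU:0'):
        network = layers.join(
            layers.Input(24),
            layers.Relu(12),
            layers.Softmax(10),
        )
        optimizer = algorithms.Momentum(network, step=0.1)
        optimizer.train(x_train, y_train, epochs=10)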
34 changes: 19 additions & 15 deletions site/docs/quickstart.rst
@@ -1,53 +1,57 @@
Quick start
===========

This guide provides a basic overview of the NeuPy library. For more detailed examples, see the articles and examples available on the `tutorials <http://neupy.com/docs/tutorials.html>`_ page.

Building model
--------------

NeuPy provides a very simple and intuitive interface for building neural networks. A simple architecture can be defined with the help of the inline operator.

.. code-block:: python

    from neupy.layers import *
    network = Input(24) > Relu(12) > Softmax(10)

An inline connection is a suitable choice for very small networks. Large networks can be defined with the help of the ``join`` operator.

.. code-block:: python

    network = join(
        Input(24),
        Relu(12),
        Softmax(10),
    )

Inline connections can also improve readability in large networks when only some groups of layers are defined with the help of this operator. See :doc:`Subnetworks <layers/basics#subnetworks>` to learn more, and the short sketch below.
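
A minimal, hedged sketch of that style (layer sizes are made up for illustration): the two hidden layers are chained with the inline operator and form one visual group, while everything else is listed inside ``join``.

.. code-block:: python

    from neupy.layers import *

    network = join(
        Input(24),
        # The hidden layers are grouped on a single line.
        Relu(16) > Relu(12),
        Softmax(10),
    )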

Training
--------

Training can be done in two simple steps. First, we need to specify the training algorithm. Second, we need to pass the training data and specify the number of training epochs. In NeuPy, this takes two lines of code.

.. code-block:: python

    from neupy import algorithms
    optimizer = algorithms.Momentum(network, step=0.1, verbose=True)
    optimizer.train(x_train, y_train, x_test, y_test, epochs=100)
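
The snippet above assumes that ``x_train``, ``y_train``, ``x_test`` and ``y_test`` already exist as numpy arrays whose shapes match the ``Input(24)`` and ``Softmax(10)`` layers. A made-up stand-in, just to make the example runnable, could look like this:

.. code-block:: python

    import numpy as np

    # 24 input features and 10 one-hot encoded classes, matching the network above.
    x_train, x_test = np.random.random((100, 24)), np.random.random((20, 24))
    y_train = np.eye(10)[np.random.randint(0, 10, 100)]
    y_test = np.eye(10)[np.random.randint(0, 10, 20)]
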
Evaluation
----------

After the training, we can propagate test inputs through the network and get predictions.

.. code-block:: python

    y_predicted = optimizer.predict(x_test)
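
Since the network ends with a ``Softmax(10)`` layer, each row of ``y_predicted`` holds class probabilities. A common follow-up step (not part of the original guide) is to pick the most probable class:

.. code-block:: python

    import numpy as np

    # The index of the highest probability in each row becomes the predicted label.
    y_labels = np.argmax(y_predicted, axis=1)
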
What's next?
------------

There are a few more tutorials that can help you start working with NeuPy. You can visit the `tutorials <http://neupy.com/docs/tutorials.html>`_ page or click on one of the links below.

* :ref:`mnist-classification`
* :ref:`boston-house-price`

You can find additional information about the library in the `documentation <http://neupy.com/pages/documentation.html>`_.
2 changes: 1 addition & 1 deletion site/docs/transfer-learning.rst
@@ -16,7 +16,7 @@ When we load it by default it has randomly generated parameters. We can load pre
.. code-block:: python

    from neupy import storage
    storage.load(vgg16, '/path/to/vgg16.hdf5')
We can check what input and output shapes the network expects.
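
A hedged sketch of what that check might look like (the ``input_shape`` and ``output_shape`` attributes follow other NeuPy examples, and the printed values are simply the standard VGG16 shapes, so treat them as assumptions):

.. code-block:: python

    >>> vgg16.input_shape
    (224, 224, 3)
    >>> vgg16.output_shape
    (1000,)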

