Commit
reorganized files and ToC
bfortuner committed Apr 20, 2017
1 parent ff8cac3 commit 7100d4c
Showing 9 changed files with 206 additions and 66 deletions.
44 changes: 44 additions & 0 deletions docs/activation_functions.rst
@@ -0,0 +1,44 @@
.. _activation_functions:

====================
Activation functions
====================

.. toctree::
:maxdepth: 1
:titlesonly:

ELU
===

Be the first to contribute!


LeakyReLU
=========

Be the first to contribute!


ReLU
====

Be the first to contribute!


Sigmoid
=======

Be the first to contribute!


Softmax
=======

Be the first to contribute!


Tanh
====

Be the first to contribute!
26 changes: 8 additions & 18 deletions docs/algorithms.rst → docs/basics.rst
@@ -1,17 +1,14 @@
.. _algorithms:
.. _basics:

===============================
Algorithms
===============================

.. toctree::
:maxdepth: 1
======
Basics
======

Fundamental machine learning algorithms and concepts


Linear Regression
===================
Linear regression
=================

Linear regression is used when a model's predicted output is continuous and has a
constant slope. At its most basic, it takes the form of:
@@ -52,15 +49,8 @@ References:
* <https://en.wikipedia.org/wiki/Linear_regression>

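A minimal illustrative sketch (added here for clarity, not part of this commit) of fitting ``y = m*x + b`` with gradient descent; the toy data, learning rate, and iteration count are assumptions for the example:

.. code-block:: python

    # Fit y = m*x + b by gradient descent on mean squared error.
    # Toy data and hyperparameters below are illustrative assumptions.
    xs = [1.0, 2.0, 3.0, 4.0]
    ys = [2.0, 4.1, 6.2, 7.9]
    m, b, lr = 0.0, 0.0, 0.01

    for _ in range(2000):
        # Gradients of mean squared error with respect to m and b
        grad_m = sum(2 * ((m * x + b) - y) * x for x, y in zip(xs, ys)) / len(xs)
        grad_b = sum(2 * ((m * x + b) - y) for x, y in zip(xs, ys)) / len(xs)
        m -= lr * grad_m
        b -= lr * grad_b

    print(m, b)  # learned slope and intercept
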

Logistic Regression
Logistic regression
===================

The bread and butter of neural networks is *affine transformations*: a
vector is received as input and is multiplied with a matrix to produce an
output (to which a bias vector is usually added before passing the result
through a nonlinearity). This is applicable to any type of input, be it an
image, a sound clip or an unordered collection of features: whatever their
dimensionality, their representation can always be flattened into a vector
before the transformation.
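
As an aside for readers of this diff, a minimal sketch of the affine transformation described above (an illustration with assumed shapes and values, not code from the repository):

.. code-block:: python

    import numpy as np

    x = np.array([0.5, -1.2, 3.0])   # flattened input vector (example values)
    W = np.random.randn(2, 3)        # weight matrix mapping 3 inputs to 2 outputs
    b = np.zeros(2)                  # bias vector

    z = W @ x + b                    # affine transformation
    y = 1.0 / (1.0 + np.exp(-z))     # pass the result through a nonlinearity (sigmoid)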

Be the first to contribute!

8 changes: 4 additions & 4 deletions docs/terms.rst → docs/glossary.rst
@@ -1,8 +1,8 @@
.. _terms:
.. _glossary:

===========
Terminology
===========
========
Glossary
========

Definitions of common machine learning terms

6 changes: 3 additions & 3 deletions docs/index.rst
@@ -13,8 +13,8 @@ Topics
.. toctree::
:maxdepth: 2

terms
algorithms
glossary
basics
math
nn
deep_learning
@@ -23,7 +23,7 @@ Topics
Contributing
============

Join the team! Check out our `github <http://github.com/bfortuner/ml-glossary/>`_ for more information.
Join the team! Check out our `github <http://github.com/bfortuner/ml-cheatsheet/>`_ for more information.

Indices and tables
==================
54 changes: 54 additions & 0 deletions docs/loss_functions.rst
@@ -0,0 +1,54 @@
.. _loss_functions:

==============
Loss functions
==============


Cross-Entropy Loss
==================

Be the first to contribute!


Hinge Loss
==========

Be the first to contribute!


Kullback-Leibler divergence
===========================

Be the first to contribute!


L1 Loss
=======

Be the first to contribute!


L2 Loss
=======

Be the first to contribute!


Maximum Likelihood
==================

Be the first to contribute!


Mean Squared Error
==================

Be the first to contribute!




**References**

* http://rishy.github.io/ml/2015/07/28/l1-vs-l2-loss/
50 changes: 10 additions & 40 deletions docs/nn.rst
@@ -1,48 +1,18 @@
.. _nn:

===============================
===============
Neural networks
===============================

.. toctree::
:maxdepth: 1

Basic neural network concepts

Basics
======

Be the first to contribute!

Forward propagation
===================

.. literalinclude:: ../code/nn.py
:language: python
:pyobject: MyClass

Backpropagation
===============

Be the first to contribute!

Activation Functions
====================

Be the first to contribute!

Loss Functions
==============

Be the first to contribute!
Neural networks are a class of machine learning algorithms used to model complex patterns in datasets using multiple hidden layers and non-linear activation functions. A neural network takes an input, passes it through multiple layers of hidden neurons (mini-functions with unique coefficients that must be learned), and outputs a prediction representing the combined signal of all the neurons.

Neural networks are trained iteratively using optimization techniques like gradient descent. After each cycle of training, an error metric is calculated based on the difference between prediction and target. The derivatives of this error metric are calculated and propagated back through the network using a technique called backpropagation. Each neuron's coefficients (weights) are then adjusted relative to how much they contributed to the total error. This process is repeated until the network error drops below an acceptable threshold.
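
To make the training loop concrete, here is a minimal sketch of one hidden layer trained with gradient descent (an illustration added alongside this text, not code from the repository; the layer sizes, toy data, and learning rate are assumptions):

.. code-block:: python

    import numpy as np

    # Tiny network: 2 inputs -> 3 hidden sigmoid neurons -> 1 sigmoid output.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # toy inputs
    y = np.array([[0], [1], [1], [0]], dtype=float)              # toy targets

    np.random.seed(0)
    W1, b1 = np.random.randn(2, 3), np.zeros(3)
    W2, b2 = np.random.randn(3, 1), np.zeros(1)
    lr = 1.0

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        # Forward pass: input -> hidden layer -> prediction
        h = sigmoid(X @ W1 + b1)
        pred = sigmoid(h @ W2 + b2)

        # Error gradient at the output (squared-error loss, sigmoid derivative)
        grad_pred = (pred - y) * pred * (1 - pred)

        # Backpropagation: push the gradient back through each layer
        grad_W2 = h.T @ grad_pred
        grad_b2 = grad_pred.sum(axis=0)
        grad_h = grad_pred @ W2.T * h * (1 - h)
        grad_W1 = X.T @ grad_h
        grad_b1 = grad_h.sum(axis=0)

        # Adjust each weight relative to its contribution to the error
        W2 -= lr * grad_W2; b2 -= lr * grad_b2
        W1 -= lr * grad_W1; b1 -= lr * grad_b1

Each pass repeats forward propagation, error measurement, backpropagation, and a weight update, which is the cycle described in the paragraph above.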

Optimizers
==========
**Topics**

Be the first to contribute!

Layers
======

Be the first to contribute!
.. toctree::
:maxdepth: 1
:titlesonly:

nn_concepts
activation_functions
loss_functions
optimizers
30 changes: 30 additions & 0 deletions docs/nn_concepts.rst
@@ -0,0 +1,30 @@
.. _nn_concepts:

==============
Basic concepts
==============

Basic concepts in neural networks

Neurons
=======

Intro to neurons


Hidden Layers
=============

Explanation of how layers work


Forward propagation
===================

Intro to forward propagation


Backpropagation
===============

Intro to Backpropagation
52 changes: 52 additions & 0 deletions docs/optimizers.rst
@@ -0,0 +1,52 @@
.. _optimizers:

==========
Optimizers
==========


Adadelta
========

Be the first to contribute!


Adagrad
=======

Be the first to contribute!


Adam
====

Be the first to contribute!


L-BFGS
======

Be the first to contribute!


Momentum
========

Be the first to contribute!


RMSProp
=======

Be the first to contribute!


SGD
===

Be the first to contribute!


**References**

* http://sebastianruder.com/optimizing-gradient-descent/
2 changes: 1 addition & 1 deletion docs/resources.rst
Expand Up @@ -30,7 +30,7 @@ Papers

Be the first to contribute!

Libraries and Frameworks
Libraries and frameworks
========================

Be the first to contribute!
