Update loss_functions.rst

bfortuner committed Nov 29, 2017
The graph above shows the range of possible loss values given a true observation.

.. note::

   Cross-entropy and log loss are slightly different depending on context, but in machine learning when calculating error rates between 0 and 1 they resolve to the same thing.

.. rubric:: Code

If :math:`M > 2` (i.e. multiclass classification), we calculate a separate loss for each class label per observation and sum the result.
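As a complement to the included code, here is a minimal NumPy sketch of the binary case; the `cross_entropy` helper below is illustrative only, not the cheatsheet's actual implementation:

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Binary cross-entropy (log loss), averaged over observations."""
    y_pred = np.clip(y_pred, eps, 1.0 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred)
                    + (1.0 - y_true) * np.log(1.0 - y_pred))
```

Note the clipping of predictions away from 0 and 1, a common guard against taking the log of zero.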
Hinge
=====

Used for classification, most notably with maximum-margin classifiers such as SVMs.

.. rubric:: Code

.. literalinclude:: ../code/loss_functions.py
   :pyobject: Hinge
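For reference, a minimal sketch of hinge loss, assuming labels in :math:`\{-1, +1\}` and raw (unthresholded) scores; the `hinge` helper is illustrative, not the repository's `Hinge` class:

```python
import numpy as np

def hinge(y_true, y_pred):
    """Hinge loss: zero inside the margin, linear penalty outside it."""
    return np.mean(np.maximum(0.0, 1.0 - y_true * y_pred))
```

Correctly classified points with a margin of at least 1 contribute no loss.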


.. _huber_loss:

Huber
=====

Typically used for regression. It's less sensitive to outliers than the MSE.

.. rubric:: Code

.. literalinclude:: ../code/loss_functions.py
   :pyobject: Huber
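The outlier robustness comes from switching between a quadratic and a linear regime at a threshold :math:`\delta`. A minimal sketch (the `huber` helper and its default `delta=1.0` are illustrative assumptions, not the repository's implementation):

```python
import numpy as np

def huber(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic for small residuals, linear for large ones."""
    r = y_true - y_pred
    quadratic = 0.5 * r ** 2
    linear = delta * (np.abs(r) - 0.5 * delta)
    return np.mean(np.where(np.abs(r) <= delta, quadratic, linear))
```

The linear branch is offset by :math:`\tfrac{1}{2}\delta^2` so the two pieces join smoothly at :math:`|r| = \delta`.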


.. _kl_divergence:

Kullback-Leibler
================

Measures how one probability distribution diverges from a second, reference distribution.

.. rubric:: Code

.. literalinclude:: ../code/loss_functions.py
   :pyobject: KLDivergence
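A minimal sketch for discrete distributions; the `kl_divergence` helper and its clipping guard are illustrative assumptions, not the repository's `KLDivergence` implementation:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL divergence D(P || Q) between two discrete distributions."""
    p = np.clip(p, eps, 1.0)  # avoid log(0) and division by zero
    q = np.clip(q, eps, 1.0)
    return np.sum(p * np.log(p / q))
```

KL divergence is zero when the distributions match and is not symmetric: :math:`D(P \| Q) \ne D(Q \| P)` in general.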


.. _l1_loss:

L1
==

See [6] and [10] for excellent overviews.

.. rubric:: Code

.. literalinclude:: ../code/loss_functions.py
   :pyobject: L1
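A minimal sketch of L1 loss as the sum of absolute differences (the `l1` helper is illustrative; averaged instead of summed, the same quantity is the mean absolute error):

```python
import numpy as np

def l1(y_true, y_pred):
    """L1 loss: sum of absolute differences between targets and predictions."""
    return np.sum(np.abs(y_true - y_pred))
```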


.. _l2_loss:

L2
==

See [6] and [10] for excellent overviews.

.. rubric:: Code

.. literalinclude:: ../code/loss_functions.py
   :pyobject: L2
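A minimal sketch of L2 loss as the sum of squared differences (the `l2` helper is illustrative; averaged instead of summed, this becomes MSE):

```python
import numpy as np

def l2(y_true, y_pred):
    """L2 loss: sum of squared differences between targets and predictions."""
    return np.sum((y_true - y_pred) ** 2)
```

Squaring penalizes large residuals much more heavily than L1 does, which is why L2 is the more outlier-sensitive of the two.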


.. _mle:

Maximum Likelihood
==================

Chooses the parameters that maximize the probability of the observed data; minimizing the negative log-likelihood is the equivalent loss formulation.

.. rubric:: Code

.. literalinclude:: ../code/loss_functions.py
   :pyobject: MLE
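To illustrate the principle with the simplest closed-form case (a hypothetical `bernoulli_mle` helper, not the repository's `MLE` code): for coin-flip data, the parameter that maximizes the likelihood is just the sample mean.

```python
import numpy as np

def bernoulli_mle(samples):
    """MLE of the Bernoulli parameter p: the sample mean maximizes the
    likelihood (equivalently, minimizes the negative log-likelihood)."""
    return np.mean(samples)
```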


.. _mse:
MSE
===

Description of MSE...
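A minimal sketch of mean squared error (the `mse` helper is illustrative, not the repository's implementation):

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: squared differences averaged over observations."""
    return np.mean((y_true - y_pred) ** 2)
```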
.. [4] http://www.exegetic.biz/blog/2015/12/making-sense-logarithmic-loss/
.. [5] http://neuralnetworksanddeeplearning.com/chap3.html
.. [6] http://rishy.github.io/ml/2015/07/28/l1-vs-l2-loss/
.. [7] https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient
.. [8] https://en.wikipedia.org/wiki/Huber_loss
.. [9] https://en.wikipedia.org/wiki/Hinge_loss
.. [10] http://www.chioka.in/differences-between-l1-and-l2-as-loss-function-and-regularization/
