add docs for ZeroOneLoss
Evizero committed Nov 5, 2016
1 parent 5feac09 commit 4e955a0
Showing 3 changed files with 24 additions and 3 deletions.
2 changes: 0 additions & 2 deletions docs/losses/interface.rst
@@ -184,8 +184,6 @@ that can be queried by functions defined in *LearnBase.jl*.

.. function:: isclipable(loss)

.. function:: islipschitzcont_deriv(loss)

.. function:: ismarginbased(loss)

.. function:: isclasscalibrated(loss)
16 changes: 15 additions & 1 deletion docs/losses/margin.rst
@@ -8,7 +8,21 @@ that are implemented in this package.

Margin-based Losses (Classification)

Note: the ``ZeroOneLoss`` itself is not margin-based.
ZeroOneLoss
------------

.. class:: ZeroOneLoss

    The classical classification loss. It penalizes every
    misclassified observation with a loss of `1`, while every
    correctly classified observation incurs a loss of `0`.
    It is neither convex nor continuous and is thus seldom used
    directly. Instead one usually works with a
    classification-calibrated surrogate loss, such as one of those
    listed below.

    .. math::

        L(a) = \begin{cases} 1 & \quad \text{if } a < 0 \\ 0 & \quad \text{if } a \geq 0 \end{cases}

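The case distinction above can be sketched in a few lines of Python. This is a standalone illustration of the formula only, not the LossFunctions.jl implementation, and the function name `zero_one_loss` is hypothetical:

```python
def zero_one_loss(agreement):
    """Zero-one loss on the margin value a = y * f(x).

    Returns 1 for a misclassified observation (agreement < 0),
    and 0 otherwise (the decision boundary counts as correct).
    """
    return 1 if agreement < 0 else 0

# Misclassified: true label +1, predicted score -0.5 -> agreement -0.5
assert zero_one_loss(-0.5) == 1
# Correctly classified, including the boundary case a = 0
assert zero_one_loss(0.0) == 0
assert zero_one_loss(2.3) == 0
```

Note that the loss ignores the magnitude of the agreement value; any surrogate loss listed below differs precisely in that it does not.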
PerceptronLoss
---------------
9 changes: 9 additions & 0 deletions src/supervised/margin.jl
@@ -7,6 +7,15 @@
doc"""
ZeroOneLoss <: MarginLoss
The classical classification loss. It penalizes every misclassified
observation with a loss of `1`, while every correctly classified
observation incurs a loss of `0`.
It is neither convex nor continuous and is thus seldom used directly.
Instead one usually works with a classification-calibrated
surrogate loss, such as one of those listed below.
`L(a) = a < 0 ? 1 : 0`
(ASCII plots of the loss function and its derivative follow in the full docstring; truncated in this diff view.)
