Commit 761ec2a

Fix typo.

akshayka committed Nov 6, 2018
1 parent c03cfb3 commit 761ec2a
Showing 2 changed files with 24 additions and 10 deletions.
18 changes: 13 additions & 5 deletions html/elghaoui2018lifted.html
@@ -49,12 +49,20 @@ <h1 id="lifted-neural-networks-el-ghaoui-2018httpsarxivorgabs180501532"><a href=
 problem; these representations are encoded in the problem of training a
 neural network via penalties. The key to arriving at such a representation
 is to <em>lift</em> the standard neural network optimization problem into a
-higher-dimensional space by introducing for each layer a variable representing
-its output layer, representing each activation function as a argmin of a
+higher-dimensional space by</p>
+
+<ol>
+<li>introducing for each layer a variable representing
+its output layer,</li>
+<li>presenting each activation function as an argmin of a
 divergence function that is convex in each argument (but not necessarily
-jointly convex in both arguments), and by coercing the divergences to be small
-via penalization. For this reason, El Ghaoui and his co-authors say that
-instances of this family are “lifted” neural networks.</p>
+jointly convex in both arguments), and</li>
+<li>coercing the divergences to be small
+via penalization.</li>
+</ol>
+
+<p>El Ghaoui and his co-authors refer to neural networks that have been
+rewritten in this way as “lifted” neural networks.</p>
 
 <p>The upshot: Any lifted neural network can be optimized in a block-coordinate,
 gradient-free fashion using well-known algorithms for convex optimization,
16 changes: 11 additions & 5 deletions summaries/elghaoui2018lifted.md
@@ -5,12 +5,18 @@ each activation function is represented as an argmin of a convex optimization
 problem; these representations are encoded in the problem of training a
 neural network via penalties. The key to arriving at such a representation
 is to *lift* the standard neural network optimization problem into a
-higher-dimensional space by introducing for each layer a variable representing
-its output layer, representing each activation function as a argmin of a
+higher-dimensional space by
+
+1. introducing for each layer a variable representing
+its output layer,
+2. presenting each activation function as an argmin of a
 divergence function that is convex in each argument (but not necessarily
-jointly convex in both arguments), and by coercing the divergences to be small
-via penalization. For this reason, El Ghaoui and his co-authors say that
-instances of this family are "lifted" neural networks.
+jointly convex in both arguments), and
+3. coercing the divergences to be small
+via penalization.
+
+El Ghaoui and his co-authors refer to neural networks that have been
+rewritten in this way as "lifted" neural networks.
 
 The upshot: Any lifted neural network can be optimized in a block-coordinate,
 gradient-free fashion using well-known algorithms for convex optimization,
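To make the lifting described in the summary concrete, here is a minimal NumPy sketch (not the authors' code) for a one-hidden-layer ReLU regression network. It uses the fact that relu(z) = argmin_{x >= 0} (x - z)^2: the hidden activation X1 becomes an explicit variable, the divergence ||X1 - W1 X0||^2 is penalized with weight lam, and training alternates convex block updates instead of back-propagating. All names, the ridge regularization, and the projected-gradient inner solver are illustrative choices, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out, n_samples = 5, 8, 3, 200
X0 = rng.standard_normal((n_in, n_samples))   # inputs, one column per sample
Y = rng.standard_normal((n_out, n_samples))   # regression targets
lam, ridge = 1.0, 1e-3                        # divergence penalty and ridge weights

W1 = 0.1 * rng.standard_normal((n_hid, n_in))
W2 = 0.1 * rng.standard_normal((n_out, n_hid))
X1 = np.maximum(W1 @ X0, 0.0)                 # hidden activations, now a free variable


def ridge_ls(A, B, reg):
    """Closed-form solution of min_W ||A - W @ B||_F^2 + reg * ||W||_F^2."""
    return A @ B.T @ np.linalg.inv(B @ B.T + reg * np.eye(B.shape[0]))


for _ in range(50):
    # Weight blocks: ordinary (ridge) least squares, convex once X1 is fixed.
    W2 = ridge_ls(Y, X1, ridge)
    W1 = ridge_ls(X1, X0, ridge)
    # Activation block: min_{X1 >= 0} ||Y - W2 X1||^2 + lam * ||X1 - W1 X0||^2.
    # The ReLU enters only through the nonnegativity constraint; this convex
    # subproblem is solved approximately with a few projected-gradient steps.
    step = 1.0 / (np.linalg.norm(W2, 2) ** 2 + lam)
    for _ in range(10):
        grad = W2.T @ (W2 @ X1 - Y) + lam * (X1 - W1 @ X0)
        X1 = np.maximum(X1 - step * grad, 0.0)

pred = W2 @ np.maximum(W1 @ X0, 0.0)          # standard feed-forward prediction
print("training MSE:", float(np.mean((pred - Y) ** 2)))
```

Each block update is a convex problem, so no gradients of the composed network are ever needed; this is the block-coordinate, gradient-free training that the summary's final paragraph refers to.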
