
Commit

Fix links
akshayka committed Nov 6, 2018
1 parent 304d47c commit 9e44694
Showing 9 changed files with 33 additions and 186 deletions.
15 changes: 8 additions & 7 deletions generate_index.py
@@ -1,7 +1,6 @@
 import collections
 import glob
 import re
-import pdb
 
 PDF_META = 'PDF'
 MD_WILDCARD = 'summaries/*.md'
@@ -10,12 +9,14 @@
 
 md_files = sorted(glob.glob(MD_WILDCARD))
 html_files = sorted(glob.glob(HTML_WILDCARD))
+assert(len(md_files) == len(html_files))
 
-ListItem = collections.namedtuple('ListItem', ['title', 'author', 'year', 'url'])
+ListItem = collections.namedtuple(
+    'ListItem', ['title', 'author', 'year', 'url'])
 list_items = []
 
 for md_file, html_file in zip(md_files, html_files):
-    with open(md_file, 'rb') as f:
+    with open(md_file, 'r') as f:
         title_line = f.readline()
         match = re.search(TITLE_PATTERN, title_line)
         title = match.group(1)
@@ -31,8 +32,8 @@
 
 list_items = sorted(list_items, key=lambda t: t.year)
 
-print '<ol>'
+print('<ol>')
 for list_item in list_items:
-    print '<li><a href="%s">%s <span class="year">(%s %d)</span></a></li>' % (
-        list_item.url, list_item.title, list_item.author, list_item.year)
-print '</ol>'
+    print('<li><a href="%s">%s <span class="year">(%s %d)</span></a></li>' % (
+        list_item.url, list_item.title, list_item.author, list_item.year))
+print('</ol>')
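
For reference, the patched script is now valid Python 3 end to end. A minimal, self-contained sketch of the same pattern — with an assumed `TITLE_PATTERN` and an in-memory stand-in for the first line of a summaries/*.md file (the real pattern and file list live earlier in generate_index.py) — looks like:

```python
import collections
import re

# Assumed pattern: the real TITLE_PATTERN is defined earlier in the script.
TITLE_PATTERN = r'^#\s*(.+)$'

ListItem = collections.namedtuple(
    'ListItem', ['title', 'author', 'year', 'url'])

# In-memory stand-in for the first line of one summaries/*.md file.
first_line = '# The Method of Projections for Finding the Common Point of Convex Sets'
match = re.search(TITLE_PATTERN, first_line)
title = match.group(1)

# Sort entries by year, as the real script does.
list_items = sorted(
    [ListItem(title, 'Gubin', 1966, 'html/gubin1966projections.html')],
    key=lambda t: t.year)

print('<ol>')
for list_item in list_items:
    print('<li><a href="%s">%s <span class="year">(%s %d)</span></a></li>' % (
        list_item.url, list_item.title, list_item.author, list_item.year))
print('</ol>')
```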
2 changes: 1 addition & 1 deletion html/chu2013socp-codegen.html
@@ -52,7 +52,7 @@ <h2 id="novel-contributions">Novel contributions</h2>
 
 <ol>
 <li>SOCPs encompass a large class of convex programs; previous canonicalization
-suites targeted smaller problem clases, like QPs or QCQPs. Recall
+suites targeted smaller problem classes, like QPs or QCQPs. Recall
 that linear programs, QPs, and QCQPs are all SOCPs.</li>
 <li>Problem parameters must enter through specific functions; this interface
 allows the code generator to circumvent all floating point operations.</li>
4 changes: 2 additions & 2 deletions html/dunning2016jump.html
@@ -46,7 +46,7 @@ <h1 id="jump-a-modeling-language-for-mathematical-optimization-duninng-2016https
 
 <p><strong>Nut graf</strong>: <a href="https://jump.readthedocs.io/en/latest/index.html">JuMP</a> is a
 Julia-embedded modeling language for mathematical optimization that was written
-with performance in mind; its salient features include automatic differentation
+with performance in mind; its salient features include automatic differentiation
 of user-defined functions, efficient parsing of updated problems, support for
 user-defined problems and solve methods, and solver callbacks. JuMP targets
 linear, mixed-integer, quadratic, conic-quadratic, semidefinite, and nonlinear
@@ -60,7 +60,7 @@ <h2 id="background">Background</h2>
 a particular standard form, and it is the responsibility of the client to
 express her problem in an acceptable fashion. It is often not
 at all obvious to clients how to encode their problems using the solving
-substrate's standard form; this encoding proces may require
+substrate's standard form; this encoding proces may require
 clever re-expressions of the original problem or onerous stuffing of
 the problem's constraints into a rigid matrix structure.</p>
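
As a hypothetical illustration (not from the paper) of the matrix stuffing this excerpt alludes to: encoding "minimize |x1| + x2 subject to x1 + x2 >= 1" in a solver's rigid "min c^T z s.t. A z <= b" form requires an epigraph re-expression of the absolute value (|x1| <= t, i.e. x1 <= t and -x1 <= t) and hand-placement of each coefficient:

```python
import numpy as np

# Hypothetical example: canonicalize
#   minimize |x1| + x2   subject to   x1 + x2 >= 1
# into  min c^T z  s.t.  A z <= b,  with z = (x1, x2, t) and |x1| <= t.
c = np.array([0.0, 1.0, 1.0])          # objective: x2 + t
A = np.array([[-1.0, -1.0,  0.0],      # -(x1 + x2) <= -1
              [ 1.0,  0.0, -1.0],      #  x1 - t    <= 0
              [-1.0,  0.0, -1.0]])     # -x1 - t    <= 0
b = np.array([-1.0, 0.0, 0.0])

z = np.array([0.0, 1.0, 0.0])          # the optimum: x1 = 0, x2 = 1, t = 0
assert np.all(A @ z <= b + 1e-9)       # z is feasible
print('objective at z:', c @ z)        # prints: objective at z: 1.0
```

A modeling language lets the client state the original problem directly and performs this translation automatically.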

6 changes: 3 additions & 3 deletions html/gatys2015neuralstyle.html
@@ -43,8 +43,8 @@ <h1 id="indextitle"><a class="title" href="../index.html">Papers</a></h1>
 </p>
 
 <h1 id="a-neural-algorithm-of-artistic-style-gatys-2015httpsarxivorgabs150806576"><a href="https://arxiv.org/abs/1508.06576">A Neural Algorithm of Artistic Style (Gatys 2015)</a></h1>
-<p>Gatys et al. present a way to repurpose a deep network trained for image
-classification to redraw images in the style of some reference image.
+<p>Gatys et al. present a way to repurpose a deep network trained for image
+classification to redraw images in the style of some reference image.
 For example, this method can be used to render arbitrary photographs in the
 style of Van Gogh’s <em>The Starry Night</em>.</p>
 
@@ -75,7 +75,7 @@ <h3 id="the-algorithm">The Algorithm</h3>
 \end{array}
 \end{equation*} %]]></script>
 
-<p>where <script type="math/tex">\ell</script> is fixed, <script type="math/tex">w_l</script> is a weight that incorporates the the size
+<p>where <script type="math/tex">\ell</script> is fixed, <script type="math/tex">w_l</script> is a weight that incorporates the size
 of layer (see the paper for details), and <script type="math/tex">B</script> is the optimization variable.
 Gram matrices are used above to capture correlations between the features
 within each layer. The <script type="math/tex">\alpha</script>-weighted expression measures the similarity
Expand Down
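
The Gram matrix mentioned in this excerpt is just the matrix of inner products of a layer's flattened feature maps with themselves. A toy numpy sketch (shapes are illustrative, not the paper's):

```python
import numpy as np

# Toy illustration (not the paper's code): G = F F^T for one layer,
# where row i of F is filter i's response flattened over spatial positions.
rng = np.random.default_rng(0)
F = rng.standard_normal((4, 10))   # 4 filters, 10 spatial positions
G = F @ F.T                        # G[i, j]: correlation of filters i and j

assert G.shape == (4, 4)
assert np.allclose(G, G.T)                      # Gram matrices are symmetric
assert np.all(np.linalg.eigvalsh(G) >= -1e-9)   # and positive semidefinite
```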
4 changes: 2 additions & 2 deletions html/gubin1966projections.html
@@ -199,12 +199,12 @@ <h3 id="an-acceleration-scheme-for-alternating-projections">5. An Acceleration Scheme for Alternating Projections</h3>
 
 <p>John Von Neumann. The Geometry of Orthogonal Spaces. <em>Functional Operators
 (AM-22), Vol. II.</em> Princeton University Press, 1950. Reprint of lecture
-notes originally compiled in 1933. <a href="#fnref:1" class="reversefootnote">&#8617;</a></p>
+notes originally compiled in 1933. <a href="#fnref:1" class="reversefootnote">&#8617;</a></p>
 </li>
 <li id="fn:2">
 
 <p>Boyd, S., &amp; Dattorro, J. (2003). Alternating projections. <em>EE392o, Stanford
-University.</em> <a href="#fnref:2" class="reversefootnote">&#8617;</a></p>
+University.</em> <a href="#fnref:2" class="reversefootnote">&#8617;</a></p>
 </li>
 </ol>
 </div>
165 changes: 0 additions & 165 deletions html/gubin1996projections.html

This file was deleted.

10 changes: 5 additions & 5 deletions index.html
@@ -18,20 +18,20 @@ <h1 id="indextitle">Papers</h1>
 
 <ol>
 <li><a href="html/gubin1966projections.html">The Method of Projections for Finding the Common Point of Convex Sets <span class="year">(Gubin 1966)</span></a></li>
-<li><a href="html/lipton2017.html">YALMIP: A Toolbox for Modeling and Optimization in MATLAB <span class="year">(Lofberg 2004)</span></a></li>
+<li><a href="html/lofberg2004yalmip.html">YALMIP: A Toolbox for Modeling and Optimization in MATLAB <span class="year">(Lofberg 2004)</span></a></li>
 <li><a href="html/grant2008graph.html">Graph Implementations for Nonsmooth Convex Programs <span class="year">(Grant 2008)</span></a></li>
 <li><a href="html/chu2013socp-codegen.html">Code Generation for Embedded Second-Order Cone Programming <span class="year">(Chu 2013)</span></a></li>
 <li><a href="html/gatys2015neuralstyle.html">A Neural Algorithm of Artistic Style <span class="year">(Gatys 2015)</span></a></li>
 <li><a href="html/arora2016pmi-embeddings.html">A Latent Variable Model Approach to PMI-based Word Embeddings <span class="year">(Arora 2016)</span></a></li>
 <li><a href="html/diamond2016cvxpy.html">CVXPY: A Python-Embedded Modeling Language for Convex Optimization <span class="year">(Diamond 2016)</span></a></li>
 <li><a href="html/dunning2016jump.html">JuMP: A Modeling Language for Mathematical Optimization <span class="year">(Duninng 2016)</span></a></li>
-<li><a href="html/gubin1996projections.html">Train Faster, Generalize Better: Stability of Stochastic Gradient Descent <span class="year">(Hardt 2016)</span></a></li>
-<li><a href="html/lofberg2004yalmip.html">TensorFlow: A System for Large-Scale Machine Learning <span class="year">(Mongat 2016)</span></a></li>
+<li><a href="html/hardt2016sgd-stability.html">Train Faster, Generalize Better: Stability of Stochastic Gradient Descent <span class="year">(Hardt 2016)</span></a></li>
+<li><a href="html/monga2017tensorflow.html">TensorFlow: A System for Large-Scale Machine Learning <span class="year">(Mongat 2016)</span></a></li>
 <li><a href="pdf/odonoghue2016scs.pdf">Conic Optimization via Operator Splitting and Homogeneous Self-Dual Embedding <span class="year">(O'Donoghue 2016)</span></a></li>
 <li><a href="html/amos2017optnet.html">OptNet: Differentiable Optimization as a Layer in Neural Networks <span class="year">(Amos 2017)</span></a></li>
 <li><a href="html/arora2017sentence-embeddings.html">A Simple but Tough-to-Beat Baseline for Sentence Embeddings <span class="year">(Arora 2017)</span></a></li>
-<li><a href="html/hardt2016sgd-stability.html">Occupy the Cloud: Distributed Computing for the 99% <span class="year">(Jonas 2017)</span></a></li>
-<li><a href="html/jonas2017pywren.html">The Mythos of Model Interpretability <span class="year">(Lipton 2017)</span></a></li>
+<li><a href="html/jonas2017pywren.html">Occupy the Cloud: Distributed Computing for the 99% <span class="year">(Jonas 2017)</span></a></li>
+<li><a href="html/lipton2017.html">The Mythos of Model Interpretability <span class="year">(Lipton 2017)</span></a></li>
 </ol>
 </div>
 </body>
11 changes: 11 additions & 0 deletions reading_list.md
@@ -45,3 +45,14 @@ Recent Advances and Limitations
 * Lifted Neural Networks (El Ghaoui)
 * Minimizing Finite Sums with the Stochastic Average Gradient (Shmidt)
 * How to escape saddle points efficiently (jordan)
+* Local Minima and Convergence in Low-Rank Semidefinite Programming (Burer, Monteiro)
+* The non-convex Burer-Monteiro approach works on smooth semidefinite programs (Boumal)
+* Fast Exact Multiplication by the Hessian (Pearlmutter 1994)
+* The geometry of graphs and some of its algorithmic applications [LLR '94]
+* Expander Flows, Geometric Embeddings and Graph Partitioning [Arora 09]
+* Representation Tradeoffs for Hyperbolic Embeddings
+* improving distributional similarity with lessons learned from word embeddings
+* A La Carte Embedding:
+Cheap but Effective Induction of Semantic Feature Vectors
+* A CONVERGENCE
+ANALYSIS OF GRADIENT DESCENT FOR DEEP LINEAR NEURAL NETWORKS
2 changes: 1 addition & 1 deletion style.css
@@ -19,7 +19,7 @@ body {
   color: #333;
   margin: 0;
   padding: 0;
-  font-size: 18px;
+  font-size: 14px;
 }
 
 blockquote {
