Commit

use tf.tanh/tf.sigmoid and not tf.nn.*; revise docs/
dustinvtran committed Nov 13, 2016
1 parent 3b9cb36 commit 75dd23e
Showing 16 changed files with 29 additions and 42 deletions.
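The rename is cosmetic: tf.tanh / tf.nn.tanh and tf.sigmoid / tf.nn.sigmoid are two spellings of the same TensorFlow ops, so the commit standardizes on the shorter names without changing behavior. A quick sanity check, sketched under the assumption of a graph-mode (TF 0.x/1.x-era) session:

import numpy as np
import tensorflow as tf

x = tf.constant([-1.0, 0.0, 1.0])
with tf.Session() as sess:
    # Run both spellings of each op on the same input.
    a, b, c, d = sess.run([tf.tanh(x), tf.nn.tanh(x),
                           tf.sigmoid(x), tf.nn.sigmoid(x)])
print(np.allclose(a, b), np.allclose(c, d))  # True True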
4 changes: 2 additions & 2 deletions docs/README.md
@@ -1,12 +1,12 @@
# Edward website

The back end of our website depends on [pandoc](http://pandoc.org), [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/), and [sphinx](http://www.sphinx-doc.org). This lets us write stand-alone pages for documentation using LaTeX, with functional bibliographies. It also lets us auto-generate API documentation from the source code's docstrings. (We use NumPy's [docstring convention](https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt).)
The back end of our website depends on [pandoc](http://pandoc.org), [beautifulsoup](https://www.crummy.com/software/BeautifulSoup/), and [sphinx](http://www.sphinx-doc.org). This lets us write stand-alone pages for documentation using LaTeX, with functional bibliographies. It also lets us auto-generate API documentation from the source code's docstrings.

The front end of our website depends on [skeleton.css](http://getskeleton.com/), [Google Fonts](https://www.google.com/fonts), [highlight.js](https://highlightjs.org/), and [KaTeX](https://khan.github.io/KaTeX/).

## Editing the website

All stand-alone pages are under `docs/tex`. These compile to HTML pages in `docs`. Our custom pandoc html template is `docs/tex/template.pandoc`. Our APA styling for citations is in `docs/text/apa.csl`.
All stand-alone pages are under `docs/tex`. These compile to HTML pages. Our custom pandoc html template is `docs/tex/template.pandoc`. Our APA styling for citations is `docs/tex/apa.csl`.

## Building the website

9 changes: 5 additions & 4 deletions docs/tex/api/inference-compositionality.tex
@@ -32,10 +32,11 @@ \subsubsection{Hybrid algorithms}
inference_m.update()
\end{lstlisting}

In \texttt{data}, we include bindings of prior latent
variables to posterior latent variables. This performs
conditional inference, where only a subset of the posterior is
inferred while the rest are fixed using other inferences.
In \texttt{data}, we include bindings of prior latent variables
(\texttt{z} or \texttt{beta}) to posterior latent variables
(\texttt{qz} or \texttt{qbeta}). This performs conditional inference,
where only a subset of the posterior is inferred while the rest are
fixed using other inferences.
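As a self-contained illustration of such bindings, here is a minimal variational-EM-style sketch; the toy normal model, the variable names, and the choice of \texttt{ed.KLqp} for the E-step and \texttt{ed.MAP} for the M-step are assumptions made for this example, not part of the commit:

\begin{lstlisting}[language=Python]
import edward as ed
import numpy as np
import tensorflow as tf
from edward.models import Normal, PointMass

# Toy model: global latent beta, local latent z, observation x.
beta = Normal(mu=tf.zeros(1), sigma=tf.ones(1))
z = Normal(mu=beta, sigma=tf.ones(1))
x = Normal(mu=z, sigma=0.1 * tf.ones(1))

x_train = np.array([2.0], dtype=np.float32)

# Approximating families.
qbeta = PointMass(params=tf.Variable(tf.zeros(1)))
qz = Normal(mu=tf.Variable(tf.zeros(1)),
            sigma=tf.nn.softplus(tf.Variable(tf.zeros(1))))

# Conditional inference: each algorithm infers its own subset of the
# posterior while the other latent variable is fixed via the data binding.
inference_e = ed.KLqp({z: qz}, data={x: x_train, beta: qbeta})
inference_m = ed.MAP({beta: qbeta}, data={x: x_train, z: qz})
inference_e.initialize()
inference_m.initialize()

sess = ed.get_session()
sess.run(tf.initialize_all_variables())  # tf.global_variables_initializer() in later TF
for _ in range(1000):
  inference_e.update()
  inference_m.update()
\end{lstlisting}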

This extends to many algorithms: for example,
exact EM for exponential families;
4 changes: 2 additions & 2 deletions docs/tex/api/model-compositionality.tex
@@ -104,7 +104,7 @@ \subsubsection{Neural Networks}
whose output is $28*28$-dimensional. The output will be unconstrained,
parameterizing the logits of the Bernoulli likelihood.

In TensorFlow Slim, we write this model as follows:
With TensorFlow Slim, we write this model as follows:

\begin{lstlisting}[language=python]
from edward.models import Bernoulli, Normal
@@ -115,7 +115,7 @@ \subsubsection{Neural Networks}
x = Bernoulli(logits=slim.fully_connected(h, 28 * 28, activation_fn=None))
\end{lstlisting}

In Keras, we write this model as follows:
With Keras, we write this model as follows:

\begin{lstlisting}[language=python]
from edward.models import Bernoulli, Normal
2 changes: 1 addition & 1 deletion docs/tex/getting-started.tex
@@ -54,7 +54,7 @@ \subsubsection{Your first Edward program}
b_1 = Normal(mu=tf.zeros(1), sigma=tf.ones(1))

x = tf.convert_to_tensor(x_train, dtype=tf.float32)
y = Normal(mu=tf.matmul(tf.nn.tanh(tf.matmul(x, W_0) + b_0), W_1) + b_1,
y = Normal(mu=tf.matmul(tf.tanh(tf.matmul(x, W_0) + b_0), W_1) + b_1,
sigma=0.1)
\end{lstlisting}

6 changes: 0 additions & 6 deletions docs/tex/tutorials/bayesian-linear-regression.tex
@@ -50,11 +50,5 @@ \subsection{Bayesian Linear Regression}
An example script is available
\href{https://github.com/blei-lab/edward/blob/master/examples/bayesian_linear_regression.py}
{here}.
An example of the model implemented as a class object in TensorFlow is
available
\href{https://github.com/blei-lab/edward/blob/master/examples/tf_bayesian_linear_regression.py}
{here}, with visualization available
\href{https://github.com/blei-lab/edward/blob/master/examples/tf_bayesian_linear_regression_plot.py}
{here}.

\subsubsection{References}\label{references}
17 changes: 7 additions & 10 deletions docs/tex/tutorials/bayesian-neural-network.tex
@@ -5,7 +5,9 @@ \subsection{Bayesian neural network}
A Bayesian neural network is a neural network with a prior
distribution on its weights \citep{neal2012bayesian}.

Define the likelihood of an observation $(\mathbf{x}_n, y_n)$
Consider a data set $\{(\mathbf{x}_n, y_n)\}$, where each data point
comprises features $\mathbf{x}_n\in\mathbb{R}^D$ and output
$y_n\in\mathbb{R}$. Define the likelihood as
\begin{align*}
p(y_n \mid \mathbf{z}, \mathbf{x}_n, \sigma^2)
&=
@@ -22,13 +24,12 @@ \subsection{Bayesian neural network}
\text{Normal}(\mathbf{z} \mid \mathbf{0}, I).
\end{align*}

Let's build the model in Edward. We
instantiate a 3-layer Bayesian neural network with $\tanh$
nonlinearities.
Let's build the model in Edward. We define a 3-layer Bayesian neural
network with $\tanh$ nonlinearities.
\begin{lstlisting}[language=Python]
def neural_network(x):
h = tf.nn.tanh(tf.matmul(x, W_0) + b_0)
h = tf.nn.tanh(tf.matmul(h, W_1) + b_1)
h = tf.tanh(tf.matmul(x, W_0) + b_0)
h = tf.tanh(tf.matmul(h, W_1) + b_1)
h = tf.matmul(h, W_2) + b_2
return tf.reshape(h, [-1])

@@ -58,9 +59,5 @@ \subsection{Bayesian neural network}
Source code is available
\href{https://github.com/blei-lab/edward/blob/master/examples/bayesian_nn.py}
{here}.
An example of the model implemented as a class object in TensorFlow is
available
\href{https://github.com/blei-lab/edward/blob/master/examples/tf_bayesian_nn.py}
{here}.

\subsubsection{References}\label{references}
6 changes: 1 addition & 5 deletions docs/tex/tutorials/inference.tex
@@ -44,11 +44,7 @@ \subsubsection{Inferring the posterior}
solution. Thus, calculating the posterior means \emph{approximating} the
posterior.

There are many approaches to posterior inference, which can be unwieldy to
manage or even conceptualize. Edward uses classes and class
inheritance to provide a hierarchy of inference methods, enabling fast
experimentation on top of existing methods. For details on the inference
hierarchy in Edward, see the
For details on how to specify inference in Edward, see the
\href{/api/inference}{inference API}. We describe several examples in
detail in the other inference \href{/tutorials/}{tutorials}.

3 changes: 1 addition & 2 deletions docs/tex/tutorials/variational-inference.tex
@@ -35,8 +35,7 @@ \subsection{Variational inference}
models of data to best approximate the true process.

For details on the variational inference base class defined in Edward,
or the design of variational models, see the
\href{/api/inference}{inference API}.
see the \href{/api/inference}{inference API}.
For examples of specific variational inference algorithms in
Edward, see the other inference \href{/tutorials/}{tutorials}.

4 changes: 2 additions & 2 deletions examples/bayesian_nn.py
@@ -28,8 +28,8 @@ def build_toy_dataset(N=40, noise_std=0.1):


def neural_network(x):
h = tf.nn.tanh(tf.matmul(x, W_0) + b_0)
h = tf.nn.tanh(tf.matmul(h, W_1) + b_1)
h = tf.tanh(tf.matmul(x, W_0) + b_0)
h = tf.tanh(tf.matmul(h, W_1) + b_1)
h = tf.matmul(h, W_2) + b_2
return tf.reshape(h, [-1])

2 changes: 1 addition & 1 deletion examples/beta_bernoulli_map.py
@@ -21,7 +21,7 @@
x = Bernoulli(p=tf.ones(10) * p)

# INFERENCE
qp_params = tf.nn.sigmoid(tf.Variable(tf.random_normal([])))
qp_params = tf.sigmoid(tf.Variable(tf.random_normal([])))
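# tf.sigmoid (the same op as tf.nn.sigmoid) squashes the unconstrained
# variable into (0, 1), so the point mass is a valid probability.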
qp = PointMass(params=qp_params)

data = {x: x_data}
2 changes: 1 addition & 1 deletion examples/getting_started_example.py
@@ -27,7 +27,7 @@ def build_toy_dataset(N=50, noise_std=0.1):


def neural_network(x, W_0, W_1, b_0, b_1):
h = tf.nn.tanh(tf.matmul(x, W_0) + b_0)
h = tf.tanh(tf.matmul(x, W_0) + b_0)
h = tf.matmul(h, W_1) + b_1
return tf.reshape(h, [-1])

4 changes: 2 additions & 2 deletions examples/pp_stochastic_control_flow.py
@@ -23,15 +23,15 @@ def geometric(p):
i = tf.constant(0)

def cond(i):
return tf.equal(tf.squeeze(Bernoulli(p=p)), tf.constant(1))
return tf.equal(Bernoulli(p=p), tf.constant(1))

def body(i):
return i + 1

return tf.while_loop(cond, body, loop_vars=[i])


p = tf.constant([0.9])
p = tf.constant(0.9)
geom = geometric(p)

sess = tf.Session()
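# Each sess.run(geom) re-executes the while loop with fresh Bernoulli
# draws, e.g. samples = [sess.run(geom) for _ in range(5)].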
2 changes: 1 addition & 1 deletion examples/tf_bayesian_nn.py
@@ -42,7 +42,7 @@ class BayesianNN:
Standard deviation of the normal prior on weights; aka L2
regularization parameter, ridge penalty, scale parameter.
"""
def __init__(self, layer_sizes, nonlinearity=tf.nn.tanh,
def __init__(self, layer_sizes, nonlinearity=tf.tanh,
lik_std=0.1, prior_std=1.0):
self.layer_sizes = layer_sizes
self.nonlinearity = nonlinearity
2 changes: 1 addition & 1 deletion examples/tf_bayesian_nn_analytic_kl.py
@@ -43,7 +43,7 @@ class BayesianNN:
Standard deviation of the normal prior on weights; aka L2
regularization parameter, ridge penalty, scale parameter.
"""
def __init__(self, layer_sizes, nonlinearity=tf.nn.tanh,
def __init__(self, layer_sizes, nonlinearity=tf.tanh,
lik_std=0.1, prior_std=1.0):
self.layer_sizes = layer_sizes
self.nonlinearity = nonlinearity
2 changes: 1 addition & 1 deletion examples/tf_bayesian_nn_separate_weights.py
@@ -42,7 +42,7 @@ class BayesianNN:
Standard deviation of the normal prior on weights; aka L2
regularization parameter, ridge penalty, scale parameter.
"""
def __init__(self, layer_sizes, nonlinearity=tf.nn.tanh,
def __init__(self, layer_sizes, nonlinearity=tf.tanh,
lik_std=0.1, prior_std=1.0):
self.layer_sizes = layer_sizes
self.nonlinearity = nonlinearity
2 changes: 1 addition & 1 deletion examples/tf_bernoulli.py
@@ -21,7 +21,7 @@ def log_prob(self, xs, zs):
ed.set_seed(42)
model = BernoulliPosterior()

qp_p = tf.nn.sigmoid(tf.Variable(tf.random_normal([])))
qp_p = tf.sigmoid(tf.Variable(tf.random_normal([])))
qp = Bernoulli(p=qp_p)

inference = ed.KLqp({'p': qp}, model_wrapper=model)
