
Updating Geometry
kdavis-mozilla committed Dec 2, 2019
1 parent 7d96540 commit f75b9cc
Showing 2 changed files with 10 additions and 26 deletions.
36 changes: 10 additions & 26 deletions doc/Geometry.rst
@@ -7,7 +7,7 @@ n_steps
-------
The network views each speech sample as a sequence of time-slices :math:`x^{(i)}_t` of
length :math:`T^{(i)}`. As the speech samples vary in length, we know that :math:`T^{(i)}`
-need not equal :math:`T^{(j)}` for :math:`i \ne j`. For each batch, BRNN in TensorFlow needs
+need not equal :math:`T^{(j)}` for :math:`i \ne j`. For each batch, RNN in TensorFlow needs
to know ``n_steps`` which is the maximum :math:`T^{(i)}` for the batch.
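
A minimal sketch of how ``n_steps`` could be derived for a batch; the variable names here (``batch_features``) are illustrative, not taken from the codebase:

.. code:: python

    import numpy as np

    # Hypothetical batch: each element is a (T_i, n_input) array of MFCC features,
    # where the number of time-slices T_i varies from sample to sample.
    batch_features = [np.zeros((T_i, 26)) for T_i in (211, 180, 305)]

    # n_steps is the maximum T_i within the batch.
    n_steps = max(f.shape[0] for f in batch_features)  # -> 305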

n_input
@@ -17,14 +17,14 @@ time-slice of the speech sample. We will make the number of MFCC features
dependent upon the sample rate of the data set. Generically, if the sample rate
is 8kHz we use 13 features. If the sample rate is 16kHz we use 26 features...
We capture the dimension of these vectors, equivalently the number of MFCC
-features, in the variable ``n_input``.
+features, in the variable ``n_input``. By default ``n_input`` is 26.
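
As a sketch of the rule described above (13 MFCC features at 8kHz, 26 at 16kHz); the helper below is illustrative only:

.. code:: python

    def n_input_for(sample_rate):
        # 8 kHz audio -> 13 MFCC features, 16 kHz audio -> 26 MFCC features.
        return 13 if sample_rate == 8000 else 26

    n_input = n_input_for(16000)  # default: 26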

n_context
---------
-As previously mentioned, the BRNN is not simply fed the MFCC features of a given
-time-slice. It is fed, in addition, a context of :math:`C \in \{5, 7, 9\}` frames on
+As previously mentioned, the RNN is not simply fed the MFCC features of a given
+time-slice. It is fed, in addition, a context of :math:`C` frames on
either side of the frame in question. The number of frames in this context is
-captured in the variable ``n_context``.
+captured in the variable ``n_context``. By default ``n_context`` is 9.
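
A sketch of how a window of ``n_context`` frames on either side of frame ``t`` could be assembled; zero-padding at the edges is an assumption made here for illustration, not necessarily what the code does:

.. code:: python

    import numpy as np

    n_input, n_context = 26, 9
    features = np.random.rand(100, n_input)      # 100 time-slices of MFCC features

    # Zero-pad so the first and last frames also have a full context.
    padded = np.pad(features, ((n_context, n_context), (0, 0)), mode="constant")

    t = 0                                        # frame in question
    window = padded[t : t + 2 * n_context + 1]   # shape: (2*n_context + 1, n_input)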

Next we will introduce constants that specify the geometry of some of the
non-recurrent layers of the network. We do this by simply specifying the number
@@ -36,20 +36,13 @@ n_hidden_1, n_hidden_2, n_hidden_5
of units in the second, and ``n_hidden_5`` the number in the fifth. We haven't
forgotten about the third or sixth layer. We will define their unit count below.
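
For illustration only, the three unit counts might all be set from a single width hyperparameter; the value 2048 below is a placeholder, not a documented default:

.. code:: python

    n_hidden = 2048            # placeholder width, not a documented default
    n_hidden_1 = n_hidden      # units in the first layer
    n_hidden_2 = n_hidden      # units in the second layer
    n_hidden_5 = n_hidden      # units in the fifth layer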

-A LSTM BRNN consists of a pair of LSTM RNN's.
-One LSTM RNN that works "forward in time":
+The RNN consists of an LSTM RNN that works "forward in time":

.. image:: ../images/LSTM3-chain.png
:alt: Image shows a diagram of a recurrent neural network with LSTM cells, with arrows depicting the flow of data from earlier time steps to later timesteps within the RNN.

-and a second LSTM RNN that works "backwards in time":
-
-.. image:: ../images/LSTM3-chain-backwards.png
-:alt: Image shows a diagram of a recurrent neural network with LSTM cells, this time with data flowing from later time steps to earlier timesteps within the RNN.

The dimension of the cell state, the upper line connecting subsequent LSTM units,
-is independent of the input dimension and the same for both the forward and
-backward LSTM RNN.
+is independent of the input dimension.
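
A minimal sketch of a forward-in-time LSTM layer using the ``tf.keras`` API (chosen here for brevity; not necessarily the API used in the codebase), showing that the cell-state size ``n_cell_dim`` is independent of the input dimension:

.. code:: python

    import tensorflow as tf

    n_cell_dim = 2048                             # placeholder cell-state size
    batch_size, n_steps, input_dim = 1, 16, 512   # input_dim is arbitrary here

    # A unidirectional LSTM: each step only sees inputs from earlier time steps.
    lstm = tf.keras.layers.LSTM(n_cell_dim, return_sequences=True)

    x = tf.zeros([batch_size, n_steps, input_dim])
    y = lstm(x)                                   # shape: (batch_size, n_steps, n_cell_dim)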

n_cell_dim
----------
@@ -63,24 +63,15 @@ determined by ``n_cell_dim`` as follows

.. code:: python
-n_hidden_3 = 2 * n_cell_dim
+n_hidden_3 = n_cell_dim
-n_character
+n_hidden_6
-----------
-The variable ``n_character`` will hold the number of characters in the target
+The variable ``n_hidden_6`` will hold the number of characters in the target
language plus one, for the :math:`blank`.
For English it is the cardinality of the set

.. math::
\{a,b,c, . . . , z, space, apostrophe, blank\}
we referred to earlier.
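
A sketch of the count for English, assuming the 26 letters plus space and apostrophe, with one extra unit for the :math:`blank`:

.. code:: python

    import string

    # a-z, space and apostrophe, as in the set above.
    alphabet = list(string.ascii_lowercase) + [" ", "'"]

    # One additional output unit for the blank label.
    n_hidden_6 = len(alphabet) + 1   # -> 29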

-n_hidden_6
-----------
-The number of units in the sixth layer is determined by ``n_character`` as follows:
-
-.. code:: python
-n_hidden_6 = n_character
Binary file removed images/LSTM3-chain-backwards.png
Binary file not shown.
