Commit 2dc4418 — Merge pull request #67 from NVIDIA/r0.2-18.05 ("R0.2 18.05")

This repository was archived by the owner on Aug 3, 2021, and is now read-only.

okuchaiev committed Apr 26, 2018 · 2 parents: ee58975 + 1473663

Showing 151 changed files with 4,947 additions and 3,362 deletions.
2 changes: 1 addition & 1 deletion docs/html/.buildinfo

```diff
@@ -1,4 +1,4 @@
 # Sphinx build info version 1
 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 7f389a56c917b2054b70b2ce14f8c266
+config: 1ee9542b8ec513ff756273e948aebb11
 tags: 645f666f9bcd5a90fca523b33c5a78b7
```
204 changes: 156 additions & 48 deletions docs/html/_modules/data/data_layer.html

Large diffs are not rendered by default.

279 changes: 153 additions & 126 deletions docs/html/_modules/data/speech2text.html

Large diffs are not rendered by default.

4 changes: 4 additions & 0 deletions docs/html/_modules/data/speech_utils.html

The first hunk adds a favicon link to the page head; the second adds Python 2/3 compatibility imports to the rendered source of data.speech_utils:

```diff
@@ -13,6 +13,8 @@
+<link rel="shortcut icon" href="../../_static/favicon.ico"/>
+
@@ -151,6 +153,8 @@
 # Copyright (c) 2018 NVIDIA Corporation
 from __future__ import absolute_import, division, print_function
+from __future__ import unicode_literals
+from six.moves import range

 import scipy.io.wavfile as wave
 import resampy as rs
```
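Both added imports are Python 2/3 compatibility shims. As a minimal standalone illustration (not repository code) of what `from six.moves import range` buys: on Python 2 it maps to the lazy `xrange`, on Python 3 to the builtin `range`, so large loops behave identically and never materialize a full list:

```python
# Not from this commit: shows six.moves.range iterating lazily on both
# Python 2 and Python 3.
from six.moves import range

total = 0
for i in range(10 ** 6):  # lazy iteration on both interpreters
    total += i
print(total)  # 499999500000
```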
66 changes: 27 additions & 39 deletions docs/html/_modules/data/text2text.html

Large diffs are not rendered by default.

87 changes: 50 additions & 37 deletions docs/html/_modules/decoders/decoder.html

Large diffs are not rendered by default.

112 changes: 84 additions & 28 deletions docs/html/_modules/decoders/fc_decoder.html

Large diffs are not rendered by default.

57 changes: 32 additions & 25 deletions docs/html/_modules/decoders/rnn_decoders.html

Large diffs are not rendered by default.

71 changes: 52 additions & 19 deletions docs/html/_modules/decoders/transformer_decoders.html

Large diffs are not rendered by default.

88 changes: 74 additions & 14 deletions docs/html/_modules/encoders/ds2_encoder.html

Large diffs are not rendered by default.

81 changes: 50 additions & 31 deletions docs/html/_modules/encoders/encoder.html

Large diffs are not rendered by default.

63 changes: 33 additions & 30 deletions docs/html/_modules/encoders/rnn_encoders.html

Large diffs are not rendered by default.

43 changes: 31 additions & 12 deletions docs/html/_modules/encoders/transformer_encoders.html

As in the other pages, a favicon link is added to the head:

```diff
@@ -13,6 +13,8 @@
+<link rel="shortcut icon" href="../../_static/favicon.ico"/>
+
```

The remaining hunks change the rendered source of encoders.transformer_encoders ("Encoders based on Transformers arch from https://arxiv.org/abs/1706.03762"). First, the same compatibility imports:

```diff
@@ -154,6 +156,9 @@
 Encoders based on Transformers arch from https://arxiv.org/abs/1706.03762
 """
 from __future__ import absolute_import, division, print_function
+from __future__ import unicode_literals
+from six.moves import range
+
 import tensorflow as tf

 from .encoder import Encoder
```
TransformerEncoder gains an optional "encoder_norm_type" parameter, its constructor now takes a `model` argument that is forwarded to the base `Encoder`, and the `_batch_size` attribute is removed:

```diff
@@ -178,9 +183,11 @@
   def get_optional_params():
     return dict(Encoder.get_optional_params(), **{
       'encoder_drop_prob': float,
+      "encoder_norm_type": str,
     })

   def __init__(self, params,
+               model,
                name="transformer_encoder",
                mode='train'):
     """
@@ -194,13 +201,13 @@
     ... add any cell-specific parameters here as well
     """
     super(TransformerEncoder, self).__init__(
-      params, name=name, mode=mode,
+      params, model, name=name, mode=mode,
     )

     self._drop_prob = self.params.get("encoder_drop_prob", 0.0)
+    self._norm_type = self.params.get("encoder_norm_type", 'layer_norm')
     if self._mode != 'train':
       self._drop_prob = 0.0
-    self._batch_size = self.params['batch_size_per_gpu']

   def _encode(self, input_dict):
     ffn_inner_dim = self.params["ffn_inner_dim"]
```
In `_encode`, dropout is now gated on the mode, the input keys are renamed (src_inputs → src_sequence, src_lengths → src_length), the new `training` and `norm_type` arguments are threaded through the normalization helpers, and the output keys are simplified (encoder_outputs → outputs, encoder_state → state):

```diff
@@ -211,15 +218,23 @@
                                    self.params['src_vocab_size'],
                                    self.params['d_model']],
                                   dtype=self.params['dtype'])
+    if self._mode == 'train':
+      training = True
+      drop_prob = self._drop_prob
+    else:
+      training = False
+      drop_prob = 0.0

     embedded_inputs_with_pos, bias = embed_and_maybe_add_position_signal(
-      inpt=input_dict['src_inputs'],
+      inpt=input_dict['src_sequence'],
       emb_W=enc_emb_w,
       num_timescales=int(d_model/2),
       heads=attention_heads)

     x = dropout_normalize_add_NTC(x=embedded_inputs_with_pos,
-                                  drop_prob=self._drop_prob)
+                                  drop_prob=drop_prob,
+                                  training=training,
+                                  norm_type=self._norm_type)

     for block_ind in range(self.params['encoder_layers']):
       with tf.variable_scope("EncoderBlock_{}".format(block_ind)):
@@ -230,17 +245,21 @@
           additional_bias=bias)

         ff_input = dropout_normalize_add_NTC(x=att_out, residual_x=x,
-                                             drop_prob=self._drop_prob)
+                                             drop_prob=drop_prob,
+                                             training=training,
+                                             norm_type=self._norm_type)

-        x = ffn_and_layer_norm(inpt=ff_input,
+        x = ffn_and_layer_norm(inputs=ff_input,
                                inner_dim=ffn_inner_dim,
                                resulting_dim=d_model,
-                               drop_prob=self._drop_prob)
-    return {'encoder_outputs': x,
-            'encoder_state': None,
-            'src_lengths': input_dict['src_lengths'],
+                               drop_prob=drop_prob,
+                               training=training,
+                               norm_type=self._norm_type)
+    return {'outputs': x,
+            'state': None,
+            'src_lengths': input_dict['src_length'],
             'enc_emb_w': enc_emb_w,
-            'encoder_input': input_dict['src_inputs']}
+            'encoder_input': input_dict['src_sequence']}
```
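The body of `dropout_normalize_add_NTC` is not part of this diff. A minimal sketch of what such a dropout → residual-add → normalize helper over [batch, time, channels] inputs could look like, inferred only from the call sites above and written against the TF 1.x API (not the repository's implementation):

```python
import tensorflow as tf

def dropout_normalize_add_NTC(x, drop_prob=0.0, residual_x=None,
                              training=False, norm_type='layer_norm'):
    """Sketch: dropout (active in training only), optional residual add,
    then layer normalization. Not the repository's code."""
    y = tf.layers.dropout(x, rate=drop_prob, training=training)
    if residual_x is not None:
        y = y + residual_x  # residual connection around the sub-layer
    if norm_type == 'layer_norm':
        # Normalize over the channel axis, as is standard for Transformers.
        y = tf.contrib.layers.layer_norm(y, begin_norm_axis=-1,
                                         begin_params_axis=-1)
    return y
```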
3 changes: 2 additions & 1 deletion docs/html/_modules/index.html

The module index ("All modules for which code is available") gains the favicon link and loses its entry for utils.model_builders:

```diff
@@ -13,6 +13,8 @@
+<link rel="shortcut icon" href="../_static/favicon.ico"/>
+
@@ -181,7 +183,6 @@
 <li><a href="parts/utils.html">parts.utils</a></li>
 <li><a href="utils/funcs.html">utils.funcs</a></li>
 <li><a href="utils/hooks.html">utils.hooks</a></li>
-<li><a href="utils/model_builders.html">utils.model_builders</a></li>
 <li><a href="utils/utils.html">utils.utils</a></li>
 </ul>
```