Mode docs
sgugger committed Dec 12, 2018
1 parent d22bc50 commit 8664a86
Showing 7 changed files with 781 additions and 637 deletions.
327 changes: 266 additions & 61 deletions docs/torch_core.html


123 changes: 57 additions & 66 deletions docs/train.html
@@ -63,13 +63,6 @@ <h4 id="fit_one_cycle"><code>fit_one_cycle</code><a href="https://github.com/fas
</div>
</div>

</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>Fit a model with 1cycle training. See <a href="/callbacks.one_cycle.html#OneCycleScheduler"><code>OneCycleScheduler</code></a> for details.</p>

</div>
</div>
</div>
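The 1cycle policy warms the learning rate up to `lr_max` and then anneals it back down. As a rough, dependency-free sketch of such a schedule (the function name and the cosine interpolation are illustrative assumptions; fastai's actual `OneCycleScheduler` also schedules momentum):

```python
import math

def one_cycle_lrs(lr_max, n_iter, pct_start=0.3, div_factor=25.0, final_div=1e4):
    """Hypothetical 1cycle schedule: cosine ramp from lr_max/div_factor up to
    lr_max over the first pct_start of iterations, then cosine annealing down
    to lr_max/final_div."""
    lrs, n_up = [], int(n_iter * pct_start)
    for i in range(n_iter):
        if i < n_up:  # warm-up phase
            pct, lo, hi = i / max(1, n_up), lr_max / div_factor, lr_max
        else:         # annealing phase
            pct = (i - n_up) / max(1, n_iter - n_up - 1)
            lo, hi = lr_max, lr_max / final_div
        # cosine interpolation from lo (at pct=0) to hi (at pct=1)
        lrs.append(hi + (lo - hi) * (1 + math.cos(math.pi * pct)) / 2)
    return lrs

lrs = one_cycle_lrs(0.01, 100)  # peaks at lr_max, ends near zero
```

The up-then-down shape is the whole point: a brief high-LR phase acts as a regularizer while the final low rates let the loss settle.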
<div class="cell border-box-sizing code_cell rendered">

@@ -80,9 +73,9 @@ <h4 id="fit_one_cycle"><code>fit_one_cycle</code><a href="https://github.com/fas


<div class="output_markdown rendered_html output_subarea ">
<h4 id="lr_find"><code>lr_find</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L23" class="source_link">[source]</a></h4><blockquote><p><code>lr_find</code>(<code>learn</code>:<a href="/basic_train.html#Learner"><code>Learner</code></a>, <code>start_lr</code>:<code>Floats</code>=<code>1e-07</code>, <code>end_lr</code>:<code>Floats</code>=<code>10</code>, <code>num_it</code>:<code>int</code>=<code>100</code>, <code>stop_div</code>:<code>bool</code>=<code>True</code>, <code>kwargs</code>:<code>Any</code>)</p>
<h4 id="one_cycle_scheduler"><code>one_cycle_scheduler</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L9" class="source_link">[source]</a></h4><blockquote><p><code>one_cycle_scheduler</code>(<code>lr_max</code>:<code>float</code>, <code>kwargs</code>:<code>Any</code>) → <a href="/callbacks.one_cycle.html#OneCycleScheduler"><code>OneCycleScheduler</code></a></p>
</blockquote>
<p>Explore lr from <code>start_lr</code> to <code>end_lr</code> over <code>num_it</code> iterations in <code>learn</code>. If <code>stop_div</code>, stops when loss diverges.</p>
<p>Instantiate a <a href="/callbacks.one_cycle.html#OneCycleScheduler"><code>OneCycleScheduler</code></a> with <code>lr_max</code>.</p>

</div>

@@ -94,7 +87,7 @@ <h4 id="lr_find"><code>lr_find</code><a href="https://github.com/fastai/fastai/b
</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>See <a href="/callbacks.lr_finder.html#LRFinder"><code>LRFinder</code></a> for details.</p>
<p>See <a href="/callbacks.one_cycle.html#OneCycleScheduler"><code>OneCycleScheduler</code></a> for details.</p>

</div>
</div>
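`lr_find` sweeps the learning rate geometrically from `start_lr` to `end_lr` while recording the loss at each step. A sketch of just the sweep itself (`lr_sweep` is a hypothetical helper; the real implementation runs actual training iterations and, with `stop_div`, stops early once the loss diverges):

```python
def lr_sweep(start_lr=1e-7, end_lr=10.0, num_it=100):
    """Hypothetical helper: the i-th mini-batch uses start_lr multiplied by a
    constant ratio, so the rates are evenly spaced on a log scale."""
    ratio = (end_lr / start_lr) ** (1 / (num_it - 1))
    return [start_lr * ratio ** i for i in range(num_it)]

lrs = lr_sweep()  # 1e-07 ... 10.0, log-spaced
```

Plotting loss against these log-spaced rates is what makes the characteristic lr-finder curve readable.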
@@ -108,9 +101,9 @@ <h4 id="lr_find"><code>lr_find</code><a href="https://github.com/fastai/fastai/b


<div class="output_markdown rendered_html output_subarea ">
<h4 id="to_fp16"><code>to_fp16</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L33" class="source_link">[source]</a></h4><blockquote><p><code>to_fp16</code>(<code>learn</code>:<a href="/basic_train.html#Learner"><code>Learner</code></a>, <code>loss_scale</code>:<code>float</code>=<code>512.0</code>, <code>flat_master</code>:<code>bool</code>=<code>False</code>) → <a href="/basic_train.html#Learner"><code>Learner</code></a></p>
<h4 id="lr_find"><code>lr_find</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L23" class="source_link">[source]</a></h4><blockquote><p><code>lr_find</code>(<code>learn</code>:<a href="/basic_train.html#Learner"><code>Learner</code></a>, <code>start_lr</code>:<code>Floats</code>=<code>1e-07</code>, <code>end_lr</code>:<code>Floats</code>=<code>10</code>, <code>num_it</code>:<code>int</code>=<code>100</code>, <code>stop_div</code>:<code>bool</code>=<code>True</code>, <code>kwargs</code>:<code>Any</code>)</p>
</blockquote>
<p>Put <code>learn</code> in FP16 precision mode.</p>
<p>Explore lr from <code>start_lr</code> to <code>end_lr</code> over <code>num_it</code> iterations in <code>learn</code>. If <code>stop_div</code>, stops when loss diverges.</p>

</div>

@@ -122,7 +115,7 @@ <h4 id="to_fp16"><code>to_fp16</code><a href="https://github.com/fastai/fastai/b
</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>See <a href="/callbacks.fp16.html#MixedPrecision"><code>MixedPrecision</code></a> for details.</p>
<p>See <a href="/callbacks.lr_finder.html#LRFinder"><code>LRFinder</code></a> for details.</p>

</div>
</div>
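Half precision can underflow small gradients to zero, which is why `to_fp16` takes a `loss_scale`. A small numpy illustration of the idea (numpy's `float16` stands in for CUDA half tensors here; the FP32 master-weight copy and `flat_master` are not shown):

```python
import numpy as np

loss_scale = 512.0  # to_fp16's default
tiny_grad = 1e-8    # a gradient too small for FP16

# Unscaled, the gradient underflows to zero in half precision...
assert np.float16(tiny_grad) == 0.0

# ...but scaling the loss (hence the gradients) by loss_scale keeps it alive,
scaled = np.float16(tiny_grad * loss_scale)
assert scaled != 0.0

# and dividing by loss_scale in FP32 recovers it for the master weights.
restored = float(scaled) / loss_scale  # ≈ 1e-08
```

The scale is applied before `backward()` and undone before the optimizer step, so the update itself is unchanged.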
@@ -136,9 +129,9 @@ <h4 id="to_fp16"><code>to_fp16</code><a href="https://github.com/fastai/fastai/b


<div class="output_markdown rendered_html output_subarea ">
<h4 id="mixup"><code>mixup</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L40" class="source_link">[source]</a></h4><blockquote><p><code>mixup</code>(<code>learn</code>:<a href="/basic_train.html#Learner"><code>Learner</code></a>, <code>alpha</code>:<code>float</code>=<code>0.4</code>, <code>stack_x</code>:<code>bool</code>=<code>False</code>, <code>stack_y</code>:<code>bool</code>=<code>True</code>) → <a href="/basic_train.html#Learner"><code>Learner</code></a></p>
<h4 id="to_fp16"><code>to_fp16</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L33" class="source_link">[source]</a></h4><blockquote><p><code>to_fp16</code>(<code>learn</code>:<a href="/basic_train.html#Learner"><code>Learner</code></a>, <code>loss_scale</code>:<code>float</code>=<code>512.0</code>, <code>flat_master</code>:<code>bool</code>=<code>False</code>) → <a href="/basic_train.html#Learner"><code>Learner</code></a></p>
</blockquote>
<p>Add mixup <a href="https://arxiv.org/abs/1710.09412">https://arxiv.org/abs/1710.09412</a> to <code>learn</code>.</p>
<p>Put <code>learn</code> in FP16 precision mode.</p>

</div>

@@ -150,14 +143,7 @@ <h4 id="mixup"><code>mixup</code><a href="https://github.com/fastai/fastai/blob/
</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>See <a href="/callbacks.mixup.html#MixUpCallback"><code>MixUpCallback</code></a> for more details.</p>

</div>
</div>
</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>A final extension method comes from the <code>tta</code> module.</p>
<p>See <a href="/callbacks.fp16.html#MixedPrecision"><code>MixedPrecision</code></a> for details.</p>

</div>
</div>
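mixup trains on convex combinations of pairs of examples and their labels. A numpy sketch of the classic formulation from the paper (the function name is an illustrative assumption; fastai's `stack_y=True` variant instead stores `(y, y[perm], lam)` and does the mixing inside the loss function):

```python
import numpy as np

def mixup_batch(x, y, alpha=0.4, seed=0):
    """Blend each example (and its one-hot label) with a randomly chosen
    partner, using mixing weights drawn from Beta(alpha, alpha)."""
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha, size=len(x))   # one weight per example
    perm = rng.permutation(len(x))              # random partner for each
    lam_x = lam.reshape(-1, *([1] * (x.ndim - 1)))  # broadcast over features
    mixed_x = lam_x * x + (1 - lam_x) * x[perm]
    mixed_y = lam[:, None] * y + (1 - lam[:, None]) * y[perm]
    return mixed_x, mixed_y
```

With `alpha=0.4` the Beta distribution concentrates near 0 and 1, so most mixed examples stay close to one of the two originals.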
@@ -171,9 +157,9 @@ <h4 id="mixup"><code>mixup</code><a href="https://github.com/fastai/fastai/blob/


<div class="output_markdown rendered_html output_subarea ">
<h4 id="TTA"><code>TTA</code><a href="https://github.com/fastai/fastai/blob/master/fastai/vision/tta.py#L32" class="source_link">[source]</a></h4><blockquote><p><code>TTA</code>(<code>learn</code>:<a href="/basic_train.html#Learner"><code>Learner</code></a>, <code>beta</code>:<code>float</code>=<code>0.4</code>, <code>scale</code>:<code>float</code>=<code>1.35</code>, <code>ds_type</code>:<a href="/basic_data.html#DatasetType"><code>DatasetType</code></a>=<code>&lt;DatasetType.Valid: 2&gt;</code>, <code>with_loss</code>:<code>bool</code>=<code>False</code>) → <code>Tensors</code></p>
<h4 id="mixup"><code>mixup</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L40" class="source_link">[source]</a></h4><blockquote><p><code>mixup</code>(<code>learn</code>:<a href="/basic_train.html#Learner"><code>Learner</code></a>, <code>alpha</code>:<code>float</code>=<code>0.4</code>, <code>stack_x</code>:<code>bool</code>=<code>False</code>, <code>stack_y</code>:<code>bool</code>=<code>True</code>) → <a href="/basic_train.html#Learner"><code>Learner</code></a></p>
</blockquote>
<p>Applies TTA to predict on <code>ds_type</code> dataset.</p>
<p>Add mixup <a href="https://arxiv.org/abs/1710.09412">https://arxiv.org/abs/1710.09412</a> to <code>learn</code>.</p>

</div>

@@ -185,14 +171,20 @@ <h4 id="TTA"><code>TTA</code><a href="https://github.com/fastai/fastai/blob/mast
</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>Applies Test Time Augmentation to <code>learn</code> on the dataset <code>ds_type</code>. We take the average of our regular predictions (with a weight <code>beta</code>) with the average of predictions obtained through augmented versions of the training set (with a weight <code>1-beta</code>). The transforms decided for the training set are applied with a few changes: <code>scale</code> controls the scale for zoom (which isn't random), the cropping isn't random but we make sure to get the four corners of the image, and flipping isn't random but is applied once to each of those corner images (so that makes 8 augmented versions in total).</p>
<p>See <a href="/callbacks.mixup.html#MixUpCallback"><code>MixUpCallback</code></a> for more details.</p>

</div>
</div>
</div>
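The final blend in TTA is just a weighted average. A numpy sketch of that combination step (`tta_average` is a hypothetical name; producing the 8 augmented predictions themselves is not shown):

```python
import numpy as np

def tta_average(regular_preds, augmented_preds, beta=0.4):
    """Weight beta on the regular predictions, 1-beta on the mean over the
    augmented versions (8 of them: 4 corner crops x 2 flips)."""
    return beta * regular_preds + (1 - beta) * augmented_preds.mean(axis=0)

probs = tta_average(np.array([0.8, 0.2]),
                    np.array([[0.6, 0.4], [0.4, 0.6]]))  # ≈ [0.62, 0.38]
```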
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>We'll show examples below using our MNIST sample.</p>
<h2 id="Additional-callbacks">Additional callbacks<a class="anchor-link" href="#Additional-callbacks">&#182;</a></h2>
</div>
</div>
</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>We'll show examples below using our MNIST sample. As usual, the <code>on_something</code> methods are called directly by the fastai library; there is no need to call them yourself.</p>

</div>
</div>
@@ -220,7 +212,7 @@ <h4 id="TTA"><code>TTA</code><a href="https://github.com/fastai/fastai/blob/mast


<div class="output_markdown rendered_html output_subarea ">
<h2 id="ShowGraph"><code>class</code> <code>ShowGraph</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L51" class="source_link">[source]</a></h2><blockquote><p><code>ShowGraph</code>(<code>learn</code>:<a href="/basic_train.html#Learner"><code>Learner</code></a>) :: <a href="/basic_train.html#LearnerCallback"><code>LearnerCallback</code></a></p>
<h3 id="ShowGraph"><code>class</code> <code>ShowGraph</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L51" class="source_link">[source]</a></h3><blockquote><p><code>ShowGraph</code>(<code>learn</code>:<a href="/basic_train.html#Learner"><code>Learner</code></a>) :: <a href="/basic_train.html#LearnerCallback"><code>LearnerCallback</code></a></p>
</blockquote>
<p>Update a graph of learner stats and metrics after each epoch.</p>

@@ -259,6 +251,7 @@ <h2 id="ShowGraph"><code>class</code> <code>ShowGraph</code><a href="https://git
<div class="output_markdown rendered_html output_subarea ">
<h4 id="ShowGraph.on_epoch_end"><code>on_epoch_end</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L53" class="source_link">[source]</a></h4><blockquote><p><code>on_epoch_end</code>(<code>n_epochs</code>:<code>int</code>, <code>last_metrics</code>:<code>MetricsList</code>, <code>kwargs</code>) → <code>bool</code></p>
</blockquote>
<p>If we have <code>last_metrics</code>, plot them in our pbar graph.</p>

</div>

@@ -267,13 +260,6 @@ <h4 id="ShowGraph.on_epoch_end"><code>on_epoch_end</code><a href="https://github
</div>
</div>

</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>If we have <code>last_metrics</code>, plot them in <code>self.pbar</code>. Set the size of the graph with <code>n_epochs</code>.</p>

</div>
</div>
</div>
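`ShowGraph` is a `LearnerCallback`, so all it needs is an `on_epoch_end` hook that the training loop invokes for it. A torch-free sketch of that pattern (the class name and the driving loop are illustrative stand-ins; `ShowGraph` redraws a plot where this stub merely records):

```python
class MetricsRecorder:
    """Minimal stand-in for a LearnerCallback: the training loop calls
    on_epoch_end after every epoch and the callback reacts to the metrics
    it receives."""
    def __init__(self):
        self.history = []

    def on_epoch_end(self, n_epochs, last_metrics, **kwargs):
        if last_metrics is not None:
            self.history.append(list(last_metrics))
        return False  # returning False means "do not stop training"

cb = MetricsRecorder()
# hypothetical two-epoch loop standing in for fastai's fit()
for metrics in ([0.039, 0.989], [0.031, 0.991]):
    cb.on_epoch_end(n_epochs=2, last_metrics=metrics)
```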
<div class="cell border-box-sizing code_cell rendered">

@@ -295,13 +281,6 @@ <h2 id="GradientClipping"><code>class</code> <code>GradientClipping</code><a hre
</div>
</div>

</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>Clips gradient at a maximum absolute value of <code>clip</code> during training. For instance:</p>

</div>
</div>
</div>
<div class="cell border-box-sizing code_cell rendered">
<div class="input">
@@ -322,13 +301,25 @@

<div class="output_area">

<div class="output_subarea output_stream output_stdout output_text">
<pre>Total time: 00:11
epoch train loss valid loss accuracy
0 0.086958 0.038721 0.989696 (00:11)

</pre>
<div class="output_html rendered_html output_subarea ">
Total time: 00:11 <p><table style='width:300px; margin-bottom:10px'>
<tr>
<th>epoch</th>
<th>train_loss</th>
<th>valid_loss</th>
<th>accuracy</th>
</tr>
<tr>
<th>1</th>
<th>0.131133</th>
<th>0.078190</th>
<th>0.973013</th>
</tr>
</table>

</div>

</div>

</div>
@@ -346,6 +337,7 @@ <h2 id="GradientClipping"><code>class</code> <code>GradientClipping</code><a hre
<div class="output_markdown rendered_html output_subarea ">
<h4 id="GradientClipping.on_backward_end"><code>on_backward_end</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L75" class="source_link">[source]</a></h4><blockquote><p><code>on_backward_end</code>(<code>kwargs</code>)</p>
</blockquote>
<p>Clip the gradient before the optimizer step.</p>

</div>

@@ -354,13 +346,6 @@ <h4 id="GradientClipping.on_backward_end"><code>on_backward_end</code><a href="h
</div>
</div>

</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>Clip the gradients after they are computed but before the optimizer step.</p>

</div>
</div>
</div>
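The hook fires right after `backward()` and before the optimizer step. A dependency-free sketch of the element-wise clamp described above (`clip_gradients` is a hypothetical helper working on plain floats; in PyTorch one would typically reach for `nn.utils.clip_grad_value_`, or `nn.utils.clip_grad_norm_` for norm-based clipping):

```python
def clip_gradients(grads, clip):
    """on_backward_end sketch: clamp every gradient into [-clip, clip] after
    backward() has run but before the optimizer step. A falsy clip disables
    clipping."""
    if clip:
        return [max(-clip, min(clip, g)) for g in grads]
    return grads
```

Clipping bounds the size of a single update, which protects training from occasional exploding gradients without changing well-behaved steps.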
<div class="cell border-box-sizing code_cell rendered">

@@ -408,13 +393,25 @@ <h2 id="BnFreeze"><code>class</code> <code>BnFreeze</code><a href="https://githu

<div class="output_area">

<div class="output_subarea output_stream output_stdout output_text">
<pre>Total time: 00:07
epoch train loss valid loss accuracy
0 0.079278 0.041832 0.985280 (00:07)

</pre>
<div class="output_html rendered_html output_subarea ">
Total time: 00:07 <p><table style='width:300px; margin-bottom:10px'>
<tr>
<th>epoch</th>
<th>train_loss</th>
<th>valid_loss</th>
<th>accuracy</th>
</tr>
<tr>
<th>1</th>
<th>0.132564</th>
<th>0.078910</th>
<th>0.972031</th>
</tr>
</table>

</div>

</div>

</div>
@@ -432,6 +429,7 @@ <h2 id="BnFreeze"><code>class</code> <code>BnFreeze</code><a href="https://githu
<div class="output_markdown rendered_html output_subarea ">
<h4 id="BnFreeze.on_epoch_begin"><code>on_epoch_begin</code><a href="https://github.com/fastai/fastai/blob/master/fastai/train.py#L66" class="source_link">[source]</a></h4><blockquote><p><code>on_epoch_begin</code>(<code>kwargs</code>:<code>Any</code>)</p>
</blockquote>
<p>Put bn layers in eval mode just after <code>model.train()</code>.</p>

</div>

@@ -440,13 +438,6 @@ <h4 id="BnFreeze.on_epoch_begin"><code>on_epoch_begin</code><a href="https://git
</div>
</div>

</div>
<div class="cell border-box-sizing text_cell rendered"><div class="inner_cell">
<div class="text_cell_render border-box-sizing rendered_html">
<p>Set the batchnorm layers back to <code>eval</code> mode after the model has been set to <a href="/train.html#train"><code>train</code></a>.</p>

</div>
</div>
</div>
</div>
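`BnFreeze` works because calling `.eval()` on a batchnorm layer stops its running statistics from updating. A torch-free sketch of the hook (the stub class mimics torch's `training` flag; real code would walk the model for `nn.BatchNorm*` modules):

```python
class BatchNormStub:
    """Stand-in for a batchnorm layer with torch-style train/eval flags."""
    def __init__(self):
        self.training = True

    def eval(self):
        self.training = False

def bnfreeze_on_epoch_begin(bn_layers):
    """Sketch of BnFreeze.on_epoch_begin: right after the library has called
    model.train(), flip the batchnorm layers back to eval mode so their
    running statistics stay frozen."""
    for layer in bn_layers:
        layer.eval()

layers = [BatchNormStub(), BatchNormStub()]
bnfreeze_on_epoch_begin(layers)
```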

110 changes: 84 additions & 26 deletions docs/training.html


