
Commit

update docs
ppwwyyxx committed Oct 29, 2017
1 parent 65e1fa4 commit a21eb14
Showing 9 changed files with 39 additions and 35 deletions.
8 changes: 6 additions & 2 deletions docs/conf.py
@@ -21,6 +21,7 @@
# documentation root, use os.path.abspath to make it absolute, like shown here.
sys.path.insert(0, os.path.abspath('../'))
os.environ['TENSORPACK_DOC_BUILDING'] = '1'
+ON_RTD = (os.environ.get('READTHEDOCS') == 'True')


MOCK_MODULES = ['scipy', 'tabulate',
@@ -62,7 +63,7 @@
napoleon_numpy_docstring = False
napoleon_use_rtype = False

-if os.environ.get('READTHEDOCS') == 'True':
+if ON_RTD:
    intersphinx_timeout = 10
else:
    # skip this when building locally
@@ -386,7 +387,10 @@ def url_resolver(url):
    if '.html' not in url:
        return "https://github.com/ppwwyyxx/tensorpack/blob/master/" + url
    else:
-        return "http://tensorpack.readthedocs.io/en/latest/" + url
+        if ON_RTD:
+            return "http://tensorpack.readthedocs.io/en/latest/" + url
+        else:
+            return '/' + url

def setup(app):
from recommonmark.transform import AutoStructify
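The resolver logic in the hunk above can be sketched as a standalone function. The `on_rtd` parameter here stands in for the module-level `ON_RTD` flag, so the sketch is runnable outside `conf.py`:

```python
# Sketch of the new url_resolver behavior: GitHub source links for
# non-HTML files; absolute readthedocs URLs only on the RTD build,
# site-relative URLs otherwise.
def url_resolver(url, on_rtd):
    if '.html' not in url:
        return "https://github.com/ppwwyyxx/tensorpack/blob/master/" + url
    else:
        if on_rtd:
            return "http://tensorpack.readthedocs.io/en/latest/" + url
        else:
            return '/' + url
```

This is what the diff changes: local builds now get relative links instead of always pointing at readthedocs.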
2 changes: 1 addition & 1 deletion docs/tutorial/callback.md
@@ -75,5 +75,5 @@ These features may not be always useful, but think about how messy the main loop
would be if you were to write this logic together with the loops, and how easy your life would be if you could enable
these features with one line when you need them.

-See [Write a callback](http://tensorpack.readthedocs.io/en/latest/tutorial/extend/callback.html)
+See [Write a callback](extend/callback.html)
for details on how callbacks work, what they can do, and how to write them.
2 changes: 1 addition & 1 deletion docs/tutorial/efficient-dataflow.md
@@ -155,7 +155,7 @@ The above script builds a DataFlow which produces jpeg-encoded ImageNet data.
We store the jpeg string as a numpy array because the function `cv2.imdecode` later expects this format.
Please note we can only use one prefetch process here. If `nr_proc>1`, `ds1` would take data
from several forks of `ds0`, and then neither the content nor the order of `ds1` would be the same as `ds0`.
-See [documentation](http://tensorpack.readthedocs.io/en/latest/modules/dataflow.html#tensorpack.dataflow.PrefetchDataZMQ)
+See [documentation](../modules/dataflow.html#tensorpack.dataflow.PrefetchDataZMQ)
about caveats of `PrefetchDataZMQ`.

It will generate a database file of 140G. We build a DataFlow to read this LMDB file sequentially:
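As an aside on the `PrefetchDataZMQ` caveat referenced above, the problem can be modeled in plain Python. This is an illustrative toy, not tensorpack's implementation: each of `nr_proc` workers runs its own independent fork of the dataflow, so the merged stream contains duplicates in an interleaved order.

```python
# Toy model of the caveat: with nr_proc > 1, each worker iterates its own
# copy of ds0, so the merged output duplicates and reorders the data.
def ds0():
    yield from range(5)

def prefetch(make_ds, nr_proc):
    forks = [make_ds() for _ in range(nr_proc)]  # independent copies of ds0
    alive = list(forks)
    while alive:  # round-robin merge, mimicking interleaved workers
        for f in list(alive):
            try:
                yield next(f)
            except StopIteration:
                alive.remove(f)

print(list(prefetch(ds0, 1)))  # [0, 1, 2, 3, 4] -- same as ds0
print(list(prefetch(ds0, 2)))  # [0, 0, 1, 1, 2, 2, 3, 3, 4, 4] -- duplicated
```

With one process the output matches `ds0`; with two, every datapoint appears once per fork, which is why only one prefetch process is safe here.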
29 changes: 14 additions & 15 deletions docs/tutorial/extend/callback.md
@@ -26,14 +26,23 @@ You can override any of the following methods to define a new callback:

* `_setup_graph(self)`

-Setup the ops / tensors in the graph which you might need to use in the callback. You can use
+Setup the ops / tensors in the graph which you might need to use in the callback.
+You can use TF methods such as
[`graph.get_tensor_by_name`](https://www.tensorflow.org/api_docs/python/tf/Graph#get_tensor_by_name)
to access those already defined in the training tower.
-Or use
-[`self.trainer.get_predictor(..)`](http://tensorpack.readthedocs.io/en/latest/modules/train.html?highlight=get_predictor#tensorpack.train.Trainer.get_predictor)
-to create a callable evaluation function in the predict tower.

-This method is to separate between "define" and "run", and also to avoid the common mistake to create ops inside
+If you're using a `TowerTrainer` instance, more tools are available:
+
+* Use `self.trainer.tower_func.towers` to access the
+  [tower handles](../modules/tfutils.html#tensorpack.tfutils.tower.TowerTensorHandles),
+  and therefore the tensors in each tower.
+* [self.get_tensors_maybe_in_tower()](../modules/callbacks.html#tensorpack.callbacks.Callback.get_tensors_maybe_in_tower)
+  is a helper function to access tensors in the first training tower.
+* [self.trainer.get_predictor()](../modules/train.html#tensorpack.train.TowerTrainer.get_predictor)
+  is a helper function to create a callable under inference mode.
+
+This method separates "define" from "run", and also helps you
+avoid the common mistake of creating ops inside
loops. All changes to the graph should be made in this method.

* `_before_train(self)`
@@ -87,16 +96,6 @@ to let this method run every k steps or every k epochs.
### What you can do in the callback

* Access tensors / ops in either training / inference mode (need to create them in `_setup_graph`).
-* Use TF methods such as `self.graph.get_tensor_by_name`, to access tensors.
-
-If you're using a `TowerTrainer` instance, more tools are available:
-* Use `self.trainer.tower_func.towers` to access the
-[tower handles](http://tensorpack.readthedocs.io/en/latest/modules/tfutils.html#tensorpack.tfutils.tower.TowerTensorHandles),
-and therefore the tensors in each tower.
-* [self.get_tensors_maybe_in_tower()](http://tensorpack.readthedocs.io/en/latest/modules/callbacks.html#tensorpack.callbacks.Callback.get_tensors_maybe_in_tower)
-is a helper function to access tensors in the first training tower.
-* [self.trainer.get_predictor()](http://tensorpack.readthedocs.io/en/latest/modules/train.html#tensorpack.train.TowerTrainer.get_predictor)
-is a helper function to create a callable under inference mode.
* Write to the monitor backend with `self.trainer.monitors.put_xxx`.
The monitors may direct your events to a TensorFlow event file, a JSON file, stdout, etc.
You can get history monitor data as well. See the docs for [Monitors](../../modules/callbacks.html#tensorpack.callbacks.Monitors)
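The "define vs. run" separation that this callback doc keeps stressing can be mocked in a few lines of plain Python. This is not tensorpack's real `Callback` API, only a hypothetical sketch of the lifecycle: everything is created once in `_setup_graph`, and the trigger phase only runs what was already defined.

```python
# Minimal mock of the callback lifecycle described above (illustrative
# names, not tensorpack's actual classes).
class Callback:
    def setup_graph(self, trainer):
        self.trainer = trainer
        self._setup_graph()   # "define": runs exactly once

    def _setup_graph(self):
        pass

    def _trigger(self):       # "run": may run every epoch / every k steps
        pass

class CountParams(Callback):
    def _setup_graph(self):
        # Define what we need once; no ops are created later in triggers.
        self.op = lambda: sum(self.trainer.params)

    def _trigger(self):
        print("total:", self.op())

class FakeTrainer:
    params = [1, 2, 3]

cb = CountParams()
cb.setup_graph(FakeTrainer())
cb._trigger()  # prints "total: 6"
```

Creating `self.op` inside `_trigger` instead would be exactly the "ops inside loops" mistake the tutorial warns about.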
10 changes: 5 additions & 5 deletions docs/tutorial/input-source.md
@@ -69,14 +69,14 @@ handle corner cases in noisy data, preprocess, etc.
`InputSource` is an abstract interface in tensorpack, to describe where the inputs come from and how they enter the graph.
For example,

-1. [FeedInput](http://tensorpack.readthedocs.io/en/latest/modules/input_source.html#tensorpack.input_source.FeedInput):
+1. [FeedInput](../modules/input_source.html#tensorpack.input_source.FeedInput):
   Comes from a DataFlow and is fed to the graph.
-2. [QueueInput](http://tensorpack.readthedocs.io/en/latest/modules/input_source.html#tensorpack.input_source.QueueInput):
+2. [QueueInput](../modules/input_source.html#tensorpack.input_source.QueueInput):
   Comes from a DataFlow and is prefetched on CPU by a TF queue.
-3. [StagingInput](http://tensorpack.readthedocs.io/en/latest/modules/input_source.html#tensorpack.input_source.StagingInput):
+3. [StagingInput](../modules/input_source.html#tensorpack.input_source.StagingInput):
   Comes from some `InputSource`, then is prefetched on GPU by a TF StagingArea.
4. Comes from a DataFlow, and is further processed by `tf.data.Dataset`.
-5. [TensorInput](http://tensorpack.readthedocs.io/en/latest/modules/input_source.html#tensorpack.input_source.TensorInput):
-Come from some TF reading ops. (See the [PTB example](../../tensorpack/tree/master/examples/PennTreebank))
+5. [TensorInput](../modules/input_source.html#tensorpack.input_source.TensorInput):
+   Comes from some TF reading ops. (See the [PTB example](../examples/PennTreebank))
6. Comes from some ZMQ pipe, where the load/preprocessing may happen on a different machine.

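The idea behind `QueueInput` in the list above — a producer fills a bounded buffer so the consumer rarely waits — can be illustrated with Python's own `queue` and `threading` modules. This is a toy analogue under stated assumptions (a plain iterable as the dataflow, a sentinel for end-of-data), not tensorpack's actual implementation, which uses a TF queue:

```python
import queue
import threading

# Toy analogue of QueueInput: a background thread keeps pulling from the
# dataflow into a bounded queue; the training loop consumes from the queue.
def queue_input(dataflow, capacity=8):
    q = queue.Queue(maxsize=capacity)
    SENTINEL = object()  # marks end of data

    def filler():
        for dp in dataflow:
            q.put(dp)           # blocks when the queue is full
        q.put(SENTINEL)

    threading.Thread(target=filler, daemon=True).start()
    while True:
        dp = q.get()
        if dp is SENTINEL:
            return
        yield dp

data = list(queue_input(iter(range(10))))
```

The bounded `capacity` is the key design choice: it decouples producer and consumer speeds without letting the buffer grow without limit.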
4 changes: 2 additions & 2 deletions docs/tutorial/summary.md
@@ -11,11 +11,11 @@ This is how TensorFlow summaries eventually get logged/saved/printed:
1. __What to Log__: When you call `tf.summary.xxx` in your graph code, TensorFlow adds an op to
`tf.GraphKeys.SUMMARIES` collection (by default).
2. __When to Log__: [MergeAllSummaries](../modules/callbacks.html#tensorpack.callbacks.MergeAllSummaries)
-callback is in the [default callbacks](http://tensorpack.readthedocs.io/en/latest/modules/train.html#tensorpack.train.DEFAULT_CALLBACKS).
+callback is in the [default callbacks](../modules/train.html#tensorpack.train.DEFAULT_CALLBACKS).
It runs ops in the `SUMMARIES` collection (by default) every epoch (by default),
and writes results to the monitors.
3. __Where to Log__:
-Several monitors are [default monitors](http://tensorpack.readthedocs.io/en/latest/modules/train.html#tensorpack.train.DEFAULT_MONITORS).
+Several monitors are [default monitors](../modules/train.html#tensorpack.train.DEFAULT_MONITORS).
* A [TFEventWriter](../modules/callbacks.html#tensorpack.callbacks.TFEventWriter)
writes things to an event file used by tensorboard.
* A [ScalarPrinter](../modules/callbacks.html#tensorpack.callbacks.ScalarPrinter)
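The "Where to Log" step above — one `put_scalar` call fanned out to several backends — can be sketched in plain Python. The class names here are illustrative stand-ins, not tensorpack's real monitor classes:

```python
# Toy sketch of the monitor fan-out: a Monitors object forwards each
# scalar to every registered backend (event file, printer, JSON, ...).
class ScalarPrinter:
    def put_scalar(self, name, val, step):
        print("step {}: {}={}".format(step, name, val))

class HistoryRecorder:
    def __init__(self):
        self.history = []   # queryable history, like Monitors provides

    def put_scalar(self, name, val, step):
        self.history.append((step, name, val))

class Monitors:
    def __init__(self, backends):
        self.backends = backends

    def put_scalar(self, name, val, step):
        for b in self.backends:   # "where to log": every backend at once
            b.put_scalar(name, val, step)

recorder = HistoryRecorder()
monitors = Monitors([ScalarPrinter(), recorder])
monitors.put_scalar("cost", 0.5, step=100)  # what a callback would call
```

The point of the indirection is that callbacks only decide *what* and *when*; the destination is configured once, in the monitor list.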
4 changes: 2 additions & 2 deletions docs/tutorial/trainer.md
@@ -9,7 +9,7 @@ Tensorpack follows the "define-and-run" paradigm. A training has two steps:
The goal of this step is to define "what to run" in later training steps,
and it can happen either inside or outside a tensorpack trainer.

-2. __Run__: Train the model (the [Trainer.train() method](http://tensorpack.readthedocs.io/en/latest/modules/train.html#tensorpack.train.Trainer.train)):
+2. __Run__: Train the model (the [Trainer.train() method](../modules/train.html#tensorpack.train.Trainer.train)):

1. Setup callbacks/monitors.
2. Finalize graph, initialize session.
@@ -50,7 +50,7 @@ These trainers will take care of step 1, by building the graph by itself, with the
3. A function which takes input tensors and returns the cost.
4. A function which returns an optimizer.

-These are documented better in [SingleCostTrainer.setup_graph](http://tensorpack.readthedocs.io/en/latest/modules/train.html#tensorpack.train.SingleCostTrainer.setup_graph).
+These are documented better in [SingleCostTrainer.setup_graph](../modules/train.html#tensorpack.train.SingleCostTrainer.setup_graph).
Often you won't use this method directly, but use the [high-level interface](training-interface.html#with-modeldesc-and-trainconfig)
instead.

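The four ingredients listed above can be wired together in a short mock. This is only an illustrative sketch of the dataflow between them, with fake tensors (plain floats) and invented helper names, not the real `SingleCostTrainer.setup_graph` internals:

```python
# Mock of the "define" step: the four ingredients a single-cost trainer
# needs, combined in order (names and signatures are illustrative).
def setup_graph(inputs_desc, input_source, get_cost_fn, get_opt_fn):
    inputs = input_source.get_input_tensors()  # where inputs come from
    cost = get_cost_fn(*inputs)                # input tensors -> cost
    opt = get_opt_fn()                         # the optimizer
    return {"desc": inputs_desc, "cost": cost, "opt": opt}

class FakeInput:
    def get_input_tensors(self):
        return (3.0, 4.0)   # stands in for real input tensors

graph = setup_graph(
    inputs_desc=["x", "y"],
    input_source=FakeInput(),
    get_cost_fn=lambda x, y: (x - y) ** 2,  # a toy squared-error cost
    get_opt_fn=lambda: "sgd",
)
# graph["cost"] == 1.0
```

Note that nothing is "trained" here: this step only builds the objects that the later `train()` call will run, which is the define-and-run split the tutorial describes.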
12 changes: 6 additions & 6 deletions docs/tutorial/training-interface.md
@@ -9,12 +9,12 @@ when you don't want to customize too much.

__Define__: For a general trainer, build the graph yourself.
For a single-cost trainer, build the graph by
-[SingleCostTrainer.setup_graph](http://tensorpack.readthedocs.io/en/latest/modules/train.html#tensorpack.train.SingleCostTrainer.setup_graph).
+[SingleCostTrainer.setup_graph](../modules/train.html#tensorpack.train.SingleCostTrainer.setup_graph).

__Run__: Then, call
-[Trainer.train()](http://tensorpack.readthedocs.io/en/latest/modules/train.html#tensorpack.train.Trainer.train)
+[Trainer.train()](../modules/train.html#tensorpack.train.Trainer.train)
or
-[Trainer.train_with_defaults()](http://tensorpack.readthedocs.io/en/latest/modules/train.html#tensorpack.train.Trainer.train_with_defaults)
+[Trainer.train_with_defaults()](../modules/train.html#tensorpack.train.Trainer.train_with_defaults)
which applies some default options for common use cases.

### With ModelDesc and TrainConfig
@@ -48,7 +48,7 @@ class MyModel(ModelDesc):
You can use any symbolic functions in `_build_graph`, including TensorFlow core library
functions and other symbolic libraries.
But you need to follow the requirement of
-[get_cost_fn](http://tensorpack.readthedocs.io/en/latest/modules/train.html#tensorpack.train.SingleCostTrainer.setup_graph),
+[get_cost_fn](../modules/train.html#tensorpack.train.SingleCostTrainer.setup_graph),
because this function will be used as part of `get_cost_fn`.
Finally, you need to set `self.cost`.

@@ -69,7 +69,7 @@ trainer = SomeTrainer()
launch_train_with_config(config, trainer)
```
See the docs of
-[TrainConfig](http://tensorpack.readthedocs.io/en/latest/modules/train.html#tensorpack.train.TrainConfig)
+[TrainConfig](../modules/train.html#tensorpack.train.TrainConfig)
and
-[launch_train_with_config](http://tensorpack.readthedocs.io/en/latest/modules/train.html#tensorpack.train.launch_train_with_config)
+[launch_train_with_config](../modules/train.html#tensorpack.train.launch_train_with_config)
for usage and detailed functionalities.
3 changes: 2 additions & 1 deletion tensorpack/train/base.py
@@ -25,7 +25,8 @@
from ..trainv1.base import StopTraining, TrainLoop
from ..trainv1.config import TrainConfig

-__all__ = ['TrainConfig', 'Trainer', 'DEFAULT_MONITORS', 'DEFAULT_CALLBACKS']
+__all__ = ['StopTraining', 'TrainConfig',
+           'Trainer', 'DEFAULT_MONITORS', 'DEFAULT_CALLBACKS']


def DEFAULT_CALLBACKS():
