Update generated Python Op docs.
Change: 121480495
A. Unique TensorFlower authored and tensorflower-gardener committed May 4, 2016
1 parent 48bbc91 commit e5df6ad
Showing 3 changed files with 224 additions and 0 deletions.
57 changes: 57 additions & 0 deletions tensorflow/g3doc/api_docs/python/contrib.layers.md
@@ -664,3 +664,60 @@ Given loss and parameters for optimizer, returns a training op.
* <b>`ValueError`</b>: if optimizer is wrong type.


- - -

### `tf.contrib.learn.train(graph, output_dir, train_op, loss_op, global_step_tensor=None, init_op=None, log_every_steps=10, supervisor_is_chief=True, supervisor_master='', supervisor_save_model_secs=600, supervisor_save_summaries_secs=10, max_steps=None, fail_on_nan_loss=True, tuner=None)` {#train}

Train a model.

Given `graph`, a directory to write outputs to (`output_dir`), and some ops,
run a training loop. The given `train_op` performs one step of training on the
model and is expected to increment the `global_step_tensor`, a scalar integer
tensor counting training steps. The `loss_op` represents the objective function
of the training. This function uses `Supervisor` to initialize the
graph (from a checkpoint if one is available in `output_dir`), write summaries
defined in the graph, and write regular checkpoints as defined by
`supervisor_save_model_secs`.

Training continues until `global_step_tensor` evaluates to `max_steps` or, if
`fail_on_nan_loss` is true, until `loss_op` evaluates to `NaN`; in the latter
case the program terminates with exit code 1.

##### Args:


* <b>`graph`</b>: A graph to train. It is expected that this graph is not in use
elsewhere.
* <b>`output_dir`</b>: A directory to write outputs to.
* <b>`train_op`</b>: An op that performs one training step when run.
* <b>`loss_op`</b>: A scalar loss tensor.
* <b>`global_step_tensor`</b>: A tensor representing the global step. If none is given,
one is extracted from the graph using the same logic as in `Supervisor`.
* <b>`init_op`</b>: An op that initializes the graph. If `None`, use `Supervisor`'s
default.
* <b>`log_every_steps`</b>: Output logs every `log_every_steps` steps. The logs
    contain timing data and the current loss.
* <b>`supervisor_is_chief`</b>: Whether the current process is the chief supervisor in
charge of restoring the model and running standard services.
* <b>`supervisor_master`</b>: The master string to use when preparing the session.
* <b>`supervisor_save_model_secs`</b>: Save a checkpoint every
`supervisor_save_model_secs` seconds when training.
* <b>`supervisor_save_summaries_secs`</b>: Save summaries every
`supervisor_save_summaries_secs` seconds when training.
* <b>`max_steps`</b>: Train until `global_step_tensor` evaluates to this value.
* <b>`fail_on_nan_loss`</b>: If true, exit the program if `loss_op` evaluates to `NaN`.
Otherwise, continue training as if nothing happened.
* <b>`tuner`</b>: If specified, a `tf.Tuner` that will be notified of training
    failures.

##### Returns:

The final loss value.

##### Raises:


* <b>`ValueError`</b>: If `global_step_tensor` is not provided. See
`tf.contrib.framework.get_global_step` for how we look it up if not
provided explicitly.
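
A minimal usage sketch, assuming a toy least-squares graph; the data, variable
names, optimizer settings, and `output_dir` below are illustrative, not part of
the documented API:

```python
import tensorflow as tf
from tensorflow.contrib import learn

graph = tf.Graph()
with graph.as_default():
  # A toy least-squares objective; the data and model here are illustrative.
  x = tf.constant([[1.0], [2.0], [3.0]])
  y = tf.constant([[2.0], [4.0], [6.0]])
  w = tf.Variable(tf.zeros([1, 1]), name='w')
  loss_op = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

  # `train` expects `train_op` to increment the global step on each run.
  global_step = tf.Variable(0, name='global_step', trainable=False)
  train_op = tf.train.GradientDescentOptimizer(0.01).minimize(
      loss_op, global_step=global_step)

# Runs the training loop, checkpointing to `output_dir` via `Supervisor`.
final_loss = learn.train(
    graph=graph,
    output_dir='/tmp/train_example',
    train_op=train_op,
    loss_op=loss_op,
    global_step_tensor=global_step,
    max_steps=1000)
```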


161 changes: 161 additions & 0 deletions tensorflow/g3doc/api_docs/python/contrib.learn.md
@@ -45,6 +45,49 @@ Attributes:



- - -

### `class tf.contrib.learn.SupervisorParams` {#SupervisorParams}

Parameters required to configure supervisor for training.

Fields:
is_chief: Whether the current process is the chief supervisor in charge of
restoring the model and running standard services.
master: The master string to use when preparing the session.
save_model_secs: Save a checkpoint every `save_model_secs` seconds when
training.
save_summaries_secs: Save summaries every `save_summaries_secs` seconds when
training.
- - -

#### `tf.contrib.learn.SupervisorParams.is_chief` {#SupervisorParams.is_chief}

Alias for field number 0


- - -

#### `tf.contrib.learn.SupervisorParams.master` {#SupervisorParams.master}

Alias for field number 1


- - -

#### `tf.contrib.learn.SupervisorParams.save_model_secs` {#SupervisorParams.save_model_secs}

Alias for field number 2


- - -

#### `tf.contrib.learn.SupervisorParams.save_summaries_secs` {#SupervisorParams.save_summaries_secs}

Alias for field number 3
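
Since each field above is an alias for a field number, `SupervisorParams`
behaves like a `collections.namedtuple`. A minimal construction sketch,
assuming all four documented fields can be passed as keyword arguments (the
values are illustrative):

```python
from tensorflow.contrib import learn

# Namedtuple-style container; fields are read back as attributes.
params = learn.SupervisorParams(
    is_chief=True,           # this process restores the model and runs services
    master='',               # empty string means an in-process session
    save_model_secs=600,     # checkpoint every 10 minutes
    save_summaries_secs=10)  # write summaries every 10 seconds

print(params.is_chief, params.save_model_secs)
```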



- - -

### `class tf.contrib.learn.TensorFlowClassifier` {#TensorFlowClassifier}
@@ -2628,6 +2671,64 @@ Returns weights of the linear regression.



- - -

### `tf.contrib.learn.evaluate(graph, output_dir, checkpoint_path, eval_dict, global_step_tensor=None, init_op=None, supervisor_master='', log_every_steps=10, max_steps=None, max_global_step=None, tuner=None, tuner_metric=None)` {#evaluate}

Evaluate a model loaded from a checkpoint.

Given `graph`, a directory to write summaries to (`output_dir`), a checkpoint
to restore variables from, and a `dict` of `Tensor`s to evaluate, run an eval
loop for `max_steps` steps.

In each step of evaluation, all tensors in the `eval_dict` are evaluated, and
every `log_every_steps` steps, they are logged. At the very end of evaluation,
a summary is evaluated (finding the summary ops using `Supervisor`'s logic)
and written to `output_dir`.

##### Args:


* <b>`graph`</b>: A `Graph` to evaluate. It is expected that this graph is not in use
    elsewhere.
* <b>`output_dir`</b>: A string containing the directory to write a summary to.
* <b>`checkpoint_path`</b>: A string containing the path to a checkpoint to restore.
Can be `None` if the graph doesn't require loading any variables.
* <b>`eval_dict`</b>: A `dict` mapping string names to tensors to evaluate in every
    eval step.
* <b>`global_step_tensor`</b>: A `Variable` containing the global step. If `None`,
one is extracted from the graph using the same logic as in `Supervisor`.
Used to place eval summaries on training curves.
* <b>`init_op`</b>: An op that initializes the graph. If `None`, use `Supervisor`'s
default.
* <b>`supervisor_master`</b>: The master string to use when preparing the session.
* <b>`log_every_steps`</b>: Integer. Output logs every `log_every_steps` evaluation
steps. The logs contain the `eval_dict` and timing information.
* <b>`max_steps`</b>: Integer. Evaluate `eval_dict` this many times.
* <b>`max_global_step`</b>: Integer. If the global_step is larger than this, skip
the eval and return None.
* <b>`tuner`</b>: A `Tuner` that will be notified of eval completion and updated
with objective metrics.
* <b>`tuner_metric`</b>: A `string` that specifies the eval metric to report to
`tuner`.

##### Returns:

A tuple `(eval_results, should_stop)`:

* <b>`eval_results`</b>: A `dict` mapping `string` to numeric values (`int`, `float`)
    that are the eval results from the last step of the eval. `None` if no
    eval steps were run.
* <b>`should_stop`</b>: A `bool`, indicating whether it was detected that eval should
    stop.

##### Raises:


* <b>`ValueError`</b>: if the caller specifies `max_global_step` without providing
    a `global_step`.
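
A minimal eval-loop sketch, assuming a graph that mirrors the trained model and
an illustrative checkpoint path; the tensor names and directories below are not
part of the documented API:

```python
import tensorflow as tf
from tensorflow.contrib import learn

graph = tf.Graph()
with graph.as_default():
  # Rebuild the same variables that were trained so the checkpoint can be
  # restored by name; the model below is illustrative.
  x = tf.constant([[1.0], [2.0], [3.0]])
  y = tf.constant([[2.0], [4.0], [6.0]])
  w = tf.Variable(tf.zeros([1, 1]), name='w')
  mse = tf.reduce_mean(tf.square(tf.matmul(x, w) - y))

# Evaluates `eval_dict` once and writes a summary to `output_dir`.
eval_results, should_stop = learn.evaluate(
    graph=graph,
    output_dir='/tmp/train_example/eval',
    checkpoint_path='/tmp/train_example/model.ckpt',  # illustrative path
    eval_dict={'mse': mse},
    max_steps=1)

print(eval_results)  # e.g. {'mse': 0.12}
```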


- - -

### `tf.contrib.learn.extract_dask_data(data)` {#extract_dask_data}
@@ -2663,3 +2764,63 @@ Extract data from pandas.DataFrame for labels
Extracts numpy matrix from pandas DataFrame.


- - -

### `tf.contrib.learn.infer(restore_checkpoint_path, output_dict, feed_dict=None)` {#infer}




- - -

### `tf.contrib.learn.run_feeds(output_dict, feed_dicts, restore_checkpoint_path=None)` {#run_feeds}

Run `output_dict` tensors with each input in `feed_dicts`.

If `restore_checkpoint_path` is supplied, restore from that checkpoint. Otherwise,
initialize all variables.

##### Args:


* <b>`output_dict`</b>: A `dict` mapping string names to `Tensor` objects to run.
Tensors must all be from the same graph.
* <b>`feed_dicts`</b>: Iterable of `dict` objects of input values to feed.
* <b>`restore_checkpoint_path`</b>: A string containing the path to a checkpoint to
restore.

##### Returns:

A list of dicts of values read from `output_dict` tensors, one item in the
list for each item in `feed_dicts`. Keys are the same as `output_dict`,
values are the results read from the corresponding `Tensor` in
`output_dict`.

##### Raises:


* <b>`ValueError`</b>: if `output_dict` or `feed_dicts` is None or empty.
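
A minimal sketch running one tensor over two feed dicts; the placeholder and
values below are illustrative:

```python
import tensorflow as tf
from tensorflow.contrib import learn

x = tf.placeholder(tf.float32, shape=[2])
doubled = x * 2.0

# One result dict per feed dict; with no checkpoint path, variables (none
# here) are initialized from scratch.
results = learn.run_feeds(
    output_dict={'doubled': doubled},
    feed_dicts=[{x: [1.0, 2.0]}, {x: [3.0, 4.0]}])

print(results)  # e.g. [{'doubled': array([2., 4.])}, {'doubled': array([6., 8.])}]
```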


- - -

### `tf.contrib.learn.run_n(output_dict, feed_dict=None, restore_checkpoint_path=None, n=1)` {#run_n}

Run `output_dict` tensors `n` times, with the same `feed_dict` each run.

##### Args:


* <b>`output_dict`</b>: A `dict` mapping string names to tensors to run. Must all be
from the same graph.
* <b>`feed_dict`</b>: `dict` of input values to feed each run.
* <b>`restore_checkpoint_path`</b>: A string containing the path to a checkpoint to
restore.
* <b>`n`</b>: Number of times to repeat.

##### Returns:

A list of `n` `dict` objects, each containing values read from `output_dict`
tensors.
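
A minimal sketch repeating the same run three times, which is mainly useful for
stochastic ops; the random tensor below is illustrative:

```python
import tensorflow as tf
from tensorflow.contrib import learn

# Each of the `n` runs draws a fresh sample, so the results differ.
sample = tf.random_uniform([2])

results = learn.run_n(output_dict={'sample': sample}, n=3)

# `results` is a list of 3 dicts, each with a 'sample' entry.
for r in results:
  print(r['sample'])
```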


6 changes: 6 additions & 0 deletions tensorflow/g3doc/api_docs/python/index.md
@@ -524,17 +524,23 @@
* [`summarize_collection`](../../api_docs/python/contrib.layers.md#summarize_collection)
* [`summarize_tensor`](../../api_docs/python/contrib.layers.md#summarize_tensor)
* [`summarize_tensors`](../../api_docs/python/contrib.layers.md#summarize_tensors)
* [`train`](../../api_docs/python/contrib.layers.md#train)
* [`variance_scaling_initializer`](../../api_docs/python/contrib.layers.md#variance_scaling_initializer)
* [`xavier_initializer`](../../api_docs/python/contrib.layers.md#xavier_initializer)
* [`xavier_initializer_conv2d`](../../api_docs/python/contrib.layers.md#xavier_initializer_conv2d)

* **[Learn (contrib)](../../api_docs/python/contrib.learn.md)**:
* [`evaluate`](../../api_docs/python/contrib.learn.md#evaluate)
* [`extract_dask_data`](../../api_docs/python/contrib.learn.md#extract_dask_data)
* [`extract_dask_labels`](../../api_docs/python/contrib.learn.md#extract_dask_labels)
* [`extract_pandas_data`](../../api_docs/python/contrib.learn.md#extract_pandas_data)
* [`extract_pandas_labels`](../../api_docs/python/contrib.learn.md#extract_pandas_labels)
* [`extract_pandas_matrix`](../../api_docs/python/contrib.learn.md#extract_pandas_matrix)
* [`infer`](../../api_docs/python/contrib.learn.md#infer)
* [`run_feeds`](../../api_docs/python/contrib.learn.md#run_feeds)
* [`run_n`](../../api_docs/python/contrib.learn.md#run_n)
* [`RunConfig`](../../api_docs/python/contrib.learn.md#RunConfig)
* [`SupervisorParams`](../../api_docs/python/contrib.learn.md#SupervisorParams)
* [`TensorFlowClassifier`](../../api_docs/python/contrib.learn.md#TensorFlowClassifier)
* [`TensorFlowDNNClassifier`](../../api_docs/python/contrib.learn.md#TensorFlowDNNClassifier)
* [`TensorFlowDNNRegressor`](../../api_docs/python/contrib.learn.md#TensorFlowDNNRegressor)