Update RELEASE.md with Estimator 1.14 release notes. #29364

Merged
merged 1 commit on Jun 3, 2019
RELEASE.md (5 changes: 3 additions & 2 deletions)
@@ -122,8 +122,9 @@
* XLA
  * XLA HLO graphs can be inspected with interactive_graphviz tool now.
* Estimator
-  * Use tf.compat.v1.estimator.inputs instead of tf.estimator.inputs
-  * Replace contrib references with tf.estimator.experimental.* for apis in early_stopping.py
+  * Use `tf.compat.v1.estimator.inputs` instead of `tf.estimator.inputs`
+  * Replace `contrib` references with `tf.estimator.experimental.*` for APIs in `early_stopping.py`
+  * Determining the “correct” value of the `--iterations_per_loop` for TPUEstimator or DistributionStrategy continues to be a challenge for our users. We propose dynamically tuning the `--iterations_per_loop` variable, specifically for using TPUEstimator in training mode, based on a user target TPU execution time. Users might specify a value such as: `--iterations_per_loop=300s`, which will result in roughly 300 seconds being spent on the TPU between host side operations.
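A minimal sketch of the input-function migration called out in the first added bullet, assuming TensorFlow 1.14 and small illustrative NumPy arrays; the same `numpy_input_fn` helper is simply reached through the `tf.compat.v1` namespace:

```python
import numpy as np
import tensorflow as tf

features = {"x": np.arange(10, dtype=np.float32)}
labels = np.arange(10, dtype=np.float32)

# Before (deprecated path):
# input_fn = tf.estimator.inputs.numpy_input_fn(
#     x=features, y=labels, batch_size=4, num_epochs=1, shuffle=False)

# After: the same helper, accessed via tf.compat.v1.
input_fn = tf.compat.v1.estimator.inputs.numpy_input_fn(
    x=features, y=labels, batch_size=4, num_epochs=1, shuffle=False)
```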

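The `--iterations_per_loop=300s` behaviour in the last added bullet is described as a proposal. The helper below is a hypothetical sketch, not part of TensorFlow; it only illustrates how a time-suffixed value could be told apart from a plain iteration count:

```python
def parse_iterations_per_loop(value):
    """Hypothetical parser: returns (number, unit), where unit is
    'seconds' for values like '300s' and 'count' for plain integers."""
    if value.endswith("s"):
        # e.g. "300s": target roughly 300 seconds on the TPU between host-side ops.
        return int(value[:-1]), "seconds"
    # e.g. "1000": a fixed number of training iterations per loop.
    return int(value), "count"

print(parse_iterations_per_loop("300s"))   # (300, 'seconds')
print(parse_iterations_per_loop("1000"))   # (1000, 'count')
```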

## Thanks to our Contributors