
[tune] temporary revert of verbosity changes #12132

Merged: 1 commit into ray-project:master on Nov 19, 2020

Conversation

richardliaw
Contributor

@richardliaw richardliaw commented Nov 18, 2020

Why are these changes needed?

These verbosity changes lead to unintuitive logs from RLlib.

In the follow-up PR that re-applies these changes, we should be careful to test them against RLlib's various entry points:

  • rllib train -f tuned_examples/pg/cartpole-pg.yaml
  • tune.run(DQNTrainer)

(pid=45727) WARNING:tensorflow:From /Users/rliaw/miniconda3/lib/python3.7/site-packages/tensorflow/python/compat/v2_compat.py:96: disable_resource_variables (from tensorflow.python.ops.variable_scope) is deprecated and will be removed in a future version.
(pid=45727) Instructions for updating:
(pid=45727) non-resource variables are not supported in the long term
(pid=45727) 2020-11-18 12:45:51,950	INFO trainer.py:580 -- Tip: set framework=tfe or the --eager flag to enable TensorFlow eager execution
(pid=45727) 2020-11-18 12:45:51,950	INFO trainer.py:607 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
(pid=45727) WARNING:tensorflow:From /Users/rliaw/miniconda3/lib/python3.7/site-packages/tensorflow/python/ops/resource_variable_ops.py:1666: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version.
(pid=45727) Instructions for updating:
(pid=45727) If using Keras pass *_constraint arguments to layers.
(pid=raylet) 2020-11-18 12:45:52,308	WARNING util.py:43 -- Install gputil for GPU system monitoring.
== Status ==
Memory usage on this node: 27.2/64.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 1/16 CPUs, 0/0 GPUs, 0.0/24.17 GiB heap, 0.0/8.3 GiB objects
Result logdir: /Users/rliaw/ray_results/cartpole-pg
Number of trials: 1/1 (1 RUNNING)


== Status ==
Memory usage on this node: 27.3/64.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 1/16 CPUs, 0/0 GPUs, 0.0/24.17 GiB heap, 0.0/8.3 GiB objects
Result logdir: /Users/rliaw/ray_results/cartpole-pg
Number of trials: 1/1 (1 RUNNING)


== Status ==
Memory usage on this node: 27.3/64.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 1/16 CPUs, 0/0 GPUs, 0.0/24.17 GiB heap, 0.0/8.3 GiB objects
Result logdir: /Users/rliaw/ray_results/cartpole-pg
Number of trials: 1/1 (1 RUNNING)


== Status ==
Memory usage on this node: 27.4/64.0 GiB
Using FIFO scheduling algorithm.
Resources requested: 0/16 CPUs, 0/0 GPUs, 0.0/24.17 GiB heap, 0.0/8.3 GiB objects
Result logdir: /Users/rliaw/ray_results/cartpole-pg
Number of trials: 1/1 (1 TERMINATED)


2020-11-18 12:46:05,651	INFO tune.py:448 -- Total run time: 16.76 seconds (16.38 seconds for the tuning loop).
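The repetition above is the behavior under discussion: at higher verbosity, Tune prints a status table on every reporting step. A minimal sketch of that gating logic (hypothetical names only, not Ray Tune's actual implementation):

```python
# Illustrative sketch of verbosity-gated progress reporting.
# All function and parameter names here are hypothetical.

def status_table(memory_gib, total_gib, running, total):
    """Render one status block shaped like the ones in the log above."""
    return (
        "== Status ==\n"
        f"Memory usage on this node: {memory_gib}/{total_gib} GiB\n"
        f"Number of trials: {running}/{total} ({running} RUNNING)"
    )

def report(step, verbose):
    """verbose=0: silent; verbose=1: final result only (step == -1);
    verbose>=2: a status table on every reporting step."""
    if verbose >= 2 or (verbose == 1 and step == -1):
        return status_table(27.2, 64.0, 1, 1)
    return None

# At verbose>=2 every reporting step emits a table, which is exactly
# the repeated "== Status ==" blocks visible in the pasted output.
tables = [report(s, verbose=2) for s in range(3)]
silent = [report(s, verbose=0) for s in range(3)]
```

Testing the re-applied changes against both `rllib train` and `tune.run(...)` should confirm that this per-step repetition only appears at the verbosity levels where it is expected.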

Related issue number

Checks

  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

This reverts commit 8609e2d.

Signed-off-by: Richard Liaw <rliaw@berkeley.edu>
@richardliaw richardliaw reopened this Nov 18, 2020
@richardliaw richardliaw assigned richardliaw and ericl and unassigned richardliaw Nov 18, 2020
@richardliaw richardliaw merged commit 2bb6db5 into ray-project:master Nov 19, 2020
@richardliaw richardliaw deleted the rllib-verbose branch November 19, 2020 02:27