
[RLlib] Create a set of performance benchmark tests to run nightly. #19945

Merged: 20 commits, Nov 8, 2021

Conversation

gjoliver (Member) commented Nov 1, 2021

Why are these changes needed?

  1. Track the performance of the most important algorithms nightly.
    These tests run for a fixed amount of time every day without pass/fail criteria;
    we then record the average reward achieved and the throughput over time.

  2. Run all of the RLlib nightly and weekly tests in tf2 framework too.

  3. Write performance metrics to the output JSON file.
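The metrics from point 3 could be serialized as a small JSON payload. A minimal sketch, assuming a hypothetical output path and an illustrative schema (the `perf_metrics` wrapper and field names here are assumptions, not RLlib's actual output format):

```python
import json
import os
import tempfile

# Hypothetical metrics payload; experiment name and fields are illustrative.
metrics = {
    "sac-halfcheetah": {
        "episode_reward_mean": 9500.0,  # average reward achieved
        "throughput": 1523.4,           # timesteps per second
    }
}

# Write the metrics into the release-test output JSON file.
out_path = os.path.join(tempfile.mkdtemp(), "release_test_out.json")
with open(out_path, "w") as f:
    json.dump({"perf_metrics": metrics}, f)
```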

Related issue number

Checks

  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • [*] Unit tests
    • [*] Release tests
    • This PR is not tested :(

actor_learning_rate: 0.0003
critic_learning_rate: 0.0003
entropy_learning_rate: 0.0003
num_workers: 0
gjoliver (Member, Author):

why do we not use any workers for SAC halfcheetah?

gjoliver (Member, Author) commented Nov 3, 2021

Hi Sven, I'd love your opinion on the set of tests I want to run nightly.
The problem with running TF2 nightly today is that it's really slow, so these tests take a long time to finish and time out.
I need to play with auto-scaling to see if they can be parallelized.

@gjoliver force-pushed the rllib-nightly branch 3 times, most recently from 0c0ca1c to b51e5fa on November 5, 2021 at 06:50
@gjoliver gjoliver changed the title Create a core set of algorithms tests to run nightly. [RLlib] Create a set of performance benchmark tests to run nightly. Nov 5, 2021
@sven1977 sven1977 self-assigned this Nov 5, 2021
@@ -0,0 +1,131 @@
apex-breakoutnoframeskip-v4:
Contributor:

Could we sort these alphabetically?

desired_throughput = None
# TODO(Jun): Stop checking throughput for now.
# desired_throughput = checks[experiment]["min_throughput"]
desired_throughput = checks[experiment]["min_throughput"]
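The check around these lines might look like the following sketch, where a missing entry or a `None` threshold disables the check; the helper name and dict layout are assumptions for illustration, not the actual release-test code:

```python
def passes_throughput(checks, experiment, measured):
    # A missing experiment entry or a None threshold disables the check.
    desired = checks.get(experiment, {}).get("min_throughput")
    if desired is None:
        return True
    return measured >= desired

# Illustrative checks dict keyed by experiment name.
checks = {"sac-halfcheetah": {"min_throughput": 100.0}}
```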
Contributor:
Maybe for a later PR: Should we measure env-throughput and learner-throughput separately?

else:
keys.append(re.sub("^(\\w+)-", "\\1-torch-", k))
experiments[keys[0]] = e
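As a standalone sketch, the `re.sub` above inserts a framework tag right after the leading algorithm name in an experiment key (the example keys below are illustrative):

```python
import re

def torch_key(key: str) -> str:
    # Insert "-torch-" after the leading algorithm name,
    # e.g. "sac-halfcheetah-v3" -> "sac-torch-halfcheetah-v3".
    return re.sub(r"^(\w+)-", r"\1-torch-", key)
```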
# Generate `checks` dict for all experiments (tf, tf2 and/or torch).
Contributor:
Just a question: this means we'll include tf2 in our weekly learning tests as well as our nightly multi-GPU tests, correct? I think the multi-GPU one would fail, since RLlib + tf2 does not support multi-GPU yet. Could you check this? We should probably have a way to specify which frameworks to test in the YAML files.
Like:

...
  config:
    frameworks: [tf, torch]  # instead of "framework": expands `experiments_to_run`, then removes the "frameworks" key from the struct
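One way the suggested `frameworks` key could expand into per-framework experiments, sketched with a hypothetical helper (names and dict layout are assumptions, not the actual release-test code):

```python
import copy

def expand_frameworks(name, experiment):
    # Pop the "frameworks" list and emit one copy of the experiment per
    # framework, each carrying a scalar "framework" key instead.
    frameworks = experiment["config"].pop("frameworks", ["tf"])
    expanded = {}
    for fw in frameworks:
        e = copy.deepcopy(experiment)
        e["config"]["framework"] = fw
        expanded["%s-%s" % (name, fw)] = e
    return expanded

exp = {"config": {"frameworks": ["tf", "torch"], "num_workers": 0}}
expanded = expand_frameworks("sac-halfcheetah", exp)
```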

sven1977 (Contributor) left a comment:

Just some questions about things that may break with this change wrt nightly multi-gpu tests and weekly release tests.

gjoliver (Member, Author) commented Nov 6, 2021

All done, thanks for the thoughtful review.
I've also updated all the `framework` configs to `frameworks`, so we can specify a list of frameworks to test with.
PTAL, thanks.

sven1977 (Contributor) left a comment:

Thanks for this cool PR. Loving it! :)

@sven1977 sven1977 merged commit d8a61f8 into ray-project:master Nov 8, 2021