
Add slimevolleygym into dizoo #17

Closed
5 of 11 tasks
zxzzz0 opened this issue Aug 2, 2021 · 5 comments · Fixed by #23
Labels: env (Questions about RL environment), P0 (Issue that must be fixed in short order)

Comments


zxzzz0 commented Aug 2, 2021

  • I have marked all applicable categories:
    • exception-raising bug
    • RL algorithm bug
    • system worker bug
    • system utils bug
    • code design/refactor
    • documentation request
    • new feature request
  • I have visited the readme and doc
  • I have searched through the issue tracker and pr tracker
  • I have mentioned version numbers, operating system and environment, where applicable (N/A)

slimevolleygym is a Pong-like physics game environment from the open-source community that follows the standard OpenAI Gym interface. According to its report, naive self-play PPO achieves a score of -0.371 ± 1.085 against the built-in AI in the SlimeVolley-v0 env.
It would be good to benchmark OpenDILab's league training and see whether it can achieve higher scores.
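Since the env follows the standard OpenAI Gym interface, integrating it means driving the usual reset/step cycle. The real env would come from `import slimevolleygym; gym.make("SlimeVolley-v0")`; the stub class below is a hypothetical stand-in so this sketch is self-contained, and the 12-dim observation, 3 binary actions (forward, backward, jump), and sparse ±1 reward reflect my understanding of the slimevolleygym README, not verified details:

```python
import random

class StubSlimeVolleyEnv:
    """Minimal stand-in mimicking the standard Gym interface.
    The real env is gym.make("SlimeVolley-v0") after importing slimevolleygym."""

    def reset(self):
        self.t = 0
        return [0.0] * 12  # SlimeVolley observations are 12-dim vectors

    def step(self, action):
        self.t += 1
        obs = [random.random() for _ in range(12)]
        reward = 0.0         # the real env gives +1/-1 only when a point is scored
        done = self.t >= 10  # stub cutoff; real episodes end on a step limit or lost lives
        return obs, reward, done, {}

def rollout(env, policy):
    """Run one episode and return the total reward (agent points minus opponent points)."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, info = env.step(policy(obs))
        total += reward
    return total

# A random policy over the 3 binary controls (forward, backward, jump).
episode_return = rollout(StubSlimeVolleyEnv(),
                         lambda obs: [random.randint(0, 1) for _ in range(3)])
```

Any DI-engine env wrapper for dizoo would sit on top of exactly this loop.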

PaParaZz1 added the P0 and env labels Aug 3, 2021
PaParaZz1 (Member):

@LuciusMos PPO (with RNN and without RNN), on-policy training, SlimeVolley-v0 env

zxzzz0 (Author) commented Aug 3, 2021

To clarify, let's use league training with three roles (MA, ME, LE) rather than pure self-play training. The env also ships a built-in AI; do not use it for training. After training is finished, we can evaluate the agents against the built-in AI and compare the scores with the report above.
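To compare against the reported -0.371 ± 1.085, the per-episode evaluation scores against the built-in AI need to be aggregated into the same mean ± std form. A minimal sketch; the episode returns below are made-up placeholders, not real results:

```python
import statistics

def summarize_eval(episode_returns):
    """Aggregate per-episode scores into the mean/std pair used by the
    slimevolleygym report (e.g. -0.371 +/- 1.085)."""
    mean = statistics.mean(episode_returns)
    std = statistics.pstdev(episode_returns)  # population std, matching numpy's default
    return mean, std

# Hypothetical returns from 5 evaluation episodes (each bounded by the lives per episode).
mean, std = summarize_eval([-1.0, 0.0, 2.0, -3.0, 1.0])
```

In practice one would run many more evaluation episodes so the interval is meaningful.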

PaParaZz1 (Member):

Naive PPO and self-play PPO have worked well in #23; we will push the league demo in a following PR.

zxzzz0 (Author) commented Oct 9, 2021

@PaParaZz1 Could you please create an issue for the following PR?

PaParaZz1 (Member):

> @PaParaZz1 Could you please create an issue for the following PR?

OK, I will create a new issue about league training for the slime volleyball env this Tuesday.
