Add slimevolleygym into dizoo #17
@LuciusMos PPO (with RNN and without RNN), on-policy training.
To clarify, let's use league training with 3 roles (MA, ME, LE) rather than pure self-play training. There is also a built-in AI; do not use it for training. After training is finished, we can evaluate the agents against the built-in AI and compare the scores with the report above.
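For reference, a minimal hypothetical sketch of how the three roles might select opponents, assuming AlphaStar-style roles (MA = main agent, ME = main exploiter, LE = league exploiter); the names and the mixing probability are illustrative, not DI-engine's actual implementation:

```python
# Hypothetical sketch: role-based opponent selection in a league.
# Pool names, role tags, and the 0.5 mixing probability are illustrative only.
import random

def choose_opponent(role, main_agents, league):
    """MA mixes self-play with league opponents; ME targets the current
    main agents; LE samples from the whole historical league."""
    if role == "MA":    # main agent: self-play mixed with league games
        pool = main_agents if random.random() < 0.5 else league
    elif role == "ME":  # main exploiter: attacks the current main agents
        pool = main_agents
    else:               # "LE" league exploiter: exploits the whole league
        pool = league
    return random.choice(pool)
```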
Naive PPO and self-play PPO have worked well in #23, and we will push the league demo in a follow-up PR.
@PaParaZz1 Could you please create an issue for that follow-up PR?
OK, I will create a new issue about league training for the slime volleyball env this Tuesday.
slimevolleygym is a pong-like physics game env from the open-source community. It follows the standard OpenAI Gym interface. Naive PPO self-play achieves a score of -0.371 ± 1.085 in the SlimeVolley-v0 env against the built-in AI, according to the upstream report. It would be good to benchmark OpenDILab's league training and see whether it can reach higher scores.
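For context, a minimal sketch of driving the env through the standard Gym interface (assumes `slimevolleygym` and `gym` are installed; a random policy stands in for a trained agent):

```python
# Minimal sketch: one episode of SlimeVolley-v0 against the built-in AI,
# using the classic Gym API (reset -> obs, step -> obs/reward/done/info).
import gym
import slimevolleygym  # noqa: F401  # importing registers the SlimeVolley-* envs

env = gym.make("SlimeVolley-v0")
obs = env.reset()
done = False
total_reward = 0.0
while not done:
    action = env.action_space.sample()  # random policy placeholder
    obs, reward, done, info = env.step(action)
    total_reward += reward
env.close()
print("episode return vs built-in AI:", total_reward)
```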