
Implement Model Saving Mechanism #35

Closed
StepNeverStop opened this issue Jan 8, 2021 · 0 comments
Assignees: StepNeverStop
Labels: discussion (Need more discuss or analyse)

Comments

@StepNeverStop (Owner)

Include model saving based on:

  • training time cost
  • score/performance of the policy during training
  • training timestep
  • ...
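
A minimal sketch of how these criteria could be combined into a single save condition. The names here (`SaveTrigger`, `should_save`, `mark_saved`) and the thresholds are hypothetical placeholders, not part of the rls codebase:

```python
import time

class SaveTrigger:
    """Hypothetical helper that decides when to checkpoint a model,
    combining the criteria listed above: elapsed wall-clock training
    time, best evaluation score so far, and a timestep interval."""

    def __init__(self, time_interval_s=600, step_interval=10_000):
        self.time_interval_s = time_interval_s   # placeholder: save at most every 10 minutes
        self.step_interval = step_interval       # placeholder: or every 10k training steps
        self._last_save_time = time.time()
        self._last_save_step = 0
        self._best_score = float("-inf")

    def should_save(self, step, score=None):
        now = time.time()
        # 1) enough wall-clock training time has passed since the last save
        if now - self._last_save_time >= self.time_interval_s:
            return True
        # 2) the policy under training reached a new best evaluation score
        if score is not None and score > self._best_score:
            return True
        # 3) a fixed number of training timesteps has elapsed
        if step - self._last_save_step >= self.step_interval:
            return True
        return False

    def mark_saved(self, step, score=None):
        # Call after actually writing the checkpoint.
        self._last_save_time = time.time()
        self._last_save_step = step
        if score is not None:
            self._best_score = max(self._best_score, score)

# Usage inside a training loop (agent.save is a hypothetical saving call):
#     if trigger.should_save(step, score=eval_score):
#         agent.save(step)
#         trigger.mark_saved(step, score=eval_score)
```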
StepNeverStop self-assigned this Jan 8, 2021
StepNeverStop added the discussion (Need more discuss or analyse) label Jan 8, 2021
StepNeverStop added a commit that referenced this issue Jul 28, 2021
1. removed the SARL off-policy algorithm pd_ddpg, because it is not mainstream
2. updated README
3. removed `iql` and added the script `IndependentMA.py` instead to implement independent multi-agent algorithms
4. optimized summary writing
5. moved NamedDict from `rls.common.config` to `rls.common.specs`
6. updated the example config
7. updated `.gitignore`
8. added the property `is_multi` to identify whether a training task is SARL or MARL, for both Unity and Gym
9. reconstructed the inheritance relationships between algorithms and their superclasses
10. replaced `1.e+18` in the YAML files with a large integer literal, because a large integer rather than a float is wanted