introduce recurrent sac-discrete #1

Merged
merged 6 commits into master on Mar 1, 2022

Conversation

@twni2016 (Owner) commented Mar 1, 2022

This PR introduces the recurrent SAC-discrete algorithm for POMDPs with discrete action spaces.
The code is heavily based on the open-sourced SAC-discrete implementation https://github.com/ku2482/sac-discrete.pytorch/blob/master/sacd/agent/sacd.py and the SAC-discrete paper https://arxiv.org/abs/1910.07207.
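For context, here is a minimal sketch of the discrete-action SAC losses as described in that paper and reference code (my own summary, not the exact code in this repo; tensor names and shapes are illustrative). Because the action space is discrete, the expectations over actions are computed exactly from the categorical policy instead of via reparameterized sampling:

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: q1, q2 are [batch, |A|] Q-values from the two critics,
# logits are the actor's [batch, |A|] outputs.
def sacd_actor_and_alpha_loss(q1, q2, logits, log_alpha, target_entropy):
    probs = F.softmax(logits, dim=-1)          # pi(a|s)
    log_probs = F.log_softmax(logits, dim=-1)  # log pi(a|s)
    alpha = log_alpha.exp()
    min_q = torch.min(q1, q2)

    # Exact expectation over the discrete action set (no sampling needed).
    actor_loss = (probs * (alpha.detach() * log_probs - min_q)).sum(dim=-1).mean()

    # Temperature loss: push the policy entropy towards target_entropy.
    entropy = -(probs * log_probs).sum(dim=-1)
    alpha_loss = -(log_alpha * (target_entropy - entropy).detach()).mean()
    return actor_loss, alpha_loss
```

The recurrent variant conditions the actor and critics on an RNN over the observation history, but the loss structure above should be unchanged.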

We provide two sanity checks on classic gym discrete-control environments: CartPole-v0 and LunarLander-v2. The commands for running the Markovian and recurrent SAC-discrete algorithms are:

# CartPole
python3 policies/main.py --cfg configs/pomdp/cartpole/f/mlp.yml --target_entropy 0.7 --cuda -1
# CartPole-V
python3 policies/main.py --cfg configs/pomdp/cartpole/v/rnn.yml --target_entropy 0.7 --cuda 0
# LunarLander
python3 policies/main.py --cfg configs/pomdp/lunalander/f/mlp.yml --target_entropy 0.7 --cuda -1
# LunarLander-V
python3 policies/main.py --cfg configs/pomdp/lunalander/v/rnn.yml --target_entropy 0.5 --cuda 0

where target_entropy sets the ratio of the target entropy to the maximum policy entropy, i.e. the target entropy is ratio * log(|A|).
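Concretely, for CartPole-v0 (|A| = 2) the flag --target_entropy 0.7 corresponds to a target entropy of 0.7 * log 2. A quick sanity check of this mapping (based on the description above, not copied from the code):

```python
import math

def target_entropy(ratio, num_actions):
    # target entropy = ratio * log(|A|), where log(|A|) is the entropy
    # of a uniform policy over the discrete action set
    return ratio * math.log(num_actions)

print(target_entropy(0.7, 2))  # CartPole-v0:    ~0.485
print(target_entropy(0.5, 4))  # LunarLander-v2: ~0.693
```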

  • CartPole: Markovian SAC-discrete is quite sensitive to target_entropy but can solve the task with max return 200:

[learning-curve screenshot]

  • CartPole-V: recurrent SAC-discrete is robust to target_entropy and can solve the task with max return 200 within 10 episodes:

[learning-curve screenshot]

  • LunarLander: Markovian SAC-discrete is sensitive to target_entropy but can solve the task with return over 200:

[learning-curve screenshot]

  • LunarLander-V: recurrent SAC-discrete is very sensitive to target_entropy but can nearly solve the task for one target_entropy value:

[learning-curve screenshot]

@twni2016 twni2016 merged commit b3b928e into master Mar 1, 2022
@hai-h-nguyen

Hi, I ran this command:
python3 policies/main.py --cfg configs/pomdp/cartpole/f/mlp.yml --target_entropy 0.7 --cuda -1
Even though it seems to solve the domain, rl_loss/alpha, rl_loss/policy_loss, rl_loss/qf1_loss, and rl_loss/qf2_loss increase/decrease very quickly and do not seem to stop. Is something weird going on here?

@twni2016 (Owner, Author) commented Apr 4, 2022

Hi,

Yes, I did observe that Markovian SAC-discrete is unstable on CartPole across seeds. You may try disabling auto-tuning of alpha and grid-searching over a fixed alpha, using

--noautomatic_entropy_tuning --entropy_alpha 0.1

I don't have much insight into it, though.
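If it helps, a small launcher sketch for that grid search (the alpha grid and config path are just examples; the two flags are the ones mentioned above):

```python
import subprocess

# Example alpha grid; adjust to your compute budget.
for alpha in [0.01, 0.05, 0.1, 0.3]:
    subprocess.run([
        "python3", "policies/main.py",
        "--cfg", "configs/pomdp/cartpole/f/mlp.yml",
        "--noautomatic_entropy_tuning",
        "--entropy_alpha", str(alpha),
        "--cuda", "-1",
    ], check=True)
```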

@hai-h-nguyen commented Nov 2, 2022

Hi, when running recurrent SACD on my domains, there is often a period when the agent doesn't seem to change much (the learning curve is just flat, e.g. from 0 to 15k timesteps in the figure below). Do you have any insight?
[learning-curve screenshot]

@twni2016 (Owner, Author) commented Nov 2, 2022

It seems that your task has sparse rewards. I guess the entropy is high at the early learning stage, and as training proceeds the entropy decreases to a threshold where the agent can exploit its "optimal" behavior to receive some positive rewards.
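One way to check this hypothesis is to log the policy entropy next to the return. A generic sketch, assuming you can pull the actor's logits for a batch of observations (names are not from this repo):

```python
import torch.nn.functional as F

def mean_policy_entropy(logits):
    # Mean entropy of the categorical policy over a batch of observations.
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    return -(probs * log_probs).sum(dim=-1).mean()
```

If the logged entropy stays close to log(|A|) throughout the flat region, the policy is still nearly uniform and the plateau is what you would expect until alpha comes down.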

@hai-h-nguyen

Yeah, that's right. However, the agent does experience rewards during the 0-10k period, but the policy gradient doesn't seem to be large, and the policy during evaluation didn't change much, often not getting any success at all, even though the reward is not that sparse and a positive reward is not hard to get.

@RobertMcCarthy97

@hai-h-nguyen Did you ever find a solution to these issues? I am experiencing somewhat similar behaviour.

@hai-h-nguyen

Hi @RobertMcCarthy97, it might help to start alpha at a smaller value, like 0.01, rather than the starting value in the code (1.0). That makes the agent explore less initially.
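In code that usually amounts to changing the initial value of the learnable log-alpha, roughly like this (variable names are illustrative; check where the repo actually creates its temperature parameter):

```python
import math
import torch

init_alpha = 0.01  # instead of 1.0
log_alpha = torch.tensor(math.log(init_alpha), requires_grad=True)
alpha = log_alpha.exp()  # weights the entropy bonus in the actor/critic losses
```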
