
Question on max-length #20

Closed

yiwan-rl opened this issue Feb 1, 2018 · 3 comments

@yiwan-rl commented Feb 1, 2018

Hi,

I think there is a problem with max_length. It is 20000 in your default setting, but gym's internal max episode length is 10000.

During testing, when the number of steps reaches 10000, player.done becomes True while player.max_length stays False. It is possible that player.info['ale.lives'] > 0 at this point, so the condition

if player.done and player.info['ale.lives'] > 0 and not player.max_length:

is satisfied.

In that branch you reset the environment. At this point self.was_real_done in EpisodicLifeEnv (environment.py) is True, so you actually reset the gym environment (which is correct). But your code doesn't treat this 10000-step episode as a terminated episode; instead, it assumes the episode hasn't terminated, because player.info['ale.lives'] > 0.
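
To make that concrete, here is a minimal sketch (not code from this repo; the env id and random policy are just placeholders) of how gym's internal step cap can report done while lives remain, which is the case the branch above then misreads as a life loss:

    import gym

    env = gym.make('PongNoFrameskip-v4')   # placeholder Atari env
    max_episode_length = 20000             # the repo default discussed above

    state = env.reset()
    for step in range(max_episode_length):
        # 4-tuple step API of the gym versions current when this issue was filed
        state, reward, done, info = env.step(env.action_space.sample())
        if done:
            # If gym's own step cap fires first, done is True while
            # info['ale.lives'] can still be > 0 and player.max_length is
            # still False, so the life-loss branch in test.py runs.
            print('done at step', step, 'lives left:', info.get('ale.lives'))
            break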

@dgriff777 (Owner)

Yeah, usually it's set to 10000, but I had set it to 20000 when training BeamriderNoFrameskip-v4, as 10000 was limiting the score within 30 mins of training and the time limit in gym is actually 100,000 for that env. Forgot to change it back.

This argument is there to stop a stuck game from running out all 100,000 steps before finishing, and instead to start a new game sooner, as a lot of games only need 10000 steps to complete.
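
If it helps, the per-game cap that gym registers can be read from the env spec, so the argument can be checked against it. A rough sketch, assuming a gym version from around the time of this issue (the attribute name may differ in later releases):

    import gym

    env = gym.make('BeamriderNoFrameskip-v4')
    # step cap gym's TimeLimit wrapper enforces for this game
    print(env.spec.max_episode_steps)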

If you look at environment.py:

        if lives < self.lives and lives > 0:
            # for Qbert sometimes we stay in lives == 0 condition for a few frames
            # so it's important to keep lives > 0, so that we only reset once
            # the environment advertises done.
            done = True
            self.was_real_done = False

This matches the same functionality.
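
(For anyone reading along: a rough sketch of how those lines usually sit inside an EpisodicLifeEnv-style wrapper, modeled on the baselines-style wrapper this resembles, not a verbatim copy of environment.py.)

    import gym

    class EpisodicLifeEnv(gym.Wrapper):
        def __init__(self, env):
            gym.Wrapper.__init__(self, env)
            self.lives = 0
            self.was_real_done = True

        def step(self, action):
            obs, reward, done, info = self.env.step(action)
            self.was_real_done = done
            lives = self.env.unwrapped.ale.lives()
            if lives < self.lives and lives > 0:
                # a life was lost but the game is not over: signal done to the
                # agent only, and remember the real episode has not ended
                done = True
                self.was_real_done = False
            self.lives = lives
            return obs, reward, done, info

        def reset(self):
            # only reset the underlying game when it actually ended;
            # otherwise take a no-op step to advance past the life loss
            if self.was_real_done:
                obs = self.env.reset()
            else:
                obs, _, _, _ = self.env.step(0)
            self.lives = self.env.unwrapped.ale.lives()
            return obs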

Sorry, I thought I had commented there that it was a quick, ugly hack that I meant to clean up. I had end-of-lives as episodic for both the env and the model before, but quickly changed it so that end-of-lives is episodic for the model only, to check performance; it matched fine either way.

@yiwan-rl (Author) commented Feb 1, 2018

Yes, your point is right.
If a user sets max-episode-length <= gym's internal max_episode_length, there is no problem. But if a user sets max-episode-length > gym's internal max_episode_length, gym's internal limit is always reached first and gym terminates the episode. At that point info['ale.lives'] > 0, done = True, and player.max_length = False.
Your code in test.py will execute:

        if player.done and player.info['ale.lives'] > 0 and not player.max_length:
            state = player.env.reset()
            player.eps_len += 2
            player.state = torch.from_numpy(state).float()
            if gpu_id >= 0:
                with torch.cuda.device(gpu_id):
                    player.state = player.state.cuda()

which is the branch that assumes the episode hasn't terminated.
It would be good if you could remind users to set max-episode-length <= gym's internal max_episode_length.
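
For example, a small guard at startup along these lines would do it (the argument and env handle names are placeholders, not a patch against the repo's main.py):

    # clamp the user-supplied limit to gym's registered per-game cap, if any
    spec_limit = getattr(env.spec, 'max_episode_steps', None)
    if spec_limit is not None and args.max_episode_length > spec_limit:
        print('max-episode-length {} exceeds gym limit {}; clamping'.format(
            args.max_episode_length, spec_limit))
        args.max_episode_length = spec_limit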

@dgriff777 (Owner)

Haha, yeah, I guess you could set it over the gym limit, but it will still reset the environment at the max-episode-length setting, which is fine for now. I guess I assumed it was obvious to set it less than or equal to the gym setting. Maybe I will change it, but more likely I will revert to the previous wrapper settings, as I was temporarily trying to be more like baselines for comparison, but I'm not a fan of them.
