Hi,

I tried running monobeast.py on different environments, including LunarLander-v2 and PongNoFrameskip-v4, but the model doesn't learn anything. I tried the hyperparameters for Pong written in the README.md, but still nothing: the mean expected return does not move. I checked the gradients; they are non-zero, and the weights are being updated. However, during testing, I noticed that the agent always chooses the same action.

I am also getting A LOT of NaNs in the log file for mean_expected_return; is this normal?

Any help would be appreciated, thanks!
I have to acknowledge we never validated monobeast against the Atari suite. It cannot (easily) be run with the right batch size, so it's tricky to use the known-good hyperparameters. I believe its design is sound in principle (CPU actors, a single GPU learner, PyTorch shared-memory tensors for inter-process communication), but there are lots of things that can go wrong with RL.
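In case it helps to see that pattern concretely, here is a minimal, hypothetical sketch of the design just described. It is not the actual monobeast.py code; the buffer names and shapes are made up. Actor processes fill pre-allocated shared-memory tensors, and a learner would read them without any copying:

```python
import torch
import torch.multiprocessing as mp

def actor(buffers, index):
    # Each actor writes into its own slot of the pre-allocated rollout
    # buffers. In the real setup this data would come from environment steps.
    buffers["obs"][index].normal_()      # placeholder for real observations
    buffers["reward"][index].fill_(1.0)  # placeholder for real rewards

def main():
    num_actors, unroll_length = 4, 80
    # share_memory_() moves the tensor storage into shared memory, so child
    # processes operate on the same underlying buffer instead of copies.
    buffers = {
        "obs": torch.zeros(num_actors, unroll_length, 4, 84, 84).share_memory_(),
        "reward": torch.zeros(num_actors, unroll_length).share_memory_(),
    }
    procs = [mp.Process(target=actor, args=(buffers, i)) for i in range(num_actors)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    # A learner would now batch these rollouts and take a gradient step on GPU.
    print(buffers["reward"].sum())  # 320.0: every actor wrote its slot

if __name__ == "__main__":
    main()
```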
Re the NaNs: I don't know what mean_expected_return is; I'm guessing you're seeing mean_episode_return? If so, we made the (perhaps questionable) decision to log NaN whenever no episode ends within the batch. This happens "naturally" via division by zero (the "mean" over zero episodes). This implementation logs per-rollout numbers on the learner side, and not all rollouts contain finished episodes. Since we also want to log speed, frame counts, etc., we felt we had to log something for mean_episode_return, so we settled on the somewhat "natural" NaN. Many plotting libraries ignore these values, which is what you'd want to do at that point. This approach has the obvious downside of scaring researchers, who are typically trained to associate NaNs with "something went very wrong".
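For illustration only (this is not the repository's logging code), the NaN falls out of taking a mean over an empty collection of finished episodes:

```python
import numpy as np

# Rollout A happens to contain two finished episodes; rollout B contains none.
rollout_a_returns = [21.0, 19.0]
rollout_b_returns = []

print(np.mean(rollout_a_returns))  # 20.0
print(np.mean(rollout_b_returns))  # nan, plus a "Mean of empty slice" RuntimeWarning
```

Plotting libraries such as matplotlib simply leave a gap at NaN values, which is the "ignore them" behavior mentioned above.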