[RL-baseline] Model v4, experiment #3 #41

Open · wants to merge 2 commits into base: RL-baseline-v4
Conversation

ziritrion (Collaborator)

The policy network for model v4 (REINFORCE with Baseline) is essentially the same network as in v2, but the actor and critic heads each have an additional fully connected layer, similar to v3. This tweak was added in the hope of reproducing the initial gains in reward observed with model v3 while keeping the higher sustained reward observed in v2.
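
For illustration, below is a minimal PyTorch sketch of an actor-critic network with one extra fully connected layer in each head. The convolutional backbone, layer sizes, and input shape are placeholder assumptions and not the actual v2/v4 architecture; only the "extra FC layer per head" idea comes from the description above.

```python
import torch
import torch.nn as nn

class PolicyV4Sketch(nn.Module):
    """Illustrative actor-critic network: shared backbone plus actor and
    critic heads that each get one additional fully connected layer.
    Backbone, sizes, and input shape (4 stacked 96x96 frames) are assumed."""

    def __init__(self, n_actions=5, hidden=256):
        super().__init__()
        # Assumed shared convolutional body (stand-in for the v2 backbone).
        self.body = nn.Sequential(
            nn.Conv2d(4, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        feat = self._feature_dim((4, 96, 96))
        # Actor head with one additional FC layer.
        self.actor = nn.Sequential(
            nn.Linear(feat, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),   # the extra layer
            nn.Linear(hidden, n_actions),
        )
        # Critic head with the same extra FC layer; outputs a scalar value.
        self.critic = nn.Sequential(
            nn.Linear(feat, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),   # the extra layer
            nn.Linear(hidden, 1),
        )

    def _feature_dim(self, shape):
        # Run a dummy forward pass to infer the flattened feature size.
        with torch.no_grad():
            return self.body(torch.zeros(1, *shape)).shape[1]

    def forward(self, x):
        z = self.body(x)
        return torch.softmax(self.actor(z), dim=-1), self.critic(z)
```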

The action sets are the same as in Model v3. For this experiment, action set #2 is chosen:
```python
[0.0, 0.0, 0.0],   # no action
[0.0, 0.8, 0.0],   # throttle
[0.0, 0.0, 0.6],   # brake
[-0.9, 0.0, 0.0],  # left
[0.9, 0.0, 0.0],   # right
```
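
For context, here is a hedged sketch of how a sampled discrete index could be mapped back to the continuous [steering, throttle, brake] control vector for the environment. The function name, the Categorical sampling, and the policy interface are assumptions for illustration; only the five action vectors come from the list above.

```python
import numpy as np
import torch

# Action set #2: discrete index -> [steering, throttle, brake]
ACTIONS = np.array([
    [ 0.0, 0.0, 0.0],  # no action
    [ 0.0, 0.8, 0.0],  # throttle
    [ 0.0, 0.0, 0.6],  # brake
    [-0.9, 0.0, 0.0],  # left
    [ 0.9, 0.0, 0.0],  # right
], dtype=np.float32)

def select_action(policy, state):
    """Hypothetical helper: sample a discrete action from the policy and
    translate it into the continuous control vector for the environment."""
    probs, value = policy(state)                    # probs shape: (1, 5)
    dist = torch.distributions.Categorical(probs)
    idx = dist.sample()
    return ACTIONS[idx.item()], dist.log_prob(idx), value
```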

Results were disappointing. The running reward was negative for most of the experiment, peaking at 321 just before the 2k-episode mark and dropping quickly afterwards. Both the entropy and the loss function collapsed before the 12k-episode mark.

Results are below:
[screenshots of the training plots]

Sample video below:
https://user-images.githubusercontent.com/1465235/113128553-34593a80-921a-11eb-8e61-6e3150f119bd.mp4
