
TRAINING_MIN_BATCH_SIZE does not seem to affect anything #6

Closed
etienne87 opened this issue Feb 7, 2017 · 2 comments

Comments

@etienne87

etienne87 commented Feb 7, 2017

In ThreadTrainer.py, I don't understand how the following lines are supposed to affect the batch size:

np.concatenate((x__, x_))
np.concatenate((r__, r_))
np.concatenate((a__, a_))

np.concatenate returns the merged array, but it does not modify x__ or x_ in place.

However, I do measure the TPS dropping. What sorcery is this?

[Time: 404] [Episode: 213 Score: -1.0642] [RScore: 7.5345 RPPS: 281] [PPS: 282 TPS: 4] [NT: 2 NP: 3 NA: 4]

(The PPS/TPS is low overall in my case because the game is a costly one running on a remote desktop.)

EDIT: I suggest modifying these lines to:

x__ = np.concatenate((x__, x_))
r__ = np.concatenate((r__, r_))
a__ = np.concatenate((a__, a_))

but this does not affect the TPS compared to the original code.
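For reference, a minimal standalone sketch (array shapes are made up for illustration) showing why the original lines are no-ops: np.concatenate returns a new array and never mutates its arguments, so the result must be assigned back.

```python
import numpy as np

# Stand-ins for the accumulated batch (x__) and the new experiences (x_).
x__ = np.zeros((2, 4))
x_ = np.ones((3, 4))

# Original (buggy) form: the merged array is returned and discarded.
np.concatenate((x__, x_))
print(x__.shape)  # still (2, 4); x__ was not modified

# Suggested fix: rebind the name to the merged array.
x__ = np.concatenate((x__, x_))
print(x__.shape)  # now (5, 4); the batch actually grows
```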

etienne87 mentioned this issue Feb 7, 2017
@mbz
Contributor

mbz commented Feb 7, 2017

Yep, that's a bug right there! Thanks for noticing it. That's what happens when we don't test all the configurations. Please submit a pull request if you can; otherwise I will go ahead and fix it.

@etienne87
Author

etienne87 commented Feb 7, 2017

okey dokey

etienne87 pushed a commit to etienne87/GA3C that referenced this issue Feb 8, 2017
mbz closed this as completed in 2b7c267 Feb 8, 2017