storing training data internally #22

Merged
merged 9 commits into master from inception
Aug 1, 2019

Conversation

@harbecke (Owner) commented on Jul 31, 2019

The training data is now only saved after the whole repeated_self_training script has run. There are several changes to the sample_config that you need to adopt; most importantly, the old train_samples_pool_size is now samples_per_model * num_data_models. samples_per_model sets how many samples each model generates, and num_data_models sets how many of the most recent models we use data from.
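
As a rough sketch of how the old and new keys relate (the exact spelling and layout of sample_config may differ; the values are made up for illustration):

```python
# Illustrative excerpt: the pool size is no longer set directly but is implied
# by two keys (names as described above, values invented for the example).
samples_per_model = 1000  # samples each model generates
num_data_models = 4       # how many of the most recent models we take data from

# Equivalent of the removed setting:
# train_samples_pool_size == samples_per_model * num_data_models == 4000
```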

The data is now saved under the same name as the model (in the data folder). You can start training with this data if you set load_initial_data=True (the stored data should not be required to have the exact amount of samples). Otherwise the initial training data is generated by a Random model.
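
A hedged sketch of the resulting file layout and the flag, from the caller's side (the helper name is hypothetical, not the actual code):

```python
import os

def stored_data_path(model_name, data_dir="data"):
    # Hypothetical helper: the training data file shares the model's name
    # and lives in the data folder.
    return os.path.join(data_dir, model_name)

# load_initial_data=True  -> start training from the stored samples above
# load_initial_data=False -> the initial samples come from a Random model
```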

Please let me know if this works on your system!

(The only important commit is 383431b; I just wanted to merge anyway and use a pull request.)

@harbecke requested review from simonant and cleeff on Jul 31, 2019, 12:13
@simonant (Collaborator) commented:

Creating puzzle data that does not already exist does not work for me. The problem seems to be that config cannot be deepcopied. When I add boardsize to config instead of puzzle_config, everything seems to work. I do not know how to fix this properly.
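
For context, a minimal sketch of the failure mode described above (the class and member names are assumptions; the point is only that deepcopy raises once config holds a non-copyable member, while setting the board size on config directly avoids the copy):

```python
import copy
import threading

class Config:
    """Toy stand-in for the real config object."""
    def __init__(self):
        self.lock = threading.Lock()  # members like this cannot be deepcopied

config = Config()

try:
    # Assumed original flow: build puzzle_config as a deep copy of config.
    puzzle_config = copy.deepcopy(config)
    puzzle_config.board_size = 5
except TypeError as err:
    print(f"deepcopy failed: {err}")
    # Reported workaround: put the board size on config itself.
    config.board_size = 5
```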

@harbecke merged commit fd971fb into master on Aug 1, 2019
@harbecke deleted the inception branch on Aug 1, 2019, 11:05