
MLP Model Training has some stochasticity - double check seeds etc. #29

Closed
j6mes opened this issue Jan 31, 2018 · 2 comments

Comments


j6mes commented Jan 31, 2018

No description provided.


j6mes commented Jan 31, 2018

Training itself is OK and repeatable; the remaining stochasticity might come from the preprocessing step, i.e. generating the vocabulary and TF-IDF vectors with NLTK.
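For anyone double checking the seeds, a minimal sketch of the usual seed pinning before preprocessing (this is only an illustration; the `fix_seeds` helper and the seed value are assumptions, not code from this repository):

```python
import os
import random

import numpy as np


def fix_seeds(seed=1234):
    """Pin the common sources of nondeterminism before building the vocab
    and TF-IDF vectors, so repeated preprocessing runs give identical output."""
    random.seed(seed)        # Python's own RNG (shuffling, sampling)
    np.random.seed(seed)     # NumPy RNG used by most vectorisers
    # Hash randomisation only changes if the interpreter is *started* with
    # PYTHONHASHSEED set; setting it here is a reminder, not a fix.
    os.environ["PYTHONHASHSEED"] = str(seed)


# Call once at the top of the preprocessing script, before tokenising with NLTK.
# Sorting the vocabulary before assigning indices also removes any dependence
# on set/dict iteration order.
fix_seeds(1234)
```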


j6mes commented Jan 31, 2018

There was actually no issue here with seeds or randomness, as the eval scripts ran fine.

The problem was that the best weights were not reloaded at the end of training before the scores were printed to stdout. If you run the eval script after training instead, the results are in line with the values reported in the paper.

I have edited the training script to load the best weights after training. @christos-c
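For context, a minimal PyTorch-style sketch of that pattern, i.e. checkpointing the best dev weights during training and restoring them before the final scores are reported. The function name, its arguments, and the checkpoint path are illustrative assumptions, not the repository's actual training code:

```python
import torch


def train_with_best_checkpoint(model, optimizer, loss_fn, train_batches,
                               evaluate_fn, epochs, ckpt_path="best_mlp.pt"):
    """Train for `epochs` epochs, checkpoint the best dev score, and reload
    those weights before returning, so the reported scores match the best model."""
    best_dev = float("-inf")
    for _ in range(epochs):
        model.train()
        for inputs, targets in train_batches:
            optimizer.zero_grad()
            loss = loss_fn(model(inputs), targets)
            loss.backward()
            optimizer.step()

        dev_score = evaluate_fn(model)  # e.g. dev-set accuracy
        if dev_score > best_dev:
            best_dev = dev_score
            torch.save(model.state_dict(), ckpt_path)  # keep the best weights

    # The fix described above: restore the best-performing weights instead of
    # reporting whatever the last training epoch happened to produce.
    model.load_state_dict(torch.load(ckpt_path))
    return model, best_dev
```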

@j6mes j6mes closed this as completed Jan 31, 2018