Different result despite same input #9
Comments
Thank you!
Well, it depends on the goal. If you want to compare hyperparameters, then yes, it could make sense to train with several seeds and, for example, take the average or the best model, or just compute the variance (a sketch of this is below). But are the results really that different across runs?
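A minimal sketch of that multi-run approach, assuming the sklearn-crfsuite API for illustration (the thread doesn't show the actual training code; `X_train`, `y_train`, `X_dev`, `y_dev` and the hyperparameters are placeholders):

```python
# Hypothetical sketch: train the same configuration several times and
# summarize the spread of dev-set F1 across runs.
import statistics

import sklearn_crfsuite
from sklearn_crfsuite import metrics


def fit_and_score(X_train, y_train, X_dev, y_dev, max_iterations=100):
    # One training run with a fixed configuration, scored on a fixed dev set.
    crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=max_iterations)
    crf.fit(X_train, y_train)
    y_pred = crf.predict(X_dev)
    return metrics.flat_f1_score(y_dev, y_pred, average="weighted")


def f1_spread(X_train, y_train, X_dev, y_dev, n_runs=5):
    # Mean, best, and standard deviation over identical runs.
    scores = [fit_and_score(X_train, y_train, X_dev, y_dev) for _ in range(n_runs)]
    return statistics.mean(scores), max(scores), statistics.pstdev(scores)
```

If the standard deviation is tiny, the runs are effectively equivalent and picking any single model is fine.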
No, they're not different across runs:
How do I set the random seed? |
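There may be no seed parameter on the trainer itself, so the usual places to pin Python-side randomness are in your own pipeline (shuffling, sampling, feature extraction). A hedged sketch, with arbitrary seed values:

```python
# Hedged sketch: pin the randomness that lives in your own preprocessing.
# This cannot make the optimizer deterministic if the nondeterminism
# is inside the CRF library itself.
import random

random.seed(42)  # Python's stdlib RNG

try:
    import numpy as np
    np.random.seed(42)  # only relevant if numpy is used in preprocessing
except ImportError:
    pass

# str hashing (and therefore set iteration order) varies between processes
# unless PYTHONHASHSEED is fixed; it must be set in the shell *before*
# Python starts, e.g.:
#   PYTHONHASHSEED=0 python train.py
```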
@iamhuy @severinsimmler Hi, I encountered the same problem, but after I set |
I am also getting different results when running it in different environments. Does anyone have an idea what's going on?
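When runs on different machines disagree, it can help to record the interpreter and package versions alongside each result, so the environments can actually be compared. A small sketch; the package names are examples, adjust them to whatever CRF library you use:

```python
# Sketch: capture enough of the environment to compare two runs.
import platform
import sys
from importlib.metadata import PackageNotFoundError, version


def environment_report(packages=("python-crfsuite", "sklearn-crfsuite")):
    report = {"python": sys.version, "platform": platform.platform()}
    for name in packages:
        try:
            report[name] = version(name)
        except PackageNotFoundError:
            report[name] = "not installed"
    return report


print(environment_report())
```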
I tried to create several CRF instances and train them on the same training set with the same max_iteration parameter.
However, their results are different (I tested them on the same development set with F-measure).
Hope to see your response soon.
Thank you
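A sketch of the setup described above, for reference: it trains two identically configured models and counts token-level prediction disagreements, which separates genuine training nondeterminism from evaluation noise. The sklearn-crfsuite API is assumed for illustration (the actual training code isn't shown in the thread), and the data arguments are placeholders.

```python
# Sketch: do two identically configured runs actually disagree?
import sklearn_crfsuite


def count_disagreements(X_train, y_train, X_dev, max_iterations=100):
    # Train two CRFs with identical data and settings, then count
    # token-level differences between their dev-set predictions.
    crf_a = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=max_iterations)
    crf_b = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=max_iterations)
    crf_a.fit(X_train, y_train)
    crf_b.fit(X_train, y_train)
    pred_a, pred_b = crf_a.predict(X_dev), crf_b.predict(X_dev)
    return sum(
        tag_a != tag_b
        for sent_a, sent_b in zip(pred_a, pred_b)
        for tag_a, tag_b in zip(sent_a, sent_b)
    )
```

A nonzero count reproduces the report; a zero count would point at the evaluation step instead.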