
How to use the Experiments? #187

Closed
labarababa opened this issue Aug 29, 2019 · 6 comments
Labels
Question (Further information is requested)

Comments

@labarababa

Hello,

do you have any examples of how to use the (best) Experiments? E.g. saving the model, making predictions on a validation set, etc.

Kind Regards

@labarababa changed the title from "How to use the Experiments'?" to "How to use the Experiments?" on Aug 29, 2019
@HunterMcGushion
Owner

Thanks for opening this issue! I'm afraid I'm not understanding your question. Could you please be a bit more specific, or post some code to illustrate what you want to do?

Regarding validation predictions, all of the repo's examples show how HH creates validation sets automatically for you via the Environment kwargs cv_type and cv_params. HH also automatically makes predictions for the out-of-fold (OOF) datasets and evaluates those predictions, which you can see during Experiment logging. At the end of the Experiment/OptPro, all results (including OOF/Holdout/Test predictions) are automatically saved in the directory given to Environment's results_path, so you can find them there.
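
For instance, a bare-bones setup looks roughly like this (the dataset, paths, and model params below are placeholders; the repo's examples have complete, runnable versions):

```python
from hyperparameter_hunter import Environment, CVExperiment
import pandas as pd
from sklearn.datasets import load_breast_cancer
from xgboost import XGBClassifier

# Placeholder training data -- substitute your own DataFrame
data = load_breast_cancer()
train_df = pd.DataFrame(data=data.data, columns=data.feature_names)
train_df["diagnosis"] = data.target

env = Environment(
    train_dataset=train_df,
    results_path="HyperparameterHunterAssets",  # all results (incl. OOF predictions) are saved here
    target_column="diagnosis",
    metrics=["roc_auc_score"],
    cv_type="StratifiedKFold",                  # HH builds the validation folds for you
    cv_params=dict(n_splits=5, shuffle=True, random_state=32),
)

# Running the Experiment logs OOF evaluations per fold and saves the predictions
experiment = CVExperiment(
    model_initializer=XGBClassifier,
    model_init_params=dict(max_depth=3, n_estimators=100, subsample=0.5),
)
```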

If you're wondering about Holdout/Test datasets, rather than validation, the holdout_test_datasets_example should be helpful.
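
In short, holdout/test data is just passed to Environment alongside the training data, along these lines (the DataFrame names are placeholders):

```python
env = Environment(
    train_dataset=train_df,
    holdout_dataset=holdout_df,  # evaluated like the OOF data, since it has targets
    test_dataset=test_df,        # predictions are saved; no targets required
    results_path="HyperparameterHunterAssets",
    target_column="diagnosis",
    metrics=["roc_auc_score"],
    cv_type="StratifiedKFold",
    cv_params=dict(n_splits=5, shuffle=True, random_state=32),
)
```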

For saving models, each library has different methods of doing this, but you can make a custom lambda_callback to save your models. Here's a simple lambda_callback_example.

As far as using your saved Experiments, their results are all stored in the directory above, so you're free to use them however you would normally use your results: Ensembling, averaging predictions, checking the Leaderboard to compare performance, etc.
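
For example, assuming the default results layout under results_path, comparing Experiments can be as simple as reading the saved leaderboard (the exact path below is an assumption based on that default layout):

```python
import pandas as pd

# Assumes the default directory structure created under Environment's results_path
leaderboard = pd.read_csv("HyperparameterHunterAssets/Leaderboards/GlobalLeaderboard.csv")
print(leaderboard.head())
```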

Sorry I'm not quite getting your question. Please let me know if I missed anything, and thanks again for asking!

@HunterMcGushion added the Question label Aug 31, 2019
@labarababa
Author

For saving models, each library has different methods of doing this, but you can make a custom lambda_callback to save your models. Here's a simple lambda_callback_example.

That's it. So I can just write a function for saving the best model, use it together with a lambda_callback (on_run_end?), and dump my model with joblib.
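
Something along these lines, roughly? The hook signature and the experiment_callbacks wiring below are just my guesses from the docs, so the lambda_callback_example is the place to confirm the exact names:

```python
from os import makedirs
from joblib import dump
from hyperparameter_hunter import Environment, lambda_callback

makedirs("saved_models", exist_ok=True)

def save_model(experiment_id, model, _rep, _fold, _run):
    # Parameter names should be looked up as Experiment attributes by `lambda_callback`
    # (the lambda_callback_example has the exact names); `model` is HH's wrapper
    # around the fitted model for the current run.
    dump(model, f"saved_models/{experiment_id}_{_rep}_{_fold}_{_run}.joblib")

model_saver = lambda_callback(on_run_end=save_model)

env = Environment(
    train_dataset=train_df,  # placeholder: your training DataFrame
    results_path="HyperparameterHunterAssets",
    metrics=["roc_auc_score"],
    cv_type="StratifiedKFold",
    cv_params=dict(n_splits=5, shuffle=True, random_state=32),
    experiment_callbacks=[model_saver],  # attach the callback to every Experiment
)
```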

Thanks for your help.

@HunterMcGushion
Owner

@labarababa, I've just pushed an example detailing how to make a lambda_callback for model-saving.
You can find the new example in PR #198. The broken Travis build is due to an unrelated issue.

Would you mind checking out the new example, and letting me know if that helps answer your question?

@labarababa
Author

Yup, this solves the problem, and it's understandable and very detailed.
A very good addition to the examples as well.

Thank you for your efforts.

@HunterMcGushion
Owner

Thanks for the great suggestion! I'll close this issue once it's merged. If you have any other questions or ideas, I'd love to hear them! Thanks for your time!

@HunterMcGushion
Owner

Closed by #198
