Did you fine tune every model? #54

Closed · xuzhang5788 opened this issue Nov 29, 2020 · 7 comments

@xuzhang5788

Thank you so much for your great repo. In your demos, you always set epochs=100 for training. If we want to use some of the models, do we need to fine-tune the hyperparameters and retrain them?

@kexinhuang12345 (Owner)

Hi! I have found that, depending on the dataset and task, the models are somewhat hyperparameter-sensitive, so ideally you would run a hyperparameter search for your individual use case. We also provide a tutorial on Bayesian hyperparameter search in the demos: https://github.com/kexinhuang12345/DeepPurpose/blob/master/DEMO/Drug_Property_Pred-Ax-Hyperparam-Tune.ipynb
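
For concreteness, here is a minimal sketch of what such a search can look like: Ax's managed optimization loop wrapped around a DeepPurpose property-prediction model. It follows the general pattern of the linked notebook, but the data loading (`X_drug`, `y`), the `Morgan` encoding, the searched parameters, and the validation-MSE scoring are illustrative assumptions, not the notebook's exact code:

```python
# Sketch only: Ax's managed loop around a DeepPurpose property-prediction
# model. X_drug (a list of SMILES) and y (labels) are assumed loaded already.
from ax.service.managed_loop import optimize
from sklearn.metrics import mean_squared_error
from DeepPurpose import CompoundPred as models
from DeepPurpose.utils import data_process, generate_config

drug_encoding = 'Morgan'
train, val, test = data_process(X_drug=X_drug, y=y,
                                drug_encoding=drug_encoding,
                                split_method='random', frac=[0.7, 0.1, 0.2])

def evaluate(params):
    # Train one model with the sampled hyperparameters, then score it on the
    # validation split (lower MSE is better).
    config = generate_config(drug_encoding=drug_encoding,
                             train_epoch=20,  # small budget per trial
                             LR=params['LR'],
                             batch_size=params['batch_size'])
    model = models.model_initialize(**config)
    model.train(train, val, test)
    y_pred = model.predict(val)
    return mean_squared_error(val['Label'].values, y_pred)

best_parameters, values, experiment, ax_model = optimize(
    parameters=[
        {'name': 'LR', 'type': 'range', 'bounds': [1e-4, 1e-2], 'log_scale': True},
        {'name': 'batch_size', 'type': 'choice', 'values': [32, 64, 128]},
    ],
    evaluation_function=evaluate,
    objective_name='val_mse',
    minimize=True,
    total_trials=20,
)
print(best_parameters)
```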

@kexinhuang12345 (Owner)

The 100 epochs follow DeepDTA's implementation, but for small datasets I usually see convergence within 10-20 epochs.
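
In DeepPurpose, the epoch budget is just a field in the generated config, so it is easy to dial down for a small dataset. A minimal sketch (the `train_epoch` field name follows the repo's demos; the encodings here are arbitrary):

```python
from DeepPurpose.utils import generate_config

# Small datasets often converge well before the DeepDTA-style default of 100.
config = generate_config(drug_encoding='CNN',
                         target_encoding='CNN',
                         train_epoch=15)
```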

@xuzhang5788 (Author)

Thanks a lot

@xuzhang5788 (Author)

@kexinhuang12345 (Owner)

Hi, the data is from MIT AI Cures; you have to send an email to get it: https://www.aicures.mit.edu/forum

Check out the Open Data section at https://www.aicures.mit.edu/data

@pykao (Contributor)

pykao commented Jan 7, 2021

Hi Kexin, do you have any reference paper for Bayesian hyper-parameter search?

@kexinhuang12345 (Owner)

Hi, this is a good description of the Bayesian optimization used by the Ax platform: https://ax.dev/docs/bayesopt.html
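
For intuition, the loop those docs describe is: fit a probabilistic surrogate (a Gaussian process) to the trials seen so far, then pick the next trial by maximizing an acquisition function such as expected improvement. A self-contained toy sketch with scikit-learn on a 1-D function, illustrating the idea only (this is not Ax's internals):

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def objective(x):
    # Stand-in for the expensive "train a model, return validation loss" step.
    return np.sin(3 * x) + 0.1 * x ** 2

def expected_improvement(candidates, gp, best_y):
    # EI for minimization: expected amount by which a candidate beats the
    # best loss observed so far, under the GP posterior.
    mu, sigma = gp.predict(candidates, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(3, 1))   # initial random design
y = objective(X).ravel()

for _ in range(15):
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    grid = np.linspace(-2, 2, 500).reshape(-1, 1)
    x_next = grid[np.argmax(expected_improvement(grid, gp, y.min()))]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next)[0])

print('best x:', X[np.argmin(y)].item(), 'best y:', y.min())
```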
