Did you fine tune every model? #54
Hi, I found that, depending on the dataset and task, the models are somewhat sensitive to their hyperparameters. I would suggest running a hyperparameter search for each individual use case. We also provide a tutorial on Bayesian hyperparameter search in the demo: https://github.com/kexinhuang12345/DeepPurpose/blob/master/DEMO/Drug_Property_Pred-Ax-Hyperparam-Tune.ipynb
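The per-use-case search suggested above can be sketched as a minimal, framework-free loop. The linked tutorial uses Bayesian optimization via the Ax platform; here, plain random search over a hypothetical search space (learning rate, hidden dimension, training epochs) stands in just to illustrate the shape of such a loop, and `evaluate` is a placeholder for training a model and returning its validation loss — none of these names come from DeepPurpose itself.

```python
import math
import random

# Hypothetical search space; the Ax tutorial tunes a similar set of knobs
# with Bayesian optimization instead of random sampling.
SEARCH_SPACE = {
    "learning_rate": (1e-4, 1e-2),    # sampled log-uniformly
    "hidden_dim": [64, 128, 256],     # categorical choice
    "train_epochs": (10, 100),        # integer range
}

def sample_config(rng):
    """Draw one random configuration from SEARCH_SPACE."""
    lo, hi = SEARCH_SPACE["learning_rate"]
    return {
        "learning_rate": math.exp(rng.uniform(math.log(lo), math.log(hi))),
        "hidden_dim": rng.choice(SEARCH_SPACE["hidden_dim"]),
        "train_epochs": rng.randint(*SEARCH_SPACE["train_epochs"]),
    }

def search(evaluate, n_trials=20, seed=0):
    """Return (best_config, best_loss) over n_trials random configurations.

    `evaluate` takes a config dict and returns a validation loss to minimize.
    """
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = sample_config(rng)
        loss = evaluate(cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss
```

In practice `evaluate` would wrap a full train/validate cycle on your own dataset; a Bayesian optimizer such as Ax replaces `sample_config` with a model-guided proposal, which typically needs far fewer trials.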
The 100 epochs follow DeepDTA's implementation, but for small datasets I usually see convergence within 10-20 epochs.
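The 10-20-epoch convergence mentioned above suggests early stopping rather than a fixed 100 epochs. A generic sketch (this helper is not part of DeepPurpose's API) that halts training once the validation loss stops improving:

```python
class EarlyStopper:
    """Signal a stop when validation loss hasn't improved for `patience` epochs."""

    def __init__(self, patience=5, min_delta=0.0):
        self.patience = patience      # epochs to wait without improvement
        self.min_delta = min_delta    # minimum decrease that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True if training should stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

Typical use: keep the epoch cap at 100 but `break` out of the training loop as soon as `stopper.step(val_loss)` returns True, so small datasets stop after their 10-20 useful epochs.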
Thanks a lot!
I want to try https://github.com/kexinhuang12345/DeepPurpose/blob/master/DEMO/Drug_Property_Pred-Ax-Hyperparam-Tune.ipynb, but I can't find the data.
Hi, the data is from MIT AI Cures; you have to send an email to get it: https://www.aicures.mit.edu/forum. Check out the open data section at https://www.aicures.mit.edu/data.
Hi Kexin, do you have any reference paper for Bayesian hyper-parameter search? |
Hi, this is a good description of the BO used by the Ax platform: https://ax.dev/docs/bayesopt.html
Thank you so much for your great repo. In your demos, you always set epochs=100 for training. If we want to use some of the models, do we need to fine-tune the hyperparameters and retrain them?