Cross validation #2
No, we only keep 20% of the original training set as a validation set, then use grid search to find the best hyperparameters within a certain range. You can easily implement it.
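The held-out-validation grid search described above could be sketched roughly as below. This is not the repository's actual code; the `train_fn`/`score_fn` callbacks and the helper name are hypothetical stand-ins for the model's training and AUC-scoring routines.

```python
import itertools
import random

def grid_search(train_data, param_grid, train_fn, score_fn,
                val_frac=0.2, seed=0):
    """Hold out val_frac of the training set, then pick the
    hyperparameter combination with the best validation score
    (larger is better). train_fn/score_fn are hypothetical
    callbacks supplied by the caller."""
    rng = random.Random(seed)
    data = list(train_data)
    rng.shuffle(data)
    n_val = int(len(data) * val_frac)
    val, train = data[:n_val], data[n_val:]

    best_score, best_params = float("-inf"), None
    keys = sorted(param_grid)
    # Try every combination of hyperparameter values in the grid.
    for values in itertools.product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        model = train_fn(train, params)
        score = score_fn(model, val)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score
```

After the best combination is found, the model would typically be retrained on the full training set with those hyperparameters.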
Thank you. I have another question: after I train the model, I run the prediction multiple times on the same dataset, but it always gives different AUCs. Could you please explain why this happens? Sorry, I am a beginner in this field.
Could I ask how big the difference is? It may be because of the random seed.
For the ALKBH5 dataset, I run the training once, then feed the same testing dataset to the same model multiple times, and I get AUCs ranging from 0.67 to 0.70.
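Run-to-run AUC variation at prediction time usually comes from an unseeded random number generator somewhere in the pipeline (or, in frameworks like PyTorch, from dropout left active because the model was not switched to evaluation mode). A minimal sketch of fixing the seeds, assuming plain Python and NumPy randomness; `noisy_predict` is a hypothetical stand-in for a stochastic prediction step:

```python
import random
import numpy as np

def set_seed(seed: int) -> None:
    """Fix the RNG seeds so repeated runs on the same test set
    give identical results. With PyTorch one would additionally
    call torch.manual_seed(seed) and put the model in eval() mode
    to disable dropout at prediction time."""
    random.seed(seed)
    np.random.seed(seed)

def noisy_predict(x, seed=None):
    """Hypothetical stochastic prediction step: without a fixed
    seed, each call returns slightly different outputs."""
    if seed is not None:
        set_seed(seed)
    return [float(xi + np.random.normal(0.0, 0.1)) for xi in x]
```

With the same seed, two prediction runs return byte-identical outputs, so the AUC no longer drifts between runs.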
Can you add the code?
Problem solved. Thank you so much.
Does ideepe.py implement cross-validation to fine-tune the hyperparameters?