Cross validation #2

Open
LCHCurtis opened this issue Jan 19, 2019 · 6 comments

@LCHCurtis

Does ideepe.py implement cross-validation to fine-tune the hyperparameters?

@xypan1232
Owner

No, we only keep 20% of the original training set as a validation set, then use grid search to find the best hyperparameters within a certain range. You can easily implement cross-validation yourself.
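For reference, a minimal sketch of that hold-out grid search, assuming X and y are NumPy arrays and that train_model and evaluate_auc are hypothetical placeholders (they are not functions from ideepe.py; substitute the repository's own training and evaluation routines, and adjust the example parameter ranges as needed):

import itertools
import numpy as np

def grid_search(X, y, train_model, evaluate_auc, seed=0):
    # Hold out 20% of the training data as a validation set
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(X))
    n_val = int(0.2 * len(X))
    val_idx, train_idx = idx[:n_val], idx[n_val:]

    # Example search ranges only; replace with the ranges you care about
    param_grid = {"learning_rate": [1e-4, 1e-3, 1e-2], "batch_size": [50, 100]}

    best_auc, best_params = -np.inf, None
    for values in itertools.product(*param_grid.values()):
        params = dict(zip(param_grid, values))
        model = train_model(X[train_idx], y[train_idx], **params)
        auc = evaluate_auc(model, X[val_idx], y[val_idx])
        if auc > best_auc:
            best_auc, best_params = auc, params
    return best_params, best_auc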

@LCHCurtis
Author

Thank you. I have another question. After I train the model, I run the prediction multiple times on the same dataset, but it always gives different AUCs. Could you please explain why this happens? Sorry, I am a beginner in this field.

@xypan1232
Owner

Could I ask how big the difference is? It may be due to the random seed.

@LCHCurtis
Author

LCHCurtis commented Jan 20, 2019

For the ALKBH5 dataset, I ran the training once, then used the same model and fed it the same testing dataset multiple times; I got AUCs ranging from 0.67 to 0.70.

@xypan1232
Owner

Can you add the following code before run_ideepe(args) in the file to fix the seed, then retrain the model and run the prediction again?

np.random.seed(0)
torch.manual_seed(0)
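For completeness, a slightly fuller seed-fixing sketch, assuming a NumPy/PyTorch setup; the cuDNN flags are optional additions for deterministic GPU kernels and are not part of the suggestion above:

import random
import numpy as np
import torch

random.seed(0)                    # Python's built-in RNG
np.random.seed(0)                 # NumPy RNG (shuffling, splits)
torch.manual_seed(0)              # PyTorch CPU RNG
torch.cuda.manual_seed_all(0)     # PyTorch GPU RNGs
torch.backends.cudnn.deterministic = True   # force deterministic cuDNN kernels
torch.backends.cudnn.benchmark = False

# ... then call run_ideepe(args) as before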

@LCHCurtis
Author

Problem solved. Thank you so much!
