I ran your code on DrugOOD with 10 different seeds (0-9), but the results are significantly lower than those reported in your paper. For example, the ROC-AUC values on ec50_assay and ec50_size are all below the reported numbers.
I have used the same hyper-parameters provided in the Appendix. Does that mean your proposed method is sensitive to the random seed?
Thanks!
On the DrugOOD dataset, we searched over all hyper-parameters with grid search. Here is a demo script:
#!/bin/bash
# candidate values for each hyper-parameter (bash arrays are space-separated, not comma-separated)
hyper_list1=(choice1 choice2 ...)
hyper_list2=(choice1 choice2 ...)

# grid search over all hyper-parameter combinations
for p1 in "${hyper_list1[@]}"
do
    for p2 in "${hyper_list2[@]}"
    do
        # run each configuration with three random seeds
        python run.py --random_seed 0 --h1 "$p1" --h2 "$p2"
        python run.py --random_seed 1 --h1 "$p1" --h2 "$p2"
        python run.py --random_seed 2 --h1 "$p1" --h2 "$p2"
    done
done
The choices of hyper-parameters are detailed in the appendix.
We repeated the experiments with different random seeds for each group of hyper-parameters, and the results reported in the paper are from the best group. We also observed that, because the DrugOOD dataset is small, the results tend to have a relatively large variance. However, since we ran many hyper-parameter groups, the best group is typically one whose results are consistently a bit better across seeds.
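For clarity, here is a minimal sketch (not part of the released code) of how the per-seed ROC-AUC values from the grid search could be aggregated to pick the best hyper-parameter group; the result-file layout, file names, and the "test_auc" key below are assumptions for illustration only:

# Hypothetical aggregation sketch: choose the hyper-parameter group with the
# highest mean ROC-AUC across seeds. Paths and JSON keys are assumptions.
import json
from itertools import product
from statistics import mean, stdev

hyper_list1 = ["choice1", "choice2"]   # same candidates as in the grid search
hyper_list2 = ["choice1", "choice2"]
seeds = [0, 1, 2]

def load_auc(p1, p2, seed):
    # Assumes each run wrote its test ROC-AUC to results/{p1}_{p2}_{seed}.json
    with open(f"results/{p1}_{p2}_{seed}.json") as f:
        return json.load(f)["test_auc"]

best = None
for p1, p2 in product(hyper_list1, hyper_list2):
    aucs = [load_auc(p1, p2, s) for s in seeds]
    group = (mean(aucs), stdev(aucs), p1, p2)
    if best is None or group[0] > best[0]:
        best = group

print(f"best group: h1={best[2]} h2={best[3]} "
      f"ROC-AUC = {best[0]:.4f} +/- {best[1]:.4f}")

Selecting by the mean over seeds (rather than a single seed) is what makes the reported numbers less sensitive to any one random seed, which may explain the gap you observed when fixing seeds 0-9 with a single hyper-parameter setting.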