@WaqasSultani Hi, thank you for the awesome code base. I'm trying to train the model, but I noticed some minor differences between your code and the paper.
Learning rate: in your paper you mention using lr=0.001, but here in the code you set lr=0.01.
Model weight L2 regularization: in your paper you specify lambda3=0.01, but here you set it to 0.001.
Did you use learning rate decay, or a constant learning rate for all 20K iterations? Thanks.
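To make the question concrete, the two schedules being compared can be sketched in plain Python. This is just an illustrative sketch: the step-decay helper, its decay factor, and its decay interval are my own assumptions, not values from the paper or the repo, and `lambda3` here mirrors the L2 weight-regularization coefficient discussed above.

```python
import numpy as np

def step_decay_lr(base_lr, step, decay_factor=0.1, decay_every=10000):
    """Hypothetical step decay: scale base_lr by decay_factor every
    decay_every iterations (factor and interval are illustrative)."""
    return base_lr * (decay_factor ** (step // decay_every))

def l2_penalty(weights, lambda3=0.001):
    """L2 weight regularization term, lambda3 * sum of squared weights
    (lambda3=0.01 in the paper vs. 0.001 in the code)."""
    return lambda3 * sum(float(np.sum(w ** 2)) for w in weights)

# Constant schedule: lr stays at base_lr for all 20K iterations.
# Step-decayed schedule: lr drops by 10x at iteration 10000.
print(step_decay_lr(0.001, 0))      # -> 0.001 (before any decay)
print(step_decay_lr(0.001, 15000))  # one decay step applied
```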
Would it be possible to share your training log with me? I can reach an AUC of 74.4 quite quickly, within about 2K iterations, but then the accuracy plateaus. I couldn't reach the 75.41 you report in your paper. It would be really helpful if you could share your training log.
Thank you very much, looking forward to your reply.