Hi,
When I fine-tune the pre-trained model on the downstream graph classification datasets RDT-B and RDT-M, I run into a problem: the training accuracy is very high, sometimes close to 1, while the test accuracy is much lower, which looks like overfitting. Have you seen this before, and if so, how did you deal with it?
Thanks.
Thanks for your interest in this repo. This is not the expected behavior, and we haven't encountered it. Could you please check your package versions and post your Python, torch, and DGL versions here, along with reproducible training outputs? Thanks!
python 3.7.7, dgl-cu101 0.4.1
In the fine-tuning process I run 100 epochs. On the RDT-B dataset, at the 100th epoch the train accuracy is 0.991 while the test accuracy is 0.835.
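Since the question is about overfitting during fine-tuning, one standard mitigation (independent of this repo's code) is early stopping on a validation split: keep the checkpoint with the best validation accuracy and halt once it stops improving. A minimal framework-agnostic sketch, where `EarlyStopping` and the accuracy values are hypothetical illustrations, not this repository's actual training loop:

```python
class EarlyStopping:
    """Stop fine-tuning when validation accuracy stops improving.

    `patience` is how many epochs to wait after the last improvement
    before stopping. This is a generic sketch, not the repo's code.
    """

    def __init__(self, patience=10):
        self.patience = patience
        self.best = float("-inf")   # best validation accuracy seen so far
        self.counter = 0            # epochs since last improvement
        self.should_stop = False

    def step(self, val_acc):
        """Record one epoch's validation accuracy; return True to stop."""
        if val_acc > self.best:
            self.best = val_acc
            self.counter = 0
        else:
            self.counter += 1
            if self.counter >= self.patience:
                self.should_stop = True
        return self.should_stop


# Hypothetical per-epoch validation accuracies that plateau after epoch 2.
stopper = EarlyStopping(patience=3)
accs = [0.60, 0.70, 0.72, 0.72, 0.71, 0.70, 0.69]
stopped_at = None
for epoch, acc in enumerate(accs):
    if stopper.step(acc):
        stopped_at = epoch
        break
# Training halts at epoch 5; the checkpoint to keep is the one with
# validation accuracy stopper.best (0.72, from epoch 2).
```

In practice this is combined with regularization such as weight decay (e.g. the `weight_decay` argument of `torch.optim.Adam`) and dropout, which also narrow the train/test gap.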