PL computes wrong accuracy with drop_last=False in PyTorch Geometric #6889
Comments
Hey @rusty1s, there is definitely a bug in Lightning, but it is hard to resolve without a deeper refactor.
Has this issue been resolved on master? I installed master and still see the accuracy of 0.25 when running Matthias' code.

Sorry, I just saw the comment about the workaround. Thanks!
🐛 Bug
PyTorch Lightning computes a wrong accuracy when using a `DataLoader` with `drop_last=False` in PyTorch Geometric. There seems to be an issue in which PL cannot determine the correct `batch_size` of mini-batches. Here, I am using a dataset with 3 examples and a `batch_size` of 2. In `test_step`, the accuracy of each individual mini-batch is:

while PyTorch Lightning reports an overall accuracy of 0.25.

Expected behavior
Report an accuracy of 0.33.

Environment
torch-geometric==master
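The arithmetic behind the 0.25-vs-0.33 mismatch can be sketched as follows. The per-batch correct counts below are hypothetical (the issue does not state them); they are chosen only because they reproduce both the reported and the expected totals.

```python
# Dataset of 3 examples, batch_size=2, drop_last=False:
# the DataLoader yields batches of size 2 and 1.
batch_sizes = [2, 1]
# Hypothetical correct-prediction counts per batch (not from the issue):
correct = [1, 0]

per_batch_acc = [c / n for c, n in zip(correct, batch_sizes)]  # [0.5, 0.0]

# Unweighted mean over batches, as if every batch had the same size --
# the kind of aggregation that produces the reported number:
unweighted = sum(per_batch_acc) / len(per_batch_acc)

# Mean weighted by the true batch sizes -- the per-sample accuracy:
weighted = sum(a * n for a, n in zip(per_batch_acc, batch_sizes)) / sum(batch_sizes)

print(unweighted)          # 0.25
print(round(weighted, 2))  # 0.33
```

The last, smaller batch is over-weighted by the unweighted mean, which is why the error only appears with `drop_last=False`.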
Additional context
It seems like PL has problems determining the correct `batch_size` of batches when the data doesn't follow the conventional `[batch_size, ...]` format. However, it shouldn't have a problem doing so, since the `batch_size` can easily be inferred from the `self.acc(y_hat, y_pred)` call.