🐛 Bug
When the ignore_index argument is set, the MulticlassAccuracy object reports an accuracy of zero for the ignored class but still includes it in the average calculation.
To Reproduce
```python
import torch
from torchmetrics.classification import MulticlassAccuracy

# simulate the output of a perfect predictor (i.e. preds == target)
target = torch.tensor([0, 1, 2, 0, 1, 2])
preds = target

metric = MulticlassAccuracy(num_classes=3, average='none', ignore_index=0)
res = metric(preds, target)
print(res)
# it prints [0., 1., 1.]

metric = MulticlassAccuracy(num_classes=3, average='macro', ignore_index=0)
res = metric(preds, target)
print(res)
# it prints 0.6667 instead of 1
```
Expected behavior
It should not take into account the ignored class.
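A minimal sketch of the expected computation in plain PyTorch (deliberately not using torchmetrics, so it is independent of the buggy version), assuming that macro accuracy here means the mean per-class recall taken only over the non-ignored classes; the helper name `macro_accuracy_ignoring` is hypothetical:

```python
import torch

def macro_accuracy_ignoring(preds, target, num_classes, ignore_index):
    # Mean per-class recall, skipping the ignored class entirely
    # (it contributes neither a zero term nor a denominator slot).
    accs = []
    for c in range(num_classes):
        if c == ignore_index:
            continue
        mask = target == c
        if mask.sum() == 0:
            continue  # class absent from target: leave it out of the mean
        accs.append((preds[mask] == c).float().mean())
    return torch.stack(accs).mean()

target = torch.tensor([0, 1, 2, 0, 1, 2])
preds = target  # perfect predictor, as in the repro above
print(macro_accuracy_ignoring(preds, target, num_classes=3, ignore_index=0))
# tensor(1.)
```

With the ignored class excluded from the mean, the perfect predictor gets the expected macro accuracy of 1 rather than 0.6667.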
Environment
TorchMetrics version: 0.11.4 (installed from pip)
Python & PyTorch version: Python 3.10.6, PyTorch 1.13.0
Any other relevant information such as OS (e.g., Linux): Ubuntu 22.04
Actually, the macro accuracy is still wrong when ignore_index is set even beyond what the example above (where preds == target) covers. The issue is that false positives for the ignore_index class are still taken into account. The metric used to work in older versions of torchmetrics, such as 0.9.3.
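A self-contained variant of the repro that exercises this case: one sample of class 1 is mispredicted as the ignored class 0, i.e. a false positive for class 0 and a false negative for class 1. The reference numbers below come from a hand computation of mean per-class recall over the non-ignored classes (my assumed definition of macro accuracy with ignore_index), not from any torchmetrics version:

```python
import torch

target = torch.tensor([0, 1, 2, 0, 1, 2])
preds = torch.tensor([0, 0, 2, 0, 1, 2])  # preds[1] is wrong: 1 -> 0

# Expected per-class recall over the non-ignored classes:
#   class 1: 1 of 2 correct -> 0.5
#   class 2: 2 of 2 correct -> 1.0
accs = []
for c in (1, 2):  # skip ignore_index == 0
    mask = target == c
    accs.append((preds[mask] == c).float().mean())
print(torch.stack(accs).mean())  # -> 0.75
```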