MPS uninitialized memory(?) causing errors in StatScores
(which cascade to other locations)
#2383
Labels: bug / fix · help wanted · Priority: Critical task/issue · v1.3.x · 🐛 Bug
When using MPS (the GPU on an M1 Mac) with any metric that depends on StatScores (F1, accuracy, etc.), I get obviously wrong results with "micro" averaging: huge accuracies (~10^13), negative tp and fp counts, and so on. ("macro" averaging also appears to be wrong.)
I noticed this happens when the batch size is smaller than the number of classes, or more generally whenever there isn't enough support for every class to be represented in a batch.
To Reproduce
Use the StatScores metric (or any metric that depends on it) with the "gpu" accelerator in the Lightning Trainer on an M1 MacBook, with a batch that contains no true positives.
Expected behavior
Correct metrics! I get the correct results when the entire computation runs on the CPU.
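For reference, the micro-averaged stat scores are bounded by construction, which is how the MPS output is obviously wrong. A sketch of that bookkeeping in plain torch (same hypothetical batch as above; this is my reading of micro averaging, not the library's internal code): each sample contributes one TP or one FP plus one FN, and the rest of its class slots are TNs, so every count is non-negative and at most `num_samples * num_classes`.

```python
import torch

num_classes = 5
preds = torch.tensor([0, 1])
target = torch.tensor([0, 2])

correct = preds == target
tp = correct.sum()            # each correct sample is one TP
fp = (~correct).sum()         # each wrong sample is one FP for the predicted class...
fn = (~correct).sum()         # ...and one FN for the true class
tn = preds.numel() * num_classes - tp - fp - fn  # everything else

print(tp.item(), fp.item(), fn.item(), tn.item())  # 1 1 1 7
```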
Environment
TorchMetrics version (and how you installed TM, e.g. conda, pip, build from source): 1.3.1 (pipenv installed)