Mismatching Pytorch Model Output vs Converted Core ML Model #489
Closed
Labels: NN backend only (affects only the NN backend, not the MIL backend); awaiting response (please respond to this issue to provide further clarification; status); question (response providing clarification needed, will not be assigned to a release; type)
I fine-tuned a ResNet-50 in PyTorch for 8 labels instead of 1000.

When testing the model by passing images in PyTorch, I get the correct tensor of probabilities, as seen below.
With the Core ML model, I get 0 for every label except one ("living room"), which is 1 every single time. See below.
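As an aside (not from the original post): an all-0-except-one-1 output is the signature of a softmax over saturated logits, which commonly happens when image preprocessing (e.g. the ImageNet scale/bias) is dropped during conversion and raw 0-255 pixels reach the network. A minimal plain-Python sketch of the effect:

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Logits in a typical range yield a smooth probability distribution.
moderate = softmax([2.0, 1.0, 0.5, 0.1])
print([round(p, 3) for p in moderate])

# Saturated logits (e.g. from un-normalized inputs) collapse the
# distribution to essentially one-hot: 1 for one label, ~0 elsewhere.
saturated = softmax([500.0, 20.0, 10.0, 5.0])
print([round(p, 3) for p in saturated])
```

If this is the cause, verifying that the Core ML model applies the same input normalization as the PyTorch pipeline would be the first thing to check.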

Why might they not match? In Netron, I compared the ResNet-50 Core ML model provided by Apple against mine, converted PyTorch -> ONNX -> Core ML, and I don't understand why I'm getting mismatched confidence values between PyTorch and Core ML.
Thank you in advance!
System/Model Information + Conversion Snippets