Converted MobileNet-SSD Producing Irregular Bounding Boxes #279
Comments
Hard to say without more details.
The mlmodel won't accept the numpy array. I get NSLocalizedDescription = "The model expects input feature Preprocessor__sub__0 to be an image, but the input is of type 5.". However, if I input an image instead of a numpy array and compare it to the TF outputs I get:
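That error typically means the model's input was declared as an image during conversion, so `predict` needs an image object rather than a raw MultiArray. A minimal sketch (assuming a 300x300 RGB input; the commented `predict` call requires macOS and coremltools):

```python
import numpy as np
from PIL import Image

# Hypothetical 300x300 RGB frame as a numpy array (what the TF graph consumes).
frame = np.random.randint(0, 256, (300, 300, 3), dtype=np.uint8)

# Inputs declared via image_input_names expect an image object,
# not a MultiArray, so wrap the numpy array in a PIL Image first.
pil_frame = Image.fromarray(frame)

# out = mlmodel.predict({"Preprocessor__sub__0": pil_frame})
```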
The errors seem quite small!
I agree. If I hold my phone still the boxes are OK. Unfortunately, as soon as the phone moves the bounding boxes are terrible. It works fine with the ssd_mobilenet_v2_coco model, but badly with my re-trained model. I'm guessing there may be some kind of loss during the conversion. I'll add some more epochs to the training and see if it improves things.
We have the exact same issue after using a (slightly modified) https://github.com/vonholst/SSDMobileNet_CoreML. I suspect the anchors hardcoded there don't all quite match what the network was trained on. NMS then fails because the two "crossed" boxes don't actually have much intersection. As per the OP, some frames seem to work perfectly, but other slightly offset ones show two divergent boxes for each detection. I'm investigating the process outlined in https://github.com/hollance/coreml-survival-guide/tree/master/MobileNetV2%2BSSDLite and https://machinethink.net/blog/mobilenet-ssdlite-coreml/, which moves anchor/box decoding and NMS into Core ML itself.
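For reference, the anchor-based decode that gets moved into the model is the standard SSD formula. A minimal numpy sketch, assuming the TF Object Detection API default scale factors (10 for center offsets, 5 for sizes); if the hardcoded anchors differ from the ones the network was trained with, the decoded boxes drift exactly as described above:

```python
import numpy as np

def decode_boxes(raw, anchors, scale_xy=10.0, scale_wh=5.0):
    """Decode SSD regression outputs (ty, tx, th, tw) against anchors
    (cy, cx, h, w), all normalized to [0, 1]."""
    ty, tx, th, tw = raw[..., 0], raw[..., 1], raw[..., 2], raw[..., 3]
    acy, acx, ah, aw = (anchors[..., 0], anchors[..., 1],
                        anchors[..., 2], anchors[..., 3])
    cy = ty / scale_xy * ah + acy   # shift center by anchor-relative offset
    cx = tx / scale_xy * aw + acx
    h = np.exp(th / scale_wh) * ah  # sizes are predicted in log space
    w = np.exp(tw / scale_wh) * aw
    return np.stack([cy, cx, h, w], axis=-1)

# Sanity check: a zero regression output should decode to the anchor itself.
anchor = np.array([[0.5, 0.5, 0.2, 0.2]])
decoded = decode_boxes(np.zeros((1, 4)), anchor)
```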
@mabrowning could you please share the code for moving anchor/box extraction into Core ML? I am using an SSD MobileNet V1; is there any difference in the anchor/box extraction?
@lordrebel sure. Check my last comment. All the code is there.
Thanks @mabrowning! I compared the inference results of the TensorFlow model with the Core ML model I converted, and found that the Core ML model loses some boxes (about 30%). Did you run into this problem? @mabrowning @aseemw @miaout17
Was your score threshold set the same? The default/sample ssd_mobilenet configs in the object_detection repo use 1e-4 as the minimum score threshold, whereas that conversion script sets it to something more reasonable like 0.3.
Yes, both the IoU threshold and the confidence threshold are the same as in the TF model. I also compared the model structures using Netron, and they look quite different. @mabrowning
Is there an update on this issue? Please let us know if you're still experiencing it. Thanks!
I have retrained MobileNet-SSD V1 and V2 models with my own dataset. They both work perfectly with TensorFlow, and the converted TFLite models work perfectly with ML Kit. However, the same models converted to Core ML are a bit wonky. They both detect objects with correct bounding boxes at first, but then start producing overlapping bounding boxes of the wrong size: one box is half the height and twice the width it should be, and the other is twice the height and half the width. Confidences are 0.9 to 1.
The models were converted as per the instructions found here https://github.com/tf-coreml/tf-coreml/blob/master/examples/ssd_example.ipynb with these settings:
coreml_model = tfcoreml.convert(
    tf_model_path=frozen_model_file,          # frozen .pb graph
    mlmodel_path=coreml_model_file,           # output .mlmodel path
    input_name_shape_dict=input_tensor_shapes,
    image_input_names="Preprocessor/sub:0",   # treat this tensor as an image input
    output_feature_names=output_tensor_names,
    image_scale=2./255.,                      # scale applied before the per-channel biases
    red_bias=-1.0,
    green_bias=-1.0,
    blue_bias=-1.0
)
Any idea what could be causing this?