SSD-MobilenetV2 - Preprocessing for inference from saved_model_tf2 #10265

@Shubhambindal2017

Description

I have fine-tuned an SSD-MobileNetV2 whose training config uses a fixed 300x300 resize, built with the TensorFlow Object Detection API, and saved it in the TF SavedModel format.
Questions:

  • How, during inference, is the model able to accept input images of any shape (not 300x300), without the input first being resized to 300x300 before being passed to the model?
  • Is this because the SavedModel resizes inputs by default during inference? (If yes, does it also normalize them before the convolution operations?) I am new to the SavedModel format, but I don't think this comes from SavedModel itself. Then how is it possible, since I thought SSD-MobileNet includes FC layers, which require a fixed input size? Or does the architecture use adaptive pooling to achieve this?

In simple words: the documentation is not clear about what pre-processing (resizing / normalization) steps are required for inference from the SavedModel format. In this tutorial, too, no pre-processing such as resizing or normalization is applied to the input image: https://github.com/tensorflow/models/blob/master/research/object_detection/colab_tutorials/inference_from_saved_model_tf2_colab.ipynb
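For context on what such a pre-processing step would have to do if it were applied outside the model, here is a minimal NumPy sketch of a resize-plus-normalization pipeline for a 300x300 SSD-MobileNet-style input. This is an illustration, not the Object Detection API's actual code: the `preprocess` function name is my own, the resize here is nearest-neighbor for brevity (TF typically uses bilinear), and the `2/255 * x - 1` scaling to [-1, 1] is the MobileNet-style normalization I am assuming the feature extractor expects.

```python
import numpy as np

def preprocess(image, target=300):
    """Sketch of resize + normalization for a fixed 300x300 SSD input.

    image: HxWx3 uint8 array of any spatial size.
    Returns a target x target x 3 float32 array in [-1, 1].
    """
    h, w = image.shape[:2]
    # Nearest-neighbor resize (assumption; TF resizers are usually bilinear).
    rows = np.arange(target) * h // target
    cols = np.arange(target) * w // target
    resized = image[rows][:, cols].astype(np.float32)
    # MobileNet-style normalization to [-1, 1] (assumed: 2/255 * x - 1).
    return resized * (2.0 / 255.0) - 1.0

# Any input shape maps to the fixed model input size.
img = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
out = preprocess(img)
print(out.shape)  # (300, 300, 3)
```

If the exported SavedModel already performs these steps inside its serving graph, feeding it raw uint8 images of arbitrary size (as the colab does) would be consistent with what is observed.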
