ssd mobilenet v3 quantization-aware training failed #8331
Comments
same
I ran into the same issue and resolved it by using the ssd_mobilenet_edgetpu_coco checkpoint instead of the ssd_mobilenet_v3_large_coco checkpoint. The edgetpu checkpoint also uses a mobilenetV3, but with operations tailored for edge deployment.
I have the same issue, and I don't want to use that checkpoint (I need a different resolution / to train my own model).
@skavulya what about training on mobilenetv3 small? I found the edgetpu checkpoint still runs into the same problem.
I found the cause of the error. If you comment out or remove the following line from the config file, the error disappears at validation (training). I don't know what else this change will affect. Does anyone know what this setting is for? To speed up training?
I used both the edgetpu checkpoint and pipeline config file for training. I exported the quantized int8 model to tflite and it looks good. The main difference between the mobilenetv3 small/large configs and the ssd edgetpu config is that the edgetpu one uses the ssd_mobilenet_edgetpu feature extractor. That feature extractor also uses mobilenetv3, but with some modifications. I think you can extend the edge tpu feature extractor and modify the from_layers to change your mobilenet to a smaller variant. The edgetpu pipeline file has the
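As a sketch, switching a pipeline config over to the edgetpu feature extractor amounts to changing the `type` field (the type string is the name registered in the TF Object Detection API's model builder; all other fields are elided here and must come from your own config):

```
model {
  ssd {
    feature_extractor {
      type: 'ssd_mobilenet_edgetpu'
      # ... remaining feature extractor settings from your config ...
    }
    # ... rest of the ssd model config ...
  }
}
```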
What I did was simply delete the checkpoint from the training folder and add this line to the config file: fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt". It restores the parameters from the provided model.ckpt file of mobilenet v3 and creates a new checkpoint. My training started, and after about 23k steps I got a loss of about ~0.2445.
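For reference, the restore-related fields in pipeline.config look roughly like this (field names are from the TF Object Detection API's train.proto; the path is the usual placeholder, and the last two lines are an assumption about typical fine-tuning setups, not something the comment above specified):

```
train_config {
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  # Assumed typical values when fine-tuning from a detection checkpoint:
  fine_tune_checkpoint_type: "detection"
  load_all_detection_checkpoint_vars: true
}
```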
I set inplace_batchnorm_update: false and training runs normally. However, I want a quantized model, and with that variable set to false the model is not quantized. Can someone show how to fix the config (or even the code) to produce a quantized ssd mobilenet v3?
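To make the two settings being discussed concrete, here is a minimal sketch of where they live in pipeline.config (field locations per the TF Object Detection API protos; the delay value is only an example, not a recommendation):

```
model {
  ssd {
    # Reported above to interact badly with quantization-aware training:
    # true triggers the error, false avoids it but leaves the model unquantized.
    inplace_batchnorm_update: true
    # ... rest of the ssd config ...
  }
}
graph_rewriter {
  quantization {
    delay: 48000        # example: start quantizing after 48k steps
    weight_bits: 8
    activation_bits: 8
  }
}
```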
Hi @oliver8459, I'm now facing the same issue you did. Have you figured out how to fix it?
Hi, any updates on this? |
Which dataset is the pre-trained SSD-MobileNet V3 trained on?
Always check that the label order of the generated TFRecord file matches the labelmap.txt file that is loaded for detection.
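As a quick sanity check on the label map side, something like the following can catch obvious ordering problems. This is a hypothetical helper with a deliberately simplified regex-based parser for illustration; for real use, the Object Detection API ships its own label_map_util:

```python
import re

def parse_label_map(pbtxt_text):
    """Parse a simplified label_map.pbtxt into {id: name}.

    Minimal regex parsing for illustration only; assumes single-quoted
    names and one id/name pair per item block.
    """
    ids = [int(m) for m in re.findall(r"id:\s*(\d+)", pbtxt_text)]
    names = re.findall(r"name:\s*'([^']+)'", pbtxt_text)
    return dict(zip(ids, names))

def check_label_order(label_map):
    """Verify ids are contiguous and start at 1 (0 is reserved for background)."""
    expected = list(range(1, len(label_map) + 1))
    return sorted(label_map) == expected

sample = """
item {
  id: 1
  name: 'cat'
}
item {
  id: 2
  name: 'dog'
}
"""
label_map = parse_label_map(sample)
print(label_map)                      # {1: 'cat', 2: 'dog'}
print(check_label_order(label_map))   # True
```

You would then compare these ids against the `image/object/class/label` values actually written into your TFRecords.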
maybe COCO or another big dataset |
In general, to get a fully quantized TFLite model, the model you use for transfer learning must itself have quantized weights and activation layers; otherwise, forget it.
This could be helpful to read: https://github.com/tensorflow/tensorflow/tree/r1.15/tensorflow/contrib/quantize
System information
Please provide the entire URL of the model you are using.
https://github.com/tensorflow/models/tree/master/research/object_detection
Describe the current behavior
ssdlite_mobilenet_v3 quantization-aware training results in the following error. After training starts, tf.train.Saver appears to raise an error.
I have code that reproduces the error on google colaboratory.
Quantization-aware training of the mobilenet v3 image classification model and the deeplab model completes successfully. Only the object detection model fails.
Describe the expected behavior
Quantization-aware training completes successfully.
Code to reproduce the issue
https://gist.github.com/NobuoTsukamoto/b2ca173b62e933ceeb1c7f0df42bca5f
Other info / logs
log.txt