Training with custom data models yields very poor results #250
I have the same experience. I created a dataset of 115 folders containing cropped images of dogs and uploaded it to Google Drive. I ran the MediaPipe Model Maker Image Classifier Demo with all defaults except for image_path and spec = image_classifier.SupportedModels (EfficientNet-Lite0). I expected to get a tflite model that is smaller, faster, and more accurate at classifying dog images than EfficientNet-Lite0. What I got is indeed smaller, but the accuracy is lousy. I suspect this has to do with the metadata created by Model Maker: if I display the metadata of the new model, it shows mean 0.0, std 255.0, while EfficientNet-Lite0 has mean 127.0, std 128.0. See below for details.
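To illustrate why that metadata difference matters, here is a minimal plain-Python sketch of the (pixel - mean) / std normalization that TFLite inference libraries derive from the model metadata. The two parameter pairs map the same uint8 pixels into very different input ranges, so a mismatch between training-time and metadata-driven preprocessing would hurt accuracy:

```python
def normalize(pixel, mean, std):
    """Apply the (pixel - mean) / std normalization that TFLite
    inference code derives from a model's metadata."""
    return (pixel - mean) / std

# Metadata of the custom Model Maker model: mean=0.0, std=255.0
# -> maps uint8 pixels [0, 255] to [0.0, 1.0]
custom = [normalize(p, 0.0, 255.0) for p in (0, 128, 255)]

# Metadata of the stock EfficientNet-Lite0 model: mean=127.0, std=128.0
# -> maps uint8 pixels [0, 255] to roughly [-0.99, 1.0]
stock = [normalize(p, 127.0, 128.0) for p in (0, 128, 255)]

print(custom)  # [0.0, 0.50196..., 1.0]
print(stock)   # [-0.9921875, 0.0078125, 1.0]
```

If the model was trained on inputs in one of these ranges but the exported metadata describes the other, the deployed classifier sees shifted and rescaled inputs, which is consistent with a large accuracy drop.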
I have trained a custom object detection model with relevant data (~750 annotated images × 3 augmentations).
There are 4 choices of pretrained models, and I tested all 4 of them with different learning rates (default 0.3).
I also tried various combinations of 200–300 epochs and learning rates from 0.01 to 0.3. However, the results are always very poor: AP50 is at most 0.35.
Here is a reproduced result.
What am I doing wrong?
Note:
With the same data I got ~86% on object detection with EfficientDet pretrained models, as described.
index created!
Running per image evaluation...
Evaluate annotation type bbox
DONE (t=1.95s).
Accumulating evaluation results...
DONE (t=0.17s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.082
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.318
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.008
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.070
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.351
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.118
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.182
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.184
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.167
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.422
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Validation loss: [1.0166717767715454, 0.42808738350868225, 0.010559567250311375, 0.9560657143592834]
Validation coco metrics: {'AP': 0.08189376, 'AP50': 0.31780246, 'AP75': 0.00782904, 'APs': 0.069966085, 'APm': 0.3512227, 'APl': -1.0, 'ARmax1': 0.11827957, 'ARmax10': 0.18172044, 'ARmax100': 0.18387097, 'ARs': 0.16743295, 'ARm': 0.42222223, 'ARl': -1.0}
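For reference when reading the metrics above: the AP rows are indexed by IoU threshold (AP50 counts a detection as correct only when its IoU with a ground-truth box is at least 0.5). A minimal IoU computation for axis-aligned boxes, as a sketch of what the evaluator is thresholding on:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

# A detection offset by half a box width reaches only IoU = 1/3,
# so it fails the AP50 threshold (IoU >= 0.5).
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.333...
```

The large gap here between AP50 (0.318) and AP75 (0.008) suggests the model finds roughly the right regions but localizes them poorly, which is worth checking against the annotation format the trainer expects.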