Retraining custom object detector #24
For training, I used tensorflow-1.12.0. (I think both 1.11.x and 1.12.x would work.) And I used the exact version ('6518c1c') of the object detection API that's on my GitHub repo. I was able to optimize these trained SSD egohands models with TensorRT and run inference on Jetson Nano/TX2. Details can be found at: TensorRT UFF SSD.
Hey, thank you so much. I was able to successfully retrain both MobileNet SSD V1 and V2 on my custom dataset and convert them to .uff and .bin, and I got FPS rates of around 25 on average. My setup: Training - GPU Tesla K80 with tensorflow-gpu 1.12, Tensorflow object detection commit '6518c1c'. Inference and conversion - Jetson Nano. Thanks again, jkjung, for your quick responses and suggestions. Cheers.
Thanks for letting me know about this update as well.
@siddharthrameshiisc what was your CUDA version with tensorflow==1.12.0?
This is my CUDA setup (nvidia-smi output below).
Also, I trained it in a conda environment; I did not install tensorflow via bazel. This setup and conversion work well with tensorflow-gpu 1.11 as well. Just copy-paste the following contents into a ".yml" file and you can replicate my environment:
Thanks @siddharthrameshiisc
@jkjung-avt I used tensorflow-gpu 1.12 and Tensorflow object detection commit '6518c1c' for training. For conversion on Jetson Nano I used TF 1.15, but it still gives the following error:
I'm not sure what the problem is. As I shared in my JetPack-4.3 for Jetson Nano blog post, I was able to use tensorflow-1.15.0 and the UFF converter (JetPack-4.3, TensorRT 6) to convert my custom trained "egohands" model to a TensorRT engine. If you'd like to do a comparison, my trained ssd_mobilenet_v2_egohands model checkpoint can be found here: #21 (comment) And the frozen_inference_graph.pb can be downloaded from my "jkjung-avt/tensorrt_demos" repository: https://github.com/jkjung-avt/tensorrt_demos/blob/master/ssd/ssd_mobilenet_v2_egohands.pb
I used your ssd_mobilenet_v2_egohands.pb file and even the original pb file given in the repo for conversion, but to my dismay both of them gave the same error (a different error this time):
Have you modified the 'input_order' in this line of code? https://github.com/jkjung-avt/tensorrt_demos/blob/master/ssd/build_engine.py#L69
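A rough illustration of what 'input_order' controls may help here. This is a simplified stand-in, not the actual graphsurgeon code from build_engine.py (the real script wires these inputs into the NMS_TRT plugin): the NMS plugin expects its inputs in a fixed order, but the order in which the frozen graph lists them varies between TensorFlow/object-detection-API versions, so 'input_order' maps graph positions onto the expected order.

```python
# Simplified sketch (assumed names, not the real build_engine.py code):
# the NMS plugin needs its inputs in a fixed order, while the frozen
# graph may list them differently depending on the TF version used to
# train/freeze. 'input_order' gives the index of each required input
# within the graph's own input list.
def reorder_nms_inputs(graph_inputs, input_order):
    """Pick the NMS inputs out of graph_inputs using the given index order."""
    return [graph_inputs[i] for i in input_order]

# Hypothetical example: the graph lists [loc, priorbox, conf] but the
# plugin wants [loc, conf, priorbox] -> input_order = [0, 2, 1]
inputs = ["Squeeze", "concat_priorbox", "concat_box_conf"]
print(reorder_nms_inputs(inputs, [0, 2, 1]))
# ['Squeeze', 'concat_box_conf', 'concat_priorbox']
```

If the engine builds but detections are garbage (or the build fails), a wrong 'input_order' for your TF version is a common cause.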
Yes, now it is working perfectly. I think I was messing up the classes somewhere. Even though your model detects only a single class, your build_engine.py has two classes? I think I'll now retrain my model from your repo only.
Right. For tensorflow "ssd_mobilenet_v2_xxx" models, you need to add 1 to "num_classes" (for "background"). So "num_classes" is set to 2 for the egohands model.
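The "+1 for background" convention above can be sketched as a tiny helper (a hypothetical function, not part of the repo), which makes it easy to sanity-check the value you put into build_engine.py:

```python
# Illustrative helper (not from the original repo): TensorFlow SSD models
# reserve one class slot for "background", so the TensorRT engine must be
# built with one more class than the labels you actually trained on.
def trt_num_classes(num_labels):
    """Return the num_classes value to use when building the TRT engine."""
    return num_labels + 1  # +1 for the implicit "background" class

# egohands: one real class ("hand") -> num_classes = 2
print(trt_num_classes(1))  # 2
# a 6-class custom detector -> num_classes = 7
print(trt_num_classes(6))  # 7
```

Forgetting the background class typically shifts or drops class IDs at inference time, which matches the "messing up the classes" symptom described above.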
Thanks! My model is now running perfectly on custom data. |
Hello, I trained a custom mobilenetv2_fn model with 6 classes on Tensorflow 1.15, using output node NMS. Please help me fix it. https://drive.google.com/drive/folders/1DrCFP3T0mFSm1GNzRp8aude-Ona6SoMz?usp=sharing
Duplicate: #38
Hey, can you share the particular commit or version of the object detection API with which you trained the "Hand-detection-model"? I have a custom dataset and retrained it with tensorflow 1.15 and the latest object detection API (November 2019 commit). I was unable to build the TensorRT engine and encountered the following error:
[TensorRT] ERROR: UffParser: Validator error: FeatureExtractor/MobilenetV2/layer_19_2_Conv2d_4_3x3_s2_256/BatchNorm/FusedBatchNormV3: Unsupported operation _FusedBatchNormV3
[TensorRT] ERROR: Network must have at least one output
Tensorflow version for training and freezing - 1.15 (GPU Tesla K80)
Model used - http://download.tensorflow.org/models/object_detection/ssd_mobilenet_v2_coco_2018_03_29.tar.gz
JetPack on Jetson Nano (used for UFF conversion and building the TRT engine as per your trt_ssd tutorial):
TensorRT version - 5.x
tensorflow - 1.14
So I would like to retrain on my custom dataset with the same versions of Tensorflow and the object detection API you used for this tutorial. Could you please share some details about this, or suggest a workaround for the error encountered?
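For context on the error above: FusedBatchNormV3 is an op that newer TensorFlow 1.x releases emit when freezing a graph, and the UFF parser in TensorRT 5.x has no mapping for it (the thread above shows it converting fine with TF 1.15 plus TensorRT 6 / JetPack 4.3). A quick way to see this coming is to scan the frozen graph's op types against the ops the parser can handle before attempting conversion. The sketch below uses a mock op list and an assumed, abbreviated supported-op set purely for illustration; it is not the real UFF op table:

```python
# Illustrative pre-flight check (mock data; the supported-op set here is
# an abbreviated assumption, not TensorRT's actual UFF op table). The
# idea: list the graph's node ops and flag any the parser cannot map,
# e.g. FusedBatchNormV3 under TensorRT 5.x.
UFF_SUPPORTED_OPS = {"Conv2D", "Relu6", "FusedBatchNorm", "Concat", "Squeeze"}

def unsupported_ops(node_ops):
    """Return the graph ops missing from the parser's supported set."""
    return sorted(set(node_ops) - UFF_SUPPORTED_OPS)

graph_ops = ["Conv2D", "FusedBatchNormV3", "Relu6", "FusedBatchNormV3"]
print(unsupported_ops(graph_ops))  # ['FusedBatchNormV3']
```

The usual workarounds in this situation are either freezing the model with an older TensorFlow (≤1.12, which emits FusedBatchNorm instead) or moving to JetPack 4.3 / TensorRT 6, whose UFF converter handled the TF 1.15 graph in the comments above.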