Calibration failure occurred with no scaling factors detected. #129
Comments
Hello. Step 1: first we need to modify lines 58-60 in the file https://github.com/NVIDIA/retinanet-examples/blob/master/extras/cppapi/export.cpp by
1) specifying the calibration_files vector to contain a path to every calibration image you've used (the number of images should be at least twice your batch size), and
3) setting string calibration_table = "";
Don't forget to redo make after modifying export.cpp.
Step 2: run the export; this generates the INT8 calibration table.
Step 3: for your export commands beyond the first export, you can reuse this generated table; remember to write the table name into the command line. A sketch of the export.cpp edit is shown after this comment.
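Here is a minimal sketch of what that edit to export.cpp might look like. The image paths and the commented-out table file name are placeholders, and the surrounding code of export.cpp is not reproduced here:

```cpp
// Sketch of the edit around lines 58-60 of extras/cppapi/export.cpp.
// The image paths below are placeholders; list your own calibration images
// (at least 2x your batch size).
const std::vector<std::string> calibration_files = {
    "/path/to/calib/img0001.jpg",
    "/path/to/calib/img0002.jpg",
    "/path/to/calib/img0003.jpg",
    "/path/to/calib/img0004.jpg"
};

// For the first export, leave the table name empty so that calibration runs
// and a fresh INT8 calibration table is written out:
std::string calibration_table = "";

// From the second export onward, you can point at the generated table instead,
// so the engine is rebuilt without re-running calibration, e.g.:
// std::string calibration_table = "Int8CalibrationTable_ResNet50FPN1280x1280_10";
```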
@jin-nvidia Because Xavier can't install retinanet, can you tell me how to install the full retinanet-examples on Xavier?
Hello, please see my edited suggestions. To clarify: first get the ONNX file from your x86 machine, and then follow the above steps on your Xavier. There is no need to install retinanet on your Xavier.
Thank you @jin-nvidia. If I need to calibrate with /coco/images/val2017/, there are 5000 pictures in it.
We don't necessarily need that many calibration images; we can use 2n images if n is your batch size. However, if you do want to use many images, you could search for and use a C++ function that lists all filenames within a directory (see the sketch below).
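For illustration, here is one possible sketch of such a helper using std::filesystem (C++17); the directory path in the usage example is a placeholder:

```cpp
#include <filesystem>
#include <string>
#include <vector>

// Collect the path of every regular file in a directory, e.g. a folder of
// calibration images, so the result can be assigned to calibration_files.
std::vector<std::string> list_files(const std::string& dir) {
    std::vector<std::string> files;
    for (const auto& entry : std::filesystem::directory_iterator(dir)) {
        if (entry.is_regular_file())
            files.push_back(entry.path().string());
    }
    return files;
}

// Usage (placeholder path):
// const std::vector<std::string> calibration_files = list_files("/coco/images/val2017");
```

Note that with older GCC versions you may need to compile with -std=c++17 and link with -lstdc++fs.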
Thank you @jin-nvidia. This morning it seems to produce output; running ./infer engine_fp16.plan /home/nvidia/project/val2017/000000579307.jpg gives a similar result.
It looks like you have produced a model at FP16 and INT8.
I use retinanet_rn50fpn (https://github.com/NVIDIA/retinanet-examples/releases/download/19.04/retinanet_rn50fpn.zip): retinanet export retinanet_rn50fpn.pth retinanet_rn50fpn.onnx
As you apply the repo to your own use case, you may be able to use a smaller backbone (e.g. RN34) or a smaller image size. Both of these will increase your inference speed. Also, batching your images together will help enormously.
Thank you, James.
The C++ API is just a quick demo. If you have a folder of images, then you might consider using the DeepStream SDK to infer them.
@james-nvidia Writing to MAL_r50fpn_int8.plan..
Which branch are you using, and in which container?
Why does this issue occur on a GeForce GTX 1080 Ti but not on a 2080 Ti?
I had the same problem. Details as follows:
In Docker:
1. retinanet export retinanet_rn50fpn.pth retinanet_rn50fpn.onnx
2. retinanet export retinanet_rn50fpn.pth retinanet_rn50fpn_int8_engine.pth --int8 --calibration-images /coco/images/val2017/
This creates an INT8 calibration table file (Int8CalibrationTable_ResNet50FPN1280x1280_10) that can be used to create INT8 TensorRT engines for the same model later on without needing to redo calibration.
On Xavier:
Building engine...
Building INT8 core model...
Building accelerated plugins...
Applying optimizations and building TRT CUDA engine...
Calibration failure occurred with no scaling factors detected. This could be due to no int8 calibrator or insufficient custom scales for network layers. Please see int8 sample to setup calibration correctly.
Builder failed while configuring INT8 mode.
Segmentation fault (core dumped)