Commit (shown by the server when starting): 23bd913
Your question / the problem you're facing:
I am observing weird predictions with the latest version of DeepDetect when using TensorRT and a RefineDet model.
The predictions seem really off.
I have created a script to reproduce the issue.
It launches predictions on DeepDetect versions v0.15.0 through v0.18.0, with and without TensorRT.
It then dumps the predictions and computes a hash of each prediction file (keeping only the predictions list). We observe that v0.18.0 with TensorRT is not consistent with its Caffe counterpart or with the previous TensorRT versions.
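The normalization-then-hash step can be sketched as follows. This is illustrative only: the `body.predictions` layout follows the dd JSON API, but the stand-in response below is made up, not taken from the script.

```python
# Sketch: hash only the "predictions" list of a dd /predict response,
# so per-call fields such as timing do not change the hash.
import hashlib
import json

response = {  # stand-in for a dumped prediction file (illustrative values)
    "status": {"code": 200},
    "head": {"method": "/predict", "time": 12.0},  # varies per call
    "body": {"predictions": [{"uri": "img.jpg", "classes": []}]},
}

# Canonicalize (sorted keys) so equal predictions always hash identically.
canon = json.dumps(response["body"]["predictions"], sort_keys=True)
print(hashlib.sha256(canon.encode()).hexdigest())
```

Two runs that produce the same predictions list then yield the same digest, regardless of timing fields in the response header.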
Please fill in the following environment variables in the script, and make sure you have a GPU available for testing:
BASE_PATH=TODO
LOGGING_FOLDER=TODO
and then simply launch the script:
bash pred_trt_refinedet_issue.sh
You should get the following output at the end (the full docker logs are not shown here):
Here we compute the sha256sum of the obtained predictions.
For the Caffe models nothing changes; however, we observe a difference for the TensorRT model in the latest version, v0.18.0.
Compare deepdetect_gpu
PATH_LOGS/prediction_deepdetect_gpu_v0.15.0.json: 9e056b235be08f7245bdd324ac8ca756c41353771fcb3004df2f6b6347326d63 -
PATH_LOGS/prediction_deepdetect_gpu_v0.16.0.json: 9e056b235be08f7245bdd324ac8ca756c41353771fcb3004df2f6b6347326d63 -
PATH_LOGS/prediction_deepdetect_gpu_v0.17.0.json: 9e056b235be08f7245bdd324ac8ca756c41353771fcb3004df2f6b6347326d63 -
PATH_LOGS/prediction_deepdetect_gpu_v0.18.0.json: 9e056b235be08f7245bdd324ac8ca756c41353771fcb3004df2f6b6347326d63 -
Compare deepdetect_gpu_tensorrt
PATH_LOGS/prediction_deepdetect_gpu_tensorrt_v0.15.0.json: 51767470062ecba3d77e765c34bed6000cf175400d5ff59dda9b4727356f49b5 -
PATH_LOGS/prediction_deepdetect_gpu_tensorrt_v0.16.0.json: 51767470062ecba3d77e765c34bed6000cf175400d5ff59dda9b4727356f49b5 -
PATH_LOGS/prediction_deepdetect_gpu_tensorrt_v0.17.0.json: 51767470062ecba3d77e765c34bed6000cf175400d5ff59dda9b4727356f49b5 -
PATH_LOGS/prediction_deepdetect_gpu_tensorrt_v0.18.0.json: 1508b68447819ff281231ad5c757e88f4a651f50570115565438ac9fee88d566 -
Expected predictions
[
{
"classes": [
{
"last": true,
"bbox": {
"ymax": 350.2694091796875,
"xmax": 745.9049682617188,
"ymin": 108.38544464111328,
"xmin": 528.0482788085938
},
"prob": 0.9999849796295166,
"cat": "1"
}
],
"uri": "https://icour.fr/ELeveSeconde/ajout/yann_lecum_vidal/images/yann_LeCun.jpg"
}
]
Abnormal predictions for TensorRT v0.18.0
[
{
"classes": [
{
"last": true,
"bbox": {
"ymax": 239.68505859375,
"xmax": 425.599365234375,
"ymin": 0,
"xmin": 211.946044921875
},
"prob": 1,
"cat": "1"
}
],
"uri": "https://icour.fr/ELeveSeconde/ajout/yann_lecum_vidal/images/yann_LeCun.jpg"
}
]
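To quantify how far off the TensorRT v0.18.0 box is, one can compute the IoU between the two boxes above. This helper is a sketch; the coordinates are rounded copies of the JSON values from this issue.

```python
# Sketch: IoU between the expected (Caffe) box and the abnormal (TRT
# v0.18.0) box reported above.
def iou(a, b):
    """Intersection-over-union of two {xmin, ymin, xmax, ymax} boxes."""
    ix = max(0.0, min(a["xmax"], b["xmax"]) - max(a["xmin"], b["xmin"]))
    iy = max(0.0, min(a["ymax"], b["ymax"]) - max(a["ymin"], b["ymin"]))
    inter = ix * iy
    area = lambda r: (r["xmax"] - r["xmin"]) * (r["ymax"] - r["ymin"])
    return inter / (area(a) + area(b) - inter)

expected = {"ymax": 350.269, "xmax": 745.905, "ymin": 108.385, "xmin": 528.048}
abnormal = {"ymax": 239.685, "xmax": 425.599, "ymin": 0.0, "xmin": 211.946}
print(round(iou(expected, abnormal), 3))  # → 0.0 (the boxes do not overlap)
```

The two boxes do not overlap at all, so this is not a small numerical drift between backends but a completely different detection.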
Indeed, we have a few tests (we need to add more), but they are deactivated due to dependency problems (compatibility between versions of TensorRT, tensorrt-oss, cuDNN, Ubuntu, and the corresponding docker images...).
Hopefully we will be able to integrate/activate them with TRT 8.x.
pred_trt_refinedet_issue.zip