Hi, professor:
I followed your code and tried to convert it to TensorRT. I converted the backbone net to ONNX format successfully, and inspected it with Netron; the input size is
, but when I debugged your ncnn-macos code, I found the code below:
ncnn::Mat ncnn_img = ncnn::Mat::from_pixels(z_crop.data, ncnn::Mat::PIXEL_BGR2RGB, z_crop.cols, z_crop.rows);
printf("ncnn_img.w:%d,ncnn_img.h:%d\n",ncnn_img.w,ncnn_img.h);
The result is ncnn_img.w=127, ncnn_img.h=127.
So what is the real size of the backbone input node?
How can I deal with this problem?
Please help!
NCNN supports adaptive-scale input. The initial template input size is [1,3,127,127], and the search region input size is [1,3,255,255].
Hi, professor:
I have successfully run NanoTrack on a Jetson TX2 with TensorRT acceleration! It is very fast, almost 77–122 fps!
But when I tested dance_girl.mp4, I found the tracking is not very good; it often loses the target or drifts.
The result is the same as the ncnn version.
I have uploaded the result video of dance_girl. Please tell me why NanoTrack performs poorly here, and how to make it better.