Hello.
I successfully installed this library. Thanks for your recommendation.
I ran the simple demo and produced my own TensorRT .pth file (checkpoint).
My model is a customized mask_rcnn_r50_fpn_fp16_1x_coco.
The first image demo inference test succeeded and produced a TRT .pth output. But since my model is Mask R-CNN, the output contained only bounding boxes, without segmentation masks.
So I added the parameter enable_mask=True, and then I hit this error:
mask mode require len(output_names)==5 but get output_names=['num_detections', 'boxes', 'scores', 'classes']
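For reference, here is a hedged reconstruction of the guard behind that message (the check itself and the name of the fifth output are my assumptions, inferred from the error text): an engine converted with mask support enabled is apparently expected to expose a fifth output, presumably 'masks', alongside the four detection outputs, so an engine that was converted without it fails this length check.

```python
# Hypothetical reconstruction of the length guard implied by the error above.
def check_mask_outputs(output_names):
    """Mask decoding needs five output tensors; four means the engine
    was converted without mask support."""
    if len(output_names) != 5:
        raise ValueError(
            'mask mode require len(output_names)==5 '
            'but get output_names={}'.format(output_names))

# An engine converted with mask support would pass:
check_mask_outputs(['num_detections', 'boxes', 'scores', 'classes', 'masks'])

# The engine in the log only exposes the four detection outputs, so it fails:
try:
    check_mask_outputs(['num_detections', 'boxes', 'scores', 'classes'])
except ValueError as e:
    print(e)
```

If this reading is right, the fix would be to re-run the conversion with enable_mask=True so the saved engine itself carries the fifth output, rather than only enabling masks at inference time.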
/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/models/dense_heads/anchor_head.py:123: UserWarning: DeprecationWarning: anchor_generator is deprecated, please use "prior_generator" instead
warnings.warn('DeprecationWarning: anchor_generator is deprecated, '
/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/core/anchor/anchor_generator.py:369: UserWarning: ``single_level_grid_anchors`` would be deprecated soon. Please use ``single_level_grid_priors``
warnings.warn(
[TensorRT] INFO: [MemUsageChange] Init CUDA: CPU +521, GPU +0, now: CPU 3797, GPU 3253 (MiB)
Warning: Encountered known unsupported method torch.Tensor.new_tensor
Warning: Encountered known unsupported method torch.Tensor.new_tensor
[TensorRT] WARNING: IElementWiseLayer with inputs (Unnamed Layer* 1237) [ElementWise]_output and (Unnamed Layer* 1241) [Shuffle]_output: first input has type Float but second input has type Int32.
[TensorRT] WARNING: IElementWiseLayer with inputs (Unnamed Layer* 1246) [ElementWise]_output and (Unnamed Layer* 1250) [Shuffle]_output: first input has type Float but second input has type Int32.
[TensorRT] WARNING: IElementWiseLayer with inputs (Unnamed Layer* 1255) [ElementWise]_output and (Unnamed Layer* 1259) [Shuffle]_output: first input has type Float but second input has type Int32.
[TensorRT] WARNING: IElementWiseLayer with inputs (Unnamed Layer* 1264) [ElementWise]_output and (Unnamed Layer* 1268) [Shuffle]_output: first input has type Float but second input has type Int32.
[TensorRT] INFO: [MemUsageSnapshot] Builder begin: CPU 3992 MiB, GPU 2231 MiB
[TensorRT] WARNING: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.3.0
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +95, GPU +264, now: CPU 4177, GPU 2495 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuDNN: CPU +127, GPU +58, now: CPU 4304, GPU 2553 (MiB)
[TensorRT] WARNING: Detected invalid timing cache, setup a local cache instead
[TensorRT] INFO: Some tactics do not have sufficient workspace memory to run. Increasing workspace size may increase performance, please check verbose output.
[TensorRT] INFO: Detected 1 inputs and 5 output network tensors.
[TensorRT] INFO: Total Host Persistent Memory: 256352
[TensorRT] INFO: Total Device Persistent Memory: 92233216
[TensorRT] INFO: Total Scratch Memory: 401408000
[TensorRT] INFO: [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 139 MiB, GPU 4 MiB
[TensorRT] WARNING: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.3.0
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 5457, GPU 3273 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuDNN: CPU +1, GPU +10, now: CPU 5458, GPU 3283 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 5457, GPU 3267 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 5457, GPU 3251 (MiB)
[TensorRT] INFO: [MemUsageSnapshot] Builder end: CPU 5456 MiB, GPU 3251 MiB
[TensorRT] INFO: [MemUsageSnapshot] ExecutionContext creation begin: CPU 5456 MiB, GPU 3251 MiB
[TensorRT] WARNING: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.3.0
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 5456, GPU 3259 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuDNN: CPU +1, GPU +8, now: CPU 5457, GPU 3267 (MiB)
[TensorRT] INFO: [MemUsageSnapshot] ExecutionContext creation end: CPU 5457 MiB, GPU 5061 MiB
[TensorRT] WARNING: The logger passed into createInferRuntime differs from one already provided for an existing builder, runtime, or refitter. TensorRT maintains only a single logger pointer at any given time, so the existing value, which can be retrieved with getLogger(), will be used instead. In order to use a new logger, first destroy all existing builder, runner or refitter objects.
[TensorRT] INFO: [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 5636, GPU 5061 (MiB)
[TensorRT] INFO: Loaded engine size: 180 MB
[TensorRT] INFO: [MemUsageSnapshot] deserializeCudaEngine begin: CPU 5636 MiB, GPU 5061 MiB
[TensorRT] WARNING: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.3.0
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 5639, GPU 5249 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 5639, GPU 5257 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 5639, GPU 5241 (MiB)
[TensorRT] INFO: [MemUsageSnapshot] deserializeCudaEngine end: CPU 5639 MiB, GPU 5241 MiB
[TensorRT] INFO: [MemUsageSnapshot] ExecutionContext creation begin: CPU 5639 MiB, GPU 5241 MiB
[TensorRT] WARNING: TensorRT was linked against cuBLAS/cuBLAS LT 11.5.1 but loaded cuBLAS/cuBLAS LT 11.3.0
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +8, now: CPU 5639, GPU 5249 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuDNN: CPU +1, GPU +8, now: CPU 5640, GPU 5257 (MiB)
[TensorRT] INFO: [MemUsageSnapshot] ExecutionContext creation end: CPU 5640 MiB, GPU 7051 MiB
Can not load dataset from config. Use default CLASSES instead.
/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/datasets/utils.py:66: UserWarning: "ImageToTensor" pipeline is replaced by "DefaultFormatBundle" for batch inference. It is recommended to manually replace it in the test data pipeline in your config file.
warnings.warn(
Traceback (most recent call last):
File "inference.py", line 59, in <module>
main()
File "inference.py", line 48, in main
result = inference_detector(trt_detector, image_path)
File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/apis/inference.py", line 151, in inference_detector
results = model(return_loss=False, rescale=True, **data)
File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jwyng2000/PycharmProjects/toymmd/mmdetection/mmdetection-to-tensorrt/mmdet2trt/apis/inference.py", line 188, in forward
segms_results = FCNMaskHead.get_seg_masks(
File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py", line 293, in get_seg_masks
masks_chunk, spatial_inds = _do_paste_mask(
File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/mmdetection/mmdet/models/roi_heads/mask_heads/fcn_mask_head.py", line 384, in _do_paste_mask
x0, y0, x1, y1 = torch.split(boxes, 1, dim=1) # each is Nx1
File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/venv/lib/python3.8/site-packages/torch/functional.py", line 156, in split
return tensor.split(split_size_or_sections, dim)
File "/home/jwyng2000/PycharmProjects/pythonProject/newtst/venv/lib/python3.8/site-packages/torch/_tensor.py", line 510, in split
return super(Tensor, self).split(split_size, dim)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
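For what it's worth, here is a minimal torch-free stand-in (numpy, names mine) for the failing split: `_do_paste_mask` expects `boxes` with shape (N, 4) so it can split off the four coordinate columns along dim 1, and the IndexError suggests the boxes tensor reaching it has lost that second dimension.

```python
import numpy as np

# Stand-in for the failing call in _do_paste_mask:
#   x0, y0, x1, y1 = torch.split(boxes, 1, dim=1)  # expects boxes of shape (N, 4)
boxes_ok = np.array([[10., 20., 50., 60.]])      # shape (1, 4): one box
x0, y0, x1, y1 = np.split(boxes_ok, 4, axis=1)   # works: four (1, 1) columns

boxes_bad = np.array([10., 20., 50., 60.])       # shape (4,): the (N, 4) layout is gone
try:
    np.split(boxes_bad, 4, axis=1)               # axis 1 does not exist on a 1-D array
except IndexError as e:                          # np.AxisError subclasses IndexError
    print('dimension error:', e)
```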
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 5459, GPU 6977 (MiB)
[TensorRT] INFO: [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +0, GPU +0, now: CPU 5366, GPU 4987 (MiB)
python inference.py ../00000010.jpg ../../configs/mask_rcnn/mask_rcnn_r50_fpn_fp16_1x_coco.py ../../checkpoints/epoch_72"(origin)".pth ../../checkpoints/epoch_72"(origin)"_trt1.pth
How can I fix this?
I would greatly appreciate an answer. Thank you.