peopleSegNetV2 output is black #38

Closed
Edu4444 opened this issue Jun 11, 2021 · 3 comments

Edu4444 commented Jun 11, 2021

Hello. I have tested peopleSegNetV2 and peopleSegNet on a Jetson Nano and on a Jetson TX2.
I used a .jpg and an .h264 file from the DeepStream samples, and as output I got black images and black videos without any detections.

I have the TensorRT OSS plugin installed, DeepStream 5.1, TensorRT 7.1.3, and JetPack 4.5.1.
I am using the up-to-date GitHub files.
The DeepStream sample apps work properly on both devices.

mero@Jetson-HC02:~/deepstream_tlt_apps$ export SHOW_MASK=1
mero@Jetson-HC02:~/deepstream_tlt_apps$ ./apps/tlt_segmentation/ds-tlt-segmentation -c configs/peopleSegNet_tlt/pgie_peopleSegNetv2_tlt_config.txt -i /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.jpg
Now playing: configs/peopleSegNet_tlt/pgie_peopleSegNetv2_tlt_config.txt
Opening in BLOCKING MODE
Opening in BLOCKING MODE 
0:00:24.214130103 10905   0x557a39d640 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::deserializeEngineAndBackend() <nvdsinfer_context_impl.cpp:1702> [UID = 1]: deserialized trt engine from :/home/mero/deepstream_tlt_apps/models/peopleSegNet/peopleSegNetV2_resnet50.etlt_b1_gpu0_fp16.engine
INFO: [Implicit Engine Info]: layers num: 3
0   INPUT  kFLOAT Input           3x576x960       
1   OUTPUT kFLOAT generate_detections 100x6           
2   OUTPUT kFLOAT mask_fcn_logits/BiasAdd 100x2x28x28     

0:00:24.214316410 10905   0x557a39d640 INFO                 nvinfer gstnvinfer.cpp:619:gst_nvinfer_logger:<primary-nvinference-engine> NvDsInferContext[UID 1]: Info from NvDsInferContextImpl::generateBackendContext() <nvdsinfer_context_impl.cpp:1806> [UID = 1]: Use deserialized engine model: /home/mero/deepstream_tlt_apps/models/peopleSegNet/peopleSegNetV2_resnet50.etlt_b1_gpu0_fp16.engine
0:00:24.334049448 10905   0x557a39d640 INFO                 nvinfer gstnvinfer_impl.cpp:313:notifyLoadModelStatus:<primary-nvinference-engine> [UID 1]: Load new model:configs/peopleSegNet_tlt/pgie_peopleSegNetv2_tlt_config.txt sucessfully
Running...
NvMMLiteBlockCreate : Block : BlockType = 256 
[JPEG Decode] BeginSequence Display WidthxHeight 1280x720
in videoconvert caps = video/x-raw(memory:NVMM), format=(string)RGBA, framerate=(fraction)1/1, width=(int)1280, height=(int)720
End of stream
Returned, stopping playback
[JPEG Decode] NvMMLiteJPEGDecBlockPrivateClose done
[JPEG Decode] NvMMLiteJPEGDecBlockClose done
Deleting pipeline
@ALittleBug

PeopleSegNet is an instance segmentation model, so you need to run it via ./apps/tlt_detection/ds-tlt-detection. Refer to the following command and see if the issue persists:
[SHOW_MASK=1] ./apps/tlt_detection/ds-tlt-detection -c configs/frcnn_tlt/pgie_frcnn_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264
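
For the peopleSegNetV2 report above, the detection app would simply be pointed at the peopleSegNet config instead; a minimal sketch, assuming the config path, sample image, and SHOW_MASK usage from the original report:

export SHOW_MASK=1
./apps/tlt_detection/ds-tlt-detection -c configs/peopleSegNet_tlt/pgie_peopleSegNetv2_tlt_config.txt -i /opt/nvidia/deepstream/deepstream-5.1/samples/streams/sample_720p.jpg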

Edu4444 commented Jun 15, 2021

PeopleSegNet is an instance segmentation model, so you need to run it via ./apps/tlt_detection/ds-tlt-detection. Refer to the following command and see if the issue persists:
[SHOW_MASK=1] ./apps/tlt_detection/ds-tlt-detection -c configs/frcnn_tlt/pgie_frcnn_tlt_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264

Thank you.
Now it's working as expected.

Edu4444 closed this as completed Jun 15, 2021

pra-dan commented Oct 11, 2021

Can you both kindly update the solution for the newly added TAO examples? I am trying to run inference with a Mask R-CNN UFF model and engine using the tao-deepstream-segmentation sample, and all I get is dark masks. I even tried the detection sample and only got the input image back as output, resized but with nothing else drawn.
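
For the renamed TAO examples, the same approach should presumably apply: run the instance segmentation model through the detection app with SHOW_MASK=1 rather than through the semantic segmentation app. A rough sketch, where the ds-tao-detection binary location and the peopleSegNet TAO config path are assumptions that may differ between releases:

export SHOW_MASK=1
./apps/tao_detection/ds-tao-detection -c configs/peopleSegNet_tao/pgie_peopleSegNetv2_tao_config.txt -i $DS_SRC_PATH/samples/streams/sample_720p.h264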
