
Triton Server Integration with DeepStream #47

Closed
Darshcg opened this issue Mar 30, 2021 · 11 comments

Comments

@Darshcg

Darshcg commented Mar 30, 2021

Hi @marcoslucianops,

Thanks for your project; it has helped me a lot.
I have run a YOLOv3 model (trained on my custom dataset) on a Jetson Nano using DeepStream with 4 cameras. Next, I want to integrate Triton Server with DeepStream for the same model.
So, my doubts are:
1.) How do I do the integration, and what extra steps are needed?
2.) Can I serve TensorRT models with Triton Server integrated with DeepStream?

Thanks
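(For reference: DeepStream talks to Triton through the Gst-nvinferserver plugin, and Triton can serve TensorRT engines directly as "tensorrt_plan" models, which answers doubt 2 in principle. A minimal model-repository sketch — the repository path, model name, and batch size below are hypothetical placeholders, not taken from this thread:)

/opt/models/
└── yolov3/
    ├── config.pbtxt
    └── 1/
        └── model.plan        # the serialized TensorRT engine

# /opt/models/yolov3/config.pbtxt
name: "yolov3"
platform: "tensorrt_plan"     # tells Triton this is a serialized TensorRT engine
max_batch_size: 1

(One caveat: a TensorRT plan is not portable across devices or TensorRT versions, so the engine must be built on the same Jetson/JetPack setup that serves it.)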

@marcoslucianops
Owner

Hi, I have no experience with Triton Server, sorry.

@Darshcg
Author

Darshcg commented Mar 30, 2021

Hi @marcoslucianops,

Thanks for your reply.

One more doubt: how do I run inference with two models using DeepStream?
I want to run both YOLOv3 and a face detection model in the same DeepStream pipeline. What are the steps to follow?

Face detection model used: https://github.com/biubug6/Face-Detector-1MB-with-landmark

I followed https://github.com/marcoslucianops/DeepStream-Yolo/blob/master/multipleInferences.md, but I have two different models (YOLOv3 and the face detection model).

Thanks,
Darshan

@Darshcg
Author

Darshcg commented Apr 1, 2021

Looking forward to your reply.

@marcoslucianops
Owner

Is your face detection model a Caffe model or a TensorRT-converted model?

@Darshcg
Author

Darshcg commented Apr 1, 2021

Thanks for your reply. It is a TensorRT-converted model.

@marcoslucianops
Owner

marcoslucianops commented Apr 1, 2021

Can you send me your Face Detection model config_infer.txt file?

@Darshcg
Author

Darshcg commented Apr 6, 2021

Hi @marcoslucianops, below is the config_infer.txt file for my FD (face detection) model:

[property]
gpu-id=0
net-scale-factor=0.0039215697906911373
model-color-format=0
model-engine-file=model.engine
labelfile-path=sgie/labels.txt
batch-size=1
network-mode=2
num-detected-classes=1
interval=0
gie-unique-id=2
process-mode=2
network-type=0
cluster-mode=2
maintain-aspect-ratio=1
parse-bbox-func-name=NvDsInferParseCustomCenterNetFace
custom-lib-path=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_infercustomparser_centernet.so

[class-attrs-all]
pre-cluster-threshold=0.3
threshold=0.7

And below is my deepstream_app_config.txt:

[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1

[tiled-display]
enable=1
rows=2
columns=2
width=1280
height=720
gpu-id=0
nvbuf-memory-type=0

[source0]
enable=1
type=1
num-sources=1
gpu-id=0
camera-width=640
camera-height=480
camera-fps-n=20
camera-fps-d=1
camera-v4l2-dev-node=0
cudadec-memtype=0

[sink0]
enable=1
type=2
sync=0
source-id=0
gpu-id=0
nvbuf-memory-type=0

[osd]
enable=1
gpu-id=0
border-width=1
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Serif
show-clock=0
clock-x-offset=800
clock-y-offset=820
clock-text-size=12
clock-color=1;0;0;0
nvbuf-memory-type=0

[streammux]
gpu-id=0
live-source=1
batch-size=1
batched-push-timeout=40000
width=1920
height=1080
enable-padding=0
nvbuf-memory-type=0

[primary-gie]
enable=1
batch-size=1
gpu-id=0
gie-unique-id=1
nvbuf-memory-type=0
config-file=config_infer_primary.txt

[secondary-gie0]
enable=1
batch-size=1
gpu-id=0
gie-unique-id=2
nvbuf-memory-type=0
config-file=centerface.txt

[tests]
file-loop=0

@marcoslucianops
Owner

Add the following to [secondary-gie0] in deepstream_app_config.txt and to [property] in centerface.txt:

operate-on-gie-id=1
# class ids to operate on: 1, 1;2, 2;3;4, 3 etc.
operate-on-class-ids=0
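(For reference, merged with the section posted earlier in the thread, the secondary GIE group would then read:)

[secondary-gie0]
enable=1
batch-size=1
gpu-id=0
gie-unique-id=2
operate-on-gie-id=1
# run the face detector only on objects of class id 0 from the primary GIE
operate-on-class-ids=0
nvbuf-memory-type=0
config-file=centerface.txt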

@Darshcg
Author

Darshcg commented Apr 7, 2021

Added it, and it worked perfectly!
Thank you for your help.

@Darshcg
Author

Darshcg commented Apr 7, 2021

@marcoslucianops,

When I run inference with YOLOv3 using DeepStream with Triton Server on the Jetson Nano, the detections go wrong (as shown in the image below).
[image: IMG_20210406_105918]

I don't know what's going wrong, but when I use local DeepStream (without Triton), I get correct detections.

Can you please help me to resolve the issue?

Thanks,
Darshan

@marcoslucianops
Owner

I think it's a problem/bug in Triton.
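(One thing worth ruling out before blaming Triton: with Gst-nvinferserver, preprocessing is specified in a protobuf-text config rather than the INI-style config_infer file, and a mismatch there — scale factor, channel order, aspect-ratio handling — commonly shifts or breaks detections. A sketch of the fields to compare against the working nvinfer config; the field names follow the nvinferserver schema as I understand it for DeepStream 5.x, so verify them against the installed nvdsinferserver proto definitions:)

preprocess {
  network_format: IMAGE_FORMAT_RGB        # should match model-color-format
  maintain_aspect_ratio: 1                # should match the nvinfer setting
  normalize {
    scale_factor: 0.0039215697906911373   # should match net-scale-factor
  }
}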
