Hello @dusty-nv!

Fantastic repo. I've been following it for a few years now, since DIGITS was first introduced.
I saw in another issue that you are integrating a PyTorch SSD detector into the repo. Can I ask why there is no push to integrate the TLT (Transfer Learning Toolkit) package from NVIDIA into this repo?
I hear they plan to introduce Mask R-CNN and YOLOv3 as well.
If one could use that toolkit for training and then use this repository for doing inference with the models, that would make for an amazing combination!
TLT training --> export to TensorRT engine --> jetson-inference (classification/detection/segmentation)
The toolkit already comes with a model optimizer and a converter that produces a TensorRT engine for the Jetsons. All we need is a way to load the model and run inference in real time with your Python/C++ jetson-inference stack.
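To make the hand-off concrete, here is a minimal sketch of the on-Jetson conversion step in that pipeline. It only builds the `tlt-converter` command line; the flags (`-k` for the export key, `-d` for input dims, `-o` for output node names, `-e` for the engine path) follow NVIDIA's TLT documentation, but the model file names, key, and node names below are hypothetical placeholders, not values from this repo.

```python
# Sketch of the proposed TLT -> TensorRT -> jetson-inference hand-off.
# NOTE: the tlt-converter flags follow NVIDIA's TLT docs, but the file
# names, key, and node names here are hypothetical placeholders.

def tlt_converter_cmd(etlt_model, engine_out, key, input_dims, output_nodes):
    """Build the command line that converts an exported .etlt model
    into a serialized TensorRT engine on the Jetson."""
    return [
        "tlt-converter",
        "-k", key,                                    # key used at export time
        "-d", ",".join(str(d) for d in input_dims),   # C,H,W input dimensions
        "-o", ",".join(output_nodes),                 # output node names
        "-e", engine_out,                             # serialized engine path
        etlt_model,                                   # exported TLT model
    ]

cmd = tlt_converter_cmd(
    "detectnet_v2.etlt", "detectnet_v2.engine",
    key="YOUR_EXPORT_KEY",
    input_dims=(3, 384, 1248),
    output_nodes=["output_bbox/BiasAdd", "output_cov/Sigmoid"],
)
print(" ".join(cmd))
```

The resulting `.engine` file is what jetson-inference would then need to load and run, which is the missing piece this issue is asking about.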
I hope this can someday be a reality. It would really help ensure that this repo and the models it works with stay relevant throughout the years, with support from NVIDIA as a whole.
Thank you for all your hard work!
Hi @mbufi, thanks for your feedback - I will have to take a look at integrating the TLT workflow. For now, I have been aiming to get PyTorch re-training of SSD object detection working, as it can be run onboard the Jetson (for those who may not have access to a training PC/server). The TLT container runs on x86; however, it should not be a problem to run the TensorRT engine it produces.