
Integrating NVIDIA's Transfer Learning Toolkit (Feature Request / Question) #546

Closed
marvision-ai opened this issue Apr 19, 2020 · 3 comments


@marvision-ai

Hello @dusty-nv !

Fantastic repo. I've been following it for a few years now, since DIGITS was first introduced.
I saw in another issue that you are integrating a PyTorch SSD detector into the repo. Can I ask why there is no push to integrate NVIDIA's TLT package into this repo?

They support the following:

Image Classification, supporting backbones:
ResNet10/18/50
VGG16/19
MobileNet V1/V2
AlexNet
SqueezeNet
GoogLeNet

Object Detection:

DetectNet_v2, supporting backbones:
ResNet10/18/50
VGG16/19
GoogLeNet
MobileNet V1/V2

Faster RCNN, supporting backbones:
ResNet10/18/50
VGG16/19
GoogLeNet
MobileNet V1/V2

SSD, supporting backbones:
ResNet10/18
I hear they plan to introduce Mask R-CNN and YOLOv3 as well.

If one could use that toolkit for training and then use this repository for doing inference with the models, that would make for an amazing combination!

TLT training --> export to TensorRT engine --> Jetson Inference (classification/detection/segmentation)

The toolkit already comes with a model optimizer and a converter that produces a TensorRT engine for the Jetsons. All we need is a way to load the model and run inference in real time with your Python/C++ jetson-inference stack.
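To make the ask concrete, here is a rough sketch of what the inference half could look like today, using the TensorRT Python bindings and PyCUDA that ship with JetPack rather than the jetson-inference API itself. This is only my assumption of the glue code: the "model.engine" path, the binding layout (single image input at binding 0), and any pre/post-processing (e.g. decoding DetectNet_v2 coverage/bbox outputs) are placeholders that depend on the network trained in TLT and converted with tlt-converter.

```python
# Sketch: deserialize a TensorRT engine converted from a TLT export and run one
# inference pass. Assumes an explicit-batch engine; an implicit-batch engine
# would call context.execute(batch_size, bindings) instead of execute_v2().
import numpy as np
import tensorrt as trt
import pycuda.autoinit  # noqa: F401 -- creates a CUDA context for this process
import pycuda.driver as cuda

TRT_LOGGER = trt.Logger(trt.Logger.WARNING)

def load_engine(path):
    """Deserialize a serialized TensorRT engine from disk."""
    with open(path, "rb") as f, trt.Runtime(TRT_LOGGER) as runtime:
        return runtime.deserialize_cuda_engine(f.read())

def infer(engine, input_array):
    """Copy one preprocessed input to the GPU, execute, and return the raw outputs."""
    context = engine.create_execution_context()
    bindings, host_bufs, dev_bufs, output_idx = [], [], [], []

    # Allocate pinned host + device buffers for every binding the engine declares.
    for i in range(engine.num_bindings):
        shape = engine.get_binding_shape(i)
        dtype = trt.nptype(engine.get_binding_dtype(i))
        host = cuda.pagelocked_empty(trt.volume(shape), dtype)
        dev = cuda.mem_alloc(host.nbytes)
        bindings.append(int(dev))
        host_bufs.append(host)
        dev_bufs.append(dev)
        if not engine.binding_is_input(i):
            output_idx.append(i)

    # Assumption: binding 0 is the image input.
    np.copyto(host_bufs[0], input_array.ravel())
    cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
    context.execute_v2(bindings)
    for i in output_idx:
        cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])
    return [host_bufs[i] for i in output_idx]

if __name__ == "__main__":
    engine = load_engine("model.engine")  # placeholder path produced by tlt-converter
    in_dtype = trt.nptype(engine.get_binding_dtype(0))
    dummy = np.zeros(trt.volume(engine.get_binding_shape(0)), dtype=in_dtype)
    print([o.shape for o in infer(engine, dummy)])
```

The missing piece would be wiring something like this into your detectNet/imageNet classes, so the raw output tensors get parsed into detections/classifications the same way the built-in networks are.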

I hope this can someday be a reality. It would really be great to make sure the software in this repo and the other models stay relevant throughout the years and have support from NVIDIA as a whole.

Thank you for all your hard work!

@dusty-nv
Owner

Hi @mbufi, thanks for your feedback. I will have to take a look at integrating the TLT workflow. For now, I have been aiming to get PyTorch re-training of SSD object detection working, as it can be run onboard the Jetson (for those who may not have access to a training PC/server). The TLT container runs on x86; however, it should not be a problem to run the TensorRT engine exported from it on the Jetson.

@marvision-ai
Author

@dusty-nv great to hear. Thank you for that.

I totally understand why you are integrating the SSD model for people without a dedicated x86 computer.

I will leave the issue open and check back for future updates on your progress with TLT, if you decide to integrate it.

Cheers

@marvision-ai
Author

@dusty-nv I just watched the new video posted on NVIDIA's YouTube channel about the Jetson IoT platform. Any updates on this?
