
Jetson boards support #489

Closed
alikaz3mi opened this issue Dec 8, 2021 · 8 comments
Labels
enhancement New feature or request

Comments

@alikaz3mi

alikaz3mi commented Dec 8, 2021

Hello. Thank you for your valuable solutions. I would like to know whether the optimized YOLOv5 models can be deployed on Jetson boards. Are they compatible with the Jetson CPU architecture?

@alikaz3mi alikaz3mi added the enhancement New feature or request label Dec 8, 2021
@markurtz
Member

Hi @alikaz3mi, thanks for the question! This is tied to #364, and we expect to have a pathway enabled for this through TensorRT in our 0.10 release, due mid to late January.

Thanks,
Mark

@markurtz
Member

Hi @alikaz3mi, the TensorRT integration has grown in scope quite a bit from what we initially thought was a small issue. It is now targeted for our 0.11 release, which will be coming out in late February. If we push anything to our nightly before then, we will update here.

Thanks,
Mark

@vjsrinivas

@markurtz I noticed the nightly build is out for 0.11. Does it contain any TensorRT support or channel-wise pruning? Also, when should we expect 0.11 to land on the main branch? :)

Thank you,
Vijay Rajagopal

@markurtz
Member

markurtz commented Mar 3, 2022

Hi @vjsrinivas, unfortunately TensorRT is still running behind due to some slowdowns in creating quantized BERT models for our engine. It is now at the top of the list for our 0.12 work, and we expect to have some initial pathways in roughly the next two to three weeks.

For channel pruning, this is available on sparseml-nightly and on main here (the 0.11 release is currently pending QA, and we hope to have it out next week): https://github.com/neuralmagic/sparseml/blob/main/src/sparseml/pytorch/sparsification/pruning/modifier_pruning_structured.py#L284

We're working on documentation for this, but let us know if you have any questions on it in the meantime and we can help get you up and running with channel pruning!
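
In the meantime, here is a minimal, untested sketch of how a recipe-driven pruning run can be wired into a plain PyTorch training loop with SparseML. The recipe path, the synthetic data, and the epoch count are placeholders, and the exact fields for the structured pruning modifier should be taken from the file linked above rather than from this sketch.

```python
# Hypothetical wiring of a SparseML pruning recipe into a PyTorch training
# loop. "recipe.yaml" is assumed to contain a structured (channel) pruning
# modifier such as the one defined in the linked source file; the data
# loader is synthetic so the snippet stays self-contained.
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet50
from sparseml.pytorch.optim import ScheduledModifierManager

model = resnet50(pretrained=False)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
train_loader = DataLoader(
    TensorDataset(torch.randn(8, 3, 224, 224), torch.randint(0, 1000, (8,))),
    batch_size=4,
)

# wrap the optimizer so the recipe's pruning schedule runs during training
manager = ScheduledModifierManager.from_yaml("recipe.yaml")
optimizer = manager.modify(model, optimizer, steps_per_epoch=len(train_loader))

for epoch in range(10):  # a real run would follow the recipe's epoch schedule
    for images, targets in train_loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(model(images), targets)
        loss.backward()
        optimizer.step()

manager.finalize(model)  # remove hooks, leaving the pruned weights in place
```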

@vjsrinivas

vjsrinivas commented Mar 15, 2022

Hey @markurtz, I've been working on getting structured pruning working on ResNet50. The pruning process itself is fine (I'm still adjusting hyperparameters; the accuracy drop-off is too large), but is there an easy way to remove zeroed-out channels in PyTorch? I want to see the inference speed differences right now.

EDIT: I figured out how to remove the zeroed out_channels and recreate a new Conv2d with the pruned shape. But since the output shape changes, the rest of the network needs restructuring as well... I'm not sure there is an easy way to do that.
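
A minimal sketch of that manual approach (with made-up toy layers, just to illustrate the shape problem) might look like the following: it drops the zeroed output channels from one Conv2d and shows why the change has to propagate to the following BatchNorm and to the next convolution's in_channels, which is exactly what makes a full ResNet with residual connections hard to thin by hand.

```python
# Toy example: manually "thin" one Conv2d whose output channels were zeroed
# by pruning, then slice the downstream layers with the same channel indices.
import torch
import torch.nn as nn

def thin_conv(conv):
    """Return a smaller Conv2d with all-zero output channels removed,
    plus the indices of the channels that were kept."""
    keep = (conv.weight.detach().abs().sum(dim=(1, 2, 3)) > 0).nonzero().flatten()
    new_conv = nn.Conv2d(
        conv.in_channels, len(keep), conv.kernel_size,
        stride=conv.stride, padding=conv.padding, bias=conv.bias is not None,
    )
    new_conv.weight.data = conv.weight.data[keep].clone()
    if conv.bias is not None:
        new_conv.bias.data = conv.bias.data[keep].clone()
    return new_conv, keep

conv1 = nn.Conv2d(3, 8, 3, padding=1)
bn1 = nn.BatchNorm2d(8)
conv2 = nn.Conv2d(8, 16, 3, padding=1)

with torch.no_grad():  # simulate channel pruning: zero out half the filters
    conv1.weight[::2] = 0.0

conv1, keep = thin_conv(conv1)

# every layer that consumes conv1's output must be sliced with the same indices
bn1 = nn.BatchNorm2d(len(keep))                       # fresh BN (running stats are lost)
conv2.weight.data = conv2.weight.data[:, keep].clone()
conv2.in_channels = len(keep)

out = conv2(bn1(conv1(torch.randn(1, 3, 32, 32))))
print(out.shape)  # torch.Size([1, 16, 32, 32])
```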

@markurtz
Member

Hey @vjsrinivas, we do have this enabled for models like ResNet-50 to automatically thin the network. Specifically, we have a LayerThinningModifier that can be used.

I've attached an example implementation of that for ResNet-50. Let us know if you need any support or run into any issues (the dependency graph generation can be tricky). Generally, we've seen roughly 40% filter pruning as the upper limit for ResNet-50 before accuracy starts to degrade.

resnet50-structured.yaml.zip
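
As a rough, untested illustration of how a recipe like the attached one might be applied to a model in one shot (the filename assumes the unzipped attachment, and whether apply() realizes the thinning without a training run depends on the recipe contents and the SparseML version; check the LayerThinningModifier documentation for the supported fields):

```python
# Hypothetical one-shot application of the attached structured-pruning /
# thinning recipe to a torchvision ResNet-50.
import torch
from torchvision.models import resnet50
from sparseml.pytorch.optim import ScheduledModifierManager

model = resnet50(pretrained=False)

manager = ScheduledModifierManager.from_yaml("resnet50-structured.yaml")
manager.apply(model)  # restructure the module according to the recipe

# the thinned model should accept the same input while carrying fewer channels
print(model(torch.randn(1, 3, 224, 224)).shape)
```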

@jeanniefinks
Member

Hello @vjsrinivas

As it has been a few weeks with no further comments (and we see your issue #623), I am going to go ahead and close this issue. Please re-open if you have a follow-up. Also, I invite you to "star" our SparseML repo if you have not already! We always appreciate community support!
https://github.com/neuralmagic/sparseml/

Thank you!
Jeannie / Neural Magic

@IamShubhamGupto

@markurtz any update on the Jetson support?
