Spconv 2.2 Development #380

Closed · 3 of 8 tasks
FindDefinition opened this issue Nov 16, 2021 · 7 comments

@FindDefinition (Collaborator) commented Nov 16, 2021

Spconv 2.2 development has started. I currently don't have enough time to work on 2.2 and can't determine a release date.

Spconv 2.2 Core Features

  • (cumm feature) TensorFormat32 support
  • (cumm feature) Optimize CUDA kernels for small-channel-size layers
  • All algorithms will use the same weight layout (KRSC; see the layout sketch after this list)
  • Ampere feature support
  • NVRTC support (JIT kernels; a ConvKernel instance can be provided at runtime)
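
For reference, here is a minimal sketch (my illustration, not spconv code) of what the KRSC layout means: it is the cuDNN-style channels-last filter layout, where K is the output-channel count, R/S (plus depth for 3D) are the kernel extents, and C is the input-channel count. Converting a dense PyTorch Conv3d weight from its native (K, C, kD, kH, kW) order is a single permute:

```python
import torch

K, C, kD, kH, kW = 32, 16, 3, 3, 3

# PyTorch's native Conv3d weight layout: (out_channels, in_channels, kD, kH, kW)
w_kcrs = torch.randn(K, C, kD, kH, kW)

# KRSC / channels-last layout: (out_channels, kD, kH, kW, in_channels)
w_krsc = w_kcrs.permute(0, 2, 3, 4, 1).contiguous()
assert w_krsc.shape == (K, kD, kH, kW, C)
```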

Spconv 2.2 Misc Features

  • Add more examples based on the datasets in torch-points3d
  • More and better tests; compare results across multiple implementations, e.g. CPU vs. CUDA (see the test sketch after this list)
  • Support a larger spatial-shape volume (2^31 -> 2^63)
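
As an illustration of the cross-implementation tests mentioned above, here is a minimal sketch (my own, not from spconv's test suite) that runs the same submanifold convolution on CPU and on CUDA and checks that the results agree. It assumes the spconv 2.x Python API (spconv.pytorch.SparseConvTensor, SubMConv3d) and a CPU implementation of the layer; exact signatures may differ by version.

```python
import torch
import spconv.pytorch as spconv

def test_cpu_vs_cuda():
    torch.manual_seed(0)
    in_c, out_c, spatial_shape = 16, 32, [16, 16, 16]

    # One shared set of random active voxels: (batch_idx, z, y, x), deduplicated.
    coords = torch.unique(torch.randint(0, 16, (128, 3), dtype=torch.int32), dim=0)
    indices = torch.cat([torch.zeros((coords.shape[0], 1), dtype=torch.int32), coords], dim=1)
    features = torch.randn(coords.shape[0], in_c)

    conv = spconv.SubMConv3d(in_c, out_c, kernel_size=3, bias=True)

    # CPU run.
    x_cpu = spconv.SparseConvTensor(features, indices, spatial_shape, batch_size=1)
    y_cpu = conv(x_cpu).dense()

    # CUDA run with the same weights and the same input.
    x_gpu = spconv.SparseConvTensor(features.cuda(), indices.cuda(), spatial_shape, batch_size=1)
    y_gpu = conv.cuda()(x_gpu).dense().cpu()

    # Allow small numerical differences between the two kernels.
    assert torch.allclose(y_cpu, y_gpu, atol=1e-4), (y_cpu - y_gpu).abs().max().item()
```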

[2021-11-26] Performance issues have been found on RTX 3090/A100 GPUs (they are slower than my 3080 Laptop...), so the int8 task and better tuning are moved to spconv 2.3; NVRTC/Ampere feature support moves to spconv 2.2.
[2022-01-04] spconv 2.2 has been delayed due to busy work and another important project (a next-generation framework to speed up development of all projects, including spconv).
[2022-01-20] spconv 2.2 release date: 2022.2.10
If you want a new feature that can be implemented easily, you can leave a comment with your feature request (no guarantee it will be implemented).

FindDefinition pinned this issue Nov 16, 2021
@Fingolfin007

Could you clarify the level of JIT support in Spconv 2.2? I intend to train a model in Python but run inference with JIT in C++. From what I saw in the README, spconv 2.x currently does not support JIT. Does that mean JIT will be supported in 2.2?

@FindDefinition (Collaborator, Author)

@Fingolfin007

  1. torch.jit won't be supported because we want to stay independent of the PyTorch binary; currently the C++ part of our code doesn't contain any PyTorch code. If we supported torch.jit, we could no longer use pip install and would have to build a different package for every major release of PyTorch.
  2. TensorRT support is on our roadmap, but it needs a lot of development time. I'm not a full-time developer of spconv, so I may need several months to add TensorRT support.

@wyjforwjy

Hi, how is spconv 2.2 progressing? Looking forward to your great work.

@Asthestarsfalll

Spconv 2.x does not support JIT, so what inference framework does spconv support?

@sissini

sissini commented Feb 23, 2022

Spconv 2.x does not support JIT, so what inference framework does spconv support?

I'm equally confused about JIT. Does it mean we cannot convert PyTorch to ONNX without JIT support? Can anyone suggest a way to deploy spconv? Directly add a plugin op in torch2tensorrt, or some other way? Hoping for some suggestions.

@Bitfultea

Hi @sissini, have you managed to find a solution for deploying spconv 2.x?

@sandeepnmenon

torch2tensorrt

Does this method work? Can we directly write a custom plugin in torch2tensorrt for the sparse convolutions?
