ENH : add support for GPU accelerated classifiers #207

Open
mortonjt opened this issue May 17, 2021 · 1 comment

Comments

@mortonjt

mortonjt commented May 17, 2021

Improvement Description
It would be nice to add support for models that can be loaded on the GPU.
A number of libraries, such as PyTorch, cuML, and skorch, make this fairly easy to implement.

Current Behavior
All of the existing methods are CPU-bound, which can become a bottleneck when performing many predictions.

Proposed Behavior
Many of the classifiers in this package already have GPU implementations in skorch / cuML. It would just be a matter of adding the appropriate dependencies / checks / flags to make sure that everything can run.

References

  1. PyTorch : https://pytorch.org/
  2. cuML : https://github.com/rapidsai/cuml
  3. skorch : https://github.com/skorch-dev/skorch
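To illustrate the "dependencies / checks / flags" idea from the proposal, here is a minimal sketch of runtime backend selection. The `make_random_forest` helper and its `use_gpu` flag are hypothetical names for illustration, not an existing API in this package; the sketch assumes cuML's `RandomForestClassifier` mirrors enough of the scikit-learn estimator interface to be a drop-in substitute.

```python
def make_random_forest(use_gpu=False, **kwargs):
    """Return a random forest classifier, preferring cuML's GPU
    implementation when requested and available.

    Falls back silently to scikit-learn's CPU implementation if
    cuML is not installed, so CPU-only environments keep working
    without any extra configuration.
    """
    if use_gpu:
        try:
            # cuML follows much of the scikit-learn estimator API,
            # so the returned object supports fit/predict as usual.
            from cuml.ensemble import RandomForestClassifier
            return RandomForestClassifier(**kwargs)
        except ImportError:
            pass  # cuML not installed; fall through to the CPU path
    from sklearn.ensemble import RandomForestClassifier
    return RandomForestClassifier(**kwargs)


# On a machine without cuML, this transparently returns the
# scikit-learn estimator even though use_gpu=True was requested.
clf = make_random_forest(use_gpu=True, n_estimators=10)
print(type(clf).__name__)
```

The same try-import pattern generalizes to skorch-wrapped PyTorch models: keep the GPU libraries as optional extras, probe for them at call time, and expose a single flag rather than making PyTorch or cuML hard dependencies.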
@valentynbez
Contributor

I wonder whether that's really needed. qiime2 is limited in that regard, and introducing PyTorch or cuML as an additional huge dependency for the package would defeat the purpose.
Users unfamiliar with ML will run it on a small number of samples, while advanced ML practitioners will start with scikit-learn or PyTorch directly.
