
feat(machine-learning): support cuda 12 #7569

Merged: 4 commits into main on Mar 3, 2024

Conversation

@martabal (Member) commented Mar 2, 2024

Add support for CUDA 12:

  • ONNX Runtime has supported it since 1.17 (docs)
  • The default CUDA version with Ubuntu 23.10 is CUDA 12
  • Keep the default version at CUDA 11
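As a minimal sketch (not part of the PR), one way to check at runtime whether the installed ONNX Runtime is new enough for the CUDA 12 builds, and whether the CUDA execution provider is actually available:

```python
def supports_cuda12(ort_version: str) -> bool:
    """ONNX Runtime ships CUDA 12 builds starting with release 1.17."""
    major, minor = (int(p) for p in ort_version.split(".")[:2])
    return (major, minor) >= (1, 17)

# Guarded demo: only runs if onnxruntime is installed.
try:
    import onnxruntime as ort

    print(f"onnxruntime {ort.__version__}, "
          f"CUDA 12 capable: {supports_cuda12(ort.__version__)}")
    print("Available providers:", ort.get_available_providers())
except ImportError:
    print("onnxruntime is not installed")
```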

@cloudflare-pages bot commented Mar 2, 2024

Deploying with Cloudflare Pages

Latest commit: 1a8ddde
Status: ✅ Deploy successful!
Preview URL: https://05e22c0c.immich.pages.dev
Branch Preview URL: https://feat-support-cuda12.immich.pages.dev

@mertalev (Contributor) commented Mar 2, 2024

I'm not sure we actually need to keep 11. Based on issues I've seen, 525 is in practice the minimum driver version for ONNX Runtime, and its minimum compute capability is 5.2. I think anyone who's using CUDA right now is already in an environment where 12 will just work.
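A rough preflight sketch of the check described above. The 525 / 5.2 minimums come from the comment; the `nvidia-smi` query fields assume a reasonably recent driver, and the script itself is illustrative, not part of the PR:

```python
import subprocess

MIN_DRIVER_MAJOR = 525    # minimum driver for ORT CUDA 12 builds (per the comment)
MIN_COMPUTE_CAP = (5, 2)  # minimum compute capability (per the comment)

def meets_requirements(driver_version: str, compute_cap: str) -> bool:
    """Parse strings like '535.129.03' and '8.6' and compare to the minimums."""
    driver_major = int(driver_version.split(".")[0])
    cap = tuple(int(p) for p in compute_cap.split("."))
    return driver_major >= MIN_DRIVER_MAJOR and cap >= MIN_COMPUTE_CAP

# Guarded demo: only runs on a machine with nvidia-smi available.
try:
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=driver_version,compute_cap",
         "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout
    for line in out.strip().splitlines():
        driver, cap = (f.strip() for f in line.split(","))
        print(f"driver {driver}, compute {cap}: ok={meets_requirements(driver, cap)}")
except (FileNotFoundError, subprocess.CalledProcessError):
    print("nvidia-smi not available")
```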

@martabal (Member, Author) commented Mar 2, 2024

The original goal was only to support CUDA 12, but if you want to move away from CUDA 11 completely, I'm all in 😄

machine-learning/pyproject.toml — review thread (outdated, resolved)
@martabal (Member, Author) commented Mar 2, 2024

What do you think about still supporting CUDA 11?
It could be useful for non-Docker users, so they can still install it with something like `poetry install ... --with cuda11`
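A hypothetical sketch of what such optional Poetry groups could look like in machine-learning/pyproject.toml — the group names and version pins are illustrative, not taken from the PR:

```toml
# Hypothetical optional groups; names and pins are illustrative.
[tool.poetry.group.cuda11]
optional = true

[tool.poetry.group.cuda11.dependencies]
onnxruntime-gpu = "^1.17"

[tool.poetry.group.cuda12]
optional = true

[tool.poetry.group.cuda12.dependencies]
onnxruntime-gpu = "^1.17"
```

Both groups pin the same package name, which matters here: Poetry resolves all groups into a single shared lock file, so only one variant can actually win.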

@mertalev (Contributor) commented Mar 2, 2024

I'm not sure if Poetry can handle different flavors of the same package, since the optional dependencies are all squashed into the same lock file. It didn't work for CPU/GPU variants of PyTorch, at least. Only one of them actually gets resolved.

@mertalev (Contributor) commented Mar 2, 2024

But that might have changed since I tried it. You can test and check whether it can actually differentiate the cuda-11 and cuda-12 flavors.

@martabal (Member, Author) commented Mar 2, 2024

Yeah, you're right, we can't: python-poetry/poetry#7748

@mertalev merged commit 8ce18b3 into main on Mar 3, 2024 — 25 checks passed
@mertalev deleted the feat/support-cuda12 branch on March 3, 2024 at 04:36
@OperKH mentioned this pull request on Apr 6, 2024