
CUDA 11.8 and TensorFlow in a Docker container - cannot install old CUDA on a server #962

Open
axeljerabek opened this issue Aug 25, 2023 · 0 comments
Labels
enhancement New feature or request


Describe the feature you'd like to request

Many servers on the internet are kept up to date, running the latest OS and the latest NVIDIA drivers (e.g. 535). It is not possible to install the old CUDA 11.8 on a system with the 535 NVIDIA driver.
If the Recognize app could use CUDA 11.8 and TensorFlow inside a Docker container, it would have several advantages:

  1. The older CUDA 11.8 could still be used, so everyone could use the Recognize app with GPU acceleration.
  2. For most people it would be easier to pull a ready-made Docker container than to install CUDA and the NVIDIA drivers manually (which can be painful).
  3. Maintaining the container would be far easier than answering all the questions about why the GPU part of the software does not work.

Hope this works someday. Best greetings :)
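The setup requested above can be sketched with a compose file. This is a minimal sketch, not a supported Recognize configuration; the service name is an assumption. The official `tensorflow/tensorflow:2.13.0-gpu` image bundles CUDA 11.8 and cuDNN, so the host only needs the NVIDIA driver (e.g. 535) plus the NVIDIA Container Toolkit:

```yaml
# Hypothetical compose sketch: run TensorFlow with CUDA 11.8 bundled
# inside the container, so no system-wide CUDA install is required on
# the host. Requires the NVIDIA Container Toolkit on the host.
services:
  recognize-gpu:                              # assumed service name
    image: tensorflow/tensorflow:2.13.0-gpu   # ships CUDA 11.8 + cuDNN
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: 1
              capabilities: [gpu]
```

GPU visibility inside the container can then be checked with something like `docker compose run recognize-gpu python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"`.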

Describe the solution you'd like

Ship the GPU parts of Recognize in a Docker container.

Describe alternatives you've considered

There are none.

@axeljerabek axeljerabek added the enhancement New feature or request label Aug 25, 2023
@github-actions github-actions bot added this to Backlog in Recognize Aug 25, 2023