
easynmt/api:2.0-cuda11.1 doesn't seem to use my GPU #32

Closed
JohnWinner opened this issue May 14, 2021 · 2 comments

@JohnWinner

I just got a new CentOS 7 server with a GeForce RTX 2080 GPU.

  • using "nvidia-smi" I could see the card
  • installed Docker
  • started a new EasyNMT instance with "docker run -d --name easynmt --restart always -v /easynmt_cache:/cache -p 24080:80 easynmt/api:2.0-cuda11.1"

Translation through the API was very slow, so I tried the following:

  • docker exec -it easynmt bash
  • installed curl
  • downloaded the test_translation_speed.py
  • ran "python translation_speed.py opus-mt"
  • I only got 5.29 sentences/s
  • ran torch.cuda.is_available() and it returned False (see the check below)
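
For reference, a minimal sketch of the check I mean, run inside the container (via "docker exec -it easynmt bash" and then "python"); the device name printed will vary with the hardware:

    import torch

    # True only if the container can reach the GPU and a CUDA build of PyTorch is installed
    print(torch.cuda.is_available())

    # If the GPU is visible, print its name, e.g. the RTX 2080 in this case
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))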

Correct me if I'm wrong, but it seems my EasyNMT instance isn't using the GPU. Did I do something wrong?

Thank you for your help...

@nreimers
Member

You must set up Docker so that it can access the GPUs:
https://docs.docker.com/config/containers/resource_constraints/#gpu
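
Concretely, a sketch of the adjusted start command, assuming the NVIDIA Container Toolkit is installed on the host so Docker can pass the GPU through; it is the command from the original report with the --gpus flag added:

    # remove the old container, then start it again with GPU access enabled
    docker rm -f easynmt
    docker run -d --name easynmt --restart always --gpus all \
        -v /easynmt_cache:/cache -p 24080:80 easynmt/api:2.0-cuda11.1

Inside the container, torch.cuda.is_available() should then return True.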

@JohnWinner
Author

Worked!
I got 46.93 sentences/second! Super!
Thanks a lot!
