
CPU usage #1000

Closed
wangjiawen2013 opened this issue Mar 11, 2021 · 2 comments · Fixed by #1001

wangjiawen2013 commented Mar 11, 2021

Hi,
I trained a model (70,000 cells and 4,000 genes) on my server and found that it used all the CPUs available by default (4 physical cores and 64 logical processors). How can I limit the computational resources it occupies?

Here is the info for my server:
LSB Version: :core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
Distributor ID: CentOS
Description: CentOS Linux release 7.5.1804 (Core)
Release: 7.5.1804
Codename: Core

64 Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz

adamgayoso (Member) commented
You'll want to use this:

https://pytorch.org/docs/stable/generated/torch.set_num_threads.html
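A minimal sketch of how this would look: calling `torch.set_num_threads` once, before training, caps the number of threads PyTorch uses for intra-op CPU parallelism. The value 20 here is just an illustrative choice; tune it for your machine.

```python
import torch

# Cap PyTorch's intra-op CPU parallelism (e.g. for shared servers).
# Call this before any training or heavy tensor ops.
torch.set_num_threads(20)

# Verify the setting took effect.
print(torch.get_num_threads())  # → 20
```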

On that note, we should provide a convenience wrapper for this in scvi.settings.

wangjiawen2013 (Author) commented

Thanks for your help! It works well.
I also found something interesting: training ran faster with only 20 threads (setting torch.set_num_threads(20) explicitly) than with all 64 threads (the default on my machine when torch.set_num_threads is not set).
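This kind of slowdown from oversubscription (thread contention and cache thrashing when too many threads share a few physical cores) is easy to measure directly. Below is a hedged benchmarking sketch, not taken from the issue: it times a hypothetical matrix-multiply workload at a few thread counts so you can pick a good value for your own hardware.

```python
import time
import torch

def time_matmul(n_threads, size=2000, repeats=3):
    """Time `repeats` matrix multiplies of a size x size tensor
    with the given intra-op thread count (illustrative workload)."""
    torch.set_num_threads(n_threads)
    x = torch.randn(size, size)
    start = time.perf_counter()
    for _ in range(repeats):
        x @ x
    return time.perf_counter() - start

# Compare a few thread counts; the fastest is often well below the
# logical-processor count on oversubscribed machines.
for n in (1, 4, 20):
    print(f"{n:>2} threads: {time_matmul(n):.2f} s")
```

The best setting depends on the workload and on what else is running on the server, so it is worth rerunning this under realistic load.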
