
Support CPU/GPU from the same binary? #203

Closed
emmenlau opened this issue Feb 7, 2020 · 7 comments

Comments

@emmenlau
Contributor

emmenlau commented Feb 7, 2020

It would be really awesome if the backend (CPU/GPU) could be selected at runtime. I think that is currently not possible? Could this be added?

@zeyiwen
Collaborator

zeyiwen commented Feb 8, 2020

This is not easy to do. The reason is that the binary for CPUs is compiled with gcc, while the binary for GPUs is compiled with nvcc. It may be possible to use nvcc as the only compiler, but the potential problem is that users of the binary would then need nvcc or a CUDA environment installed locally.

One possible way to use one binary for both CPU and GPU is to implement ThunderSVM with OpenCL.

@emmenlau
Contributor Author

emmenlau commented Feb 8, 2020

Thanks for the quick reply @zeyiwen ! I'm not sure I fully understand. Currently I can build thundersvm twice, once for CPU and once for GPU. Then I get two libraries, both of which I can link into the same executable. If their methods were distinguishable (for example, via separate namespaces), I could use them concurrently, no?

That is what I'm aiming at. Maybe there could be another, higher abstraction like thundersvm::train() over the current interface, that under the hood calls either thundersvm::cuda::train() or thundersvm::cpu::train() transparently depending on a runtime parameter?

@zeyiwen
Collaborator

zeyiwen commented Feb 9, 2020

Your machine has the environment to run both versions of ThunderSVM. Some users don't have GPUs or the CUDA environment installed, so the CUDA code of ThunderSVM can't be compiled on their machines.

Our current implementation is to disable the CUDA code when compiling the binary for CPUs, and to disable some C++ code when compiling the binary for GPUs.

There could be a solution for users who have GPUs (like your case), where both the CPU version and GPU version are compiled and combined as one binary. However, one concern I have in this scenario is that, if users have GPUs, they would probably use GPUs. So having a CPU version for those users may not be too compelling.

@emmenlau
Contributor Author

emmenlau commented Feb 9, 2020

Dear @zeyiwen I see that my question was not very clear! Sorry for that.

I did not mean that all users would need to have CUDA. I only meant that the decision between CUDA and CPU could be a runtime choice. Of course users without CUDA will only have the choice between CPU and CPU :-) But users with CUDA will be able to switch back and forth.

What is my motivation? CUDA is not as portable as the CPU mode. I would love to have a portable thundersvm-train executable: it should use CUDA if available, but fall back to CPU if CUDA is unavailable. Even better, let users decide between the two modes. Sometimes CPU can be faster than, for example, an older laptop GPU. For us, thundersvm CPU mode is faster than GPU mode for small inferences. It would be nice to have a choice then.

Also, think about thundersvm included in Linux distributions. They probably would not want to recompile for each user specifically. It would be much more portable to have a single binary with multiple runtime options.

@zeyiwen
Collaborator

zeyiwen commented Feb 9, 2020

Thanks @emmenlau, I got your idea. This is doable. Let me label this issue as an enhancement of ThunderSVM. We will keep this in mind for a future upgrade.

@emmenlau
Contributor Author

emmenlau commented Feb 9, 2020

It's really just an idea, but I hope it can help make thundersvm even more broadly applicable.

@emmenlau
Contributor Author

I guess this can be closed now.
