Support GPUs via torch #144

Open
talonchandler opened this issue Jul 26, 2023 · 1 comment
Labels: enhancement (New feature or request), GPU (Accelerated compute devices)

Comments

@talonchandler (Collaborator) commented Jul 26, 2023

waveorder's simulations and reconstructions are moving to torch following the new models structure, and along the way we decided to temporarily drop GPU support in favor of prioritizing the migration.

We would like to restore GPU support for many of our operations, especially our heaviest reconstructions.

@ziw-liu, can you comment on the easiest path you see to GPU support?

@ziw-liu (Contributor) commented Jul 26, 2023

Conceptually, if every operation is functional (as in torch.nn.functional), then a GPU switch won't even be necessary -- the computation will automatically happen on the device where the input tensor is stored, and internally created tensors can use tensor(..., device=input.device). I don't think it will be hard (torch is a GPU-first library, after all); we just need to carefully test and fix things.

Edit: here is a simple example of a tensor-in-tensor-out function that is device-agnostic.
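A minimal sketch of the idea (the function name and the low-pass filter it applies are illustrative placeholders, not waveorder code): every tensor created inside the function is placed on input.device, so the same call runs on CPU or GPU with no explicit switch.

```python
import torch


def lowpass_filter(image: torch.Tensor, cutoff: float = 0.25) -> torch.Tensor:
    """Illustrative tensor-in, tensor-out op that runs on whatever device the input lives on."""
    # Internally created tensors inherit the input's device (and dtype),
    # so no CPU/GPU branching is needed anywhere in the function body.
    freq = torch.fft.fftfreq(image.shape[-1], device=image.device)
    mask = (freq.abs() < cutoff).to(image.dtype)
    return torch.fft.ifft(torch.fft.fft(image, dim=-1) * mask, dim=-1).real


# The same call works on CPU or GPU, depending only on where the input tensor is stored:
y_cpu = lowpass_filter(torch.randn(8, 64))
if torch.cuda.is_available():
    y_gpu = lowpass_filter(torch.randn(8, 64, device="cuda"))  # result stays on the GPU
```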

@ziw-liu added the GPU (Accelerated compute devices) and enhancement (New feature or request) labels on Sep 21, 2023