waveorder's simulations and reconstructions are moving to torch following the new models structure, and along the way we decided to temporarily drop GPU support in favor of prioritizing the migration.
We would like to restore GPU support for many of our operations, especially our heaviest reconstructions.
@ziw-liu, can you comment on the easiest path you see to GPU support?
Conceptually, if every operation is functional (as in torch.nn.functional), then a GPU switch won't even be necessary -- the computation will automatically happen on the device where the input tensor is stored, and internally created tensors can use tensor(..., device=input.device). I don't think it will be hard (torch is a GPU-first library after all); we just need to carefully test and fix things.
Edit: a simple example of a tensor-in-tensor-out function that is device-agnostic.
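A minimal sketch of what such a function could look like (the `soft_threshold` helper and its signature are illustrative, not from waveorder):

```python
import torch


def soft_threshold(x: torch.Tensor, threshold: float) -> torch.Tensor:
    """Device-agnostic soft-thresholding: tensor in, tensor out."""
    # Constants created inside the function inherit the input's device
    # and dtype, so no explicit CPU/GPU switch is needed.
    t = torch.tensor(threshold, device=x.device, dtype=x.dtype)
    return torch.sign(x) * torch.clamp(x.abs() - t, min=0.0)


# The same call works unchanged on CPU or GPU:
cpu_out = soft_threshold(torch.linspace(-1.0, 1.0, 5), 0.5)
if torch.cuda.is_available():
    gpu_out = soft_threshold(torch.linspace(-1.0, 1.0, 5, device="cuda"), 0.5)
```

The output lands on whatever device the caller's tensor lives on, so a reconstruction pipeline built from such functions becomes GPU-capable simply by moving its inputs with `.to("cuda")`.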