
Allow .to() usage for analog layers #140

Closed
diego-plan9 opened this issue Feb 25, 2021 · 1 comment
Labels
enhancement New feature or request torch Features related to the torch integration

Comments

@diego-plan9
Member

Description and motivation

Currently, the .to() method of analog layers is not fully functional, and the recommended way of moving layers to the GPU is via .cuda() directly. Ideally, we should support:

  • moving the layers back to CPU
  • seamless usage of .to() with both devices

Proposed solution

The .apply() and ._apply() methods used internally by torch for this purpose are likely to make the implementation tricky, as they are meant to operate recursively on the layer Parameters and Buffers only. We should evaluate whether it is feasible to tackle this fully without turning the Tile into a Tensor-like structure (which is likely desirable, but longer term). As a first stage, focusing on AnalogSequential, where we have more control over the recursion, can be an option.
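The AnalogSequential idea above could be sketched roughly as follows. This is a minimal, dependency-free illustration (the Tile, AnalogLayer, and analog_to names are hypothetical stand-ins, not the actual aihwkit API): since Module._apply only visits Parameters and Buffers, it never sees the tile object, so a container that controls the recursion can move tiles explicitly in its own to().

```python
class Tile:
    """Stand-in for an analog tile: state that lives outside
    Parameters/Buffers, so torch's _apply recursion never touches it."""
    def __init__(self):
        self.device = "cpu"


class AnalogLayer:
    """Stand-in analog layer holding a tile."""
    def __init__(self):
        self.tile = Tile()

    def analog_to(self, device):
        # Hypothetical hook: move the tile explicitly, since a plain
        # Module.to() / _apply pass would not reach it.
        self.tile.device = device


class AnalogSequential:
    """Container that owns the recursion, as proposed in the issue:
    its to() walks the children and moves each tile itself."""
    def __init__(self, *layers):
        self.layers = list(layers)

    def to(self, device):
        for layer in self.layers:
            layer.analog_to(device)
        return self


model = AnalogSequential(AnalogLayer(), AnalogLayer())
model.to("cuda")
print([layer.tile.device for layer in model.layers])  # ['cuda', 'cuda']
model.to("cpu")  # moving back to CPU works symmetrically
print([layer.tile.device for layer in model.layers])  # ['cpu', 'cpu']
```

In a real implementation the container would also still call the parent Module.to() so that ordinary Parameters and Buffers are moved by the stock machinery; only the tile handling needs the extra pass.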

Alternatives and other information

@diego-plan9 diego-plan9 added enhancement New feature or request torch Features related to the torch integration labels Feb 25, 2021
@maljoras
Collaborator

This should be possible by now. We can open it again if necessary.
