Description and motivation

Currently, the .to() method of analog layers is not fully functional. At the moment, the recommended way of moving layers to GPU is to call .cuda() directly. Ideally, we should support:
moving the layers back to CPU
seamless usage of .to() with both devices
Proposed solution
The .apply() and ._apply() methods used internally by torch for this purpose are likely to make the implementation tricky, as they are meant to recurse only over the layer's Parameters and Buffers. We should evaluate whether it is feasible to tackle this fully without turning the Tile into a Tensor-like structure (which is likely desirable, but a longer-term effort). As a first stage, focusing on AnalogSequential, where we have more control over the recursion, can be an option.
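To make the failure mode and the proposed container-level workaround concrete, here is a minimal, self-contained sketch. It deliberately uses simplified stand-ins for torch.nn.Module and the analog classes (Parameter, Module, Tile, AnalogLayer, AnalogSequential below are illustrative names, not the real torch or aihwkit API): because _apply only visits parameters, buffers, and submodules, any extra state such as a tile is silently left behind by .to(); a container that controls its own recursion can move it explicitly.

```python
class Parameter:
    """Stand-in for torch.nn.Parameter: just tracks a device tag."""
    def __init__(self, device="cpu"):
        self.device = device

    def to(self, device):
        return Parameter(device)


class Module:
    """Stand-in for torch.nn.Module with torch-like _apply semantics."""
    def __init__(self):
        self._parameters = {}
        self._modules = {}

    def _apply(self, fn):
        # Like torch, recurse only over parameters and submodules;
        # arbitrary extra attributes are never visited.
        for name, param in self._parameters.items():
            self._parameters[name] = fn(param)
        for module in self._modules.values():
            module._apply(fn)
        return self

    def to(self, device):
        return self._apply(lambda p: p.to(device))


class Tile:
    """Opaque analog-tile state: not a Parameter, so _apply never sees it."""
    def __init__(self):
        self.device = "cpu"


class AnalogLayer(Module):
    def __init__(self):
        super().__init__()
        self._parameters["weight"] = Parameter()
        self.tile = Tile()  # extra state that a plain .to() will skip


class AnalogSequential(Module):
    """Container that controls the recursion: after the usual _apply pass,
    it also moves each child's tile (sketch of the proposed first stage)."""
    def to(self, device):
        super().to(device)
        for module in self._modules.values():
            tile = getattr(module, "tile", None)
            if tile is not None:
                tile.device = device
        return self


# Plain layer: the parameter moves, but the tile is left on CPU.
layer = AnalogLayer()
layer.to("cuda")
print(layer._parameters["weight"].device)  # cuda
print(layer.tile.device)                   # cpu -- the bug described above

# Container-level handling: both the parameter and the tile move.
seq = AnalogSequential()
seq._modules["0"] = AnalogLayer()
seq.to("cuda")
print(seq._modules["0"].tile.device)       # cuda
```

The real implementation would more likely hook ._apply() (so .cpu(), .cuda(), and .to() all benefit) rather than override .to() alone, but the sketch shows why intercepting the recursion at the container level is the tractable first step.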
Alternatives and other information