torch.cat without copying memory #70600
Labels
- `module: numpy` (Related to numpy support, and also numpy compatibility of our operators)
- `module: viewing and reshaping`
- `triaged` (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)
🚀 The feature, motivation and pitch
Principle
Today, concatenation in PyTorch allocates a new tensor. I would like to know whether it is possible to concatenate contiguous and/or non-contiguous tensors without duplicating memory.
Example 1: contiguous concatenation

The code sketched below allocates a new tensor `concatenated_tensor`. I'd like to enable the same scenario, but have `concatenated_tensor` be a view of `tensor1` and `tensor2`. In terms of UX, I don't know what to propose.

Note: since I'm a new PyTorch user, maybe the word "view" is not appropriate. The low-level idea is to consider `concatenated_tensor` as a list of pointers to tensors.
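A minimal sketch of the contiguous case, assuming two 1-D float tensors (the tensor names follow the issue text; shapes and values are illustrative):

```python
import torch

# Two contiguous 1-D tensors (shapes assumed for illustration)
tensor1 = torch.arange(4, dtype=torch.float32)
tensor2 = torch.arange(4, 8, dtype=torch.float32)

# torch.cat allocates a brand-new buffer and copies both inputs into it
concatenated_tensor = torch.cat([tensor1, tensor2])

# The result does not share storage with either input
print(concatenated_tensor.data_ptr() == tensor1.data_ptr())  # False
```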
Example 2: non-contiguous concatenation
Next, I would like to enable the following scenario, if possible:
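A sketch of what the non-contiguous case might look like (the shapes and the column slicing are assumptions for illustration):

```python
import torch

base1 = torch.arange(12, dtype=torch.float32).reshape(3, 4)
base2 = torch.arange(12, 24, dtype=torch.float32).reshape(3, 4)

# Column slices are non-contiguous views of the original storage
col1 = base1[:, 0]
col2 = base2[:, 0]
print(col1.is_contiguous(), col2.is_contiguous())  # False False

# torch.cat still works here, but it materialises a new contiguous buffer;
# the request is for a "virtual" concatenation that keeps pointing at the
# original storages instead of copying.
concatenated_tensor = torch.cat([col1, col2])
print(concatenated_tensor.data_ptr() == base1.data_ptr())  # False
```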
Alternatives
No response
Additional context
Discussed in #70283 with @ejguan.
See discussion/34609.
cc @mruberry @rgommers