
[feature request] Implement torch.stack() / _cat() also for torch.HalfTensor #6968

Closed
timothygebhard opened this issue Apr 25, 2018 · 7 comments
Labels
todo Not as important as medium or high priority tasks, but we will work on these.

Comments

@timothygebhard

Issue Description

It seems that stacking is currently (PyTorch 0.4) not supported for tensors of type HalfTensor; see the code example below.

I don't know whether this is intentional for some reason (hence I didn't label this issue as a bug), but I couldn't find any issue or documentation pointing to it. Since some utilities, like torch.utils.data.DataLoader, depend on torch.stack() (in this case implicitly, through the default_collate() method), it would be great if either stacking for HalfTensors could be implemented, or the documentation could be updated to contain some kind of hint.

Code Example

Python 3.6.5 (default, Mar 30 2018, 06:41:49)
Type 'copyright', 'credits' or 'license' for more information
IPython 6.3.1 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import torch

In [2]: torch.__version__
Out[2]: '0.4.0'

In [3]: tensor_1_float = torch.tensor([1, 2, 3], dtype=torch.float)

In [4]: tensor_2_float = torch.tensor([4, 5, 6], dtype=torch.float)

In [5]: torch.stack([tensor_1_float, tensor_2_float])
Out[5]:
tensor([[ 1.,  2.,  3.],
        [ 4.,  5.,  6.]])

In [6]: tensor_1_half = torch.tensor([1, 2, 3], dtype=torch.half)

In [7]: tensor_2_half = torch.tensor([4, 5, 6], dtype=torch.half)

In [8]: torch.stack([tensor_1_half, tensor_2_half])
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-8-10f49aa85ea5> in <module>()
----> 1 torch.stack([tensor_1_half, tensor_2_half])

RuntimeError: _cat is not implemented for type torch.HalfTensor

On PyTorch 0.3.1, the same operation fails with an analogous error:

TypeError: Type torch.HalfTensor doesn't implement stateless method cat
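(Editor's note: on affected versions, one workaround sketch is to round-trip through float32, where stack is implemented on the CPU, and cast the result back to half; this trades extra copies and memory for correctness and is not an approach endorsed in the thread.)

```python
import torch

t1 = torch.tensor([1, 2, 3], dtype=torch.half)
t2 = torch.tensor([4, 5, 6], dtype=torch.half)

# Stack in float32 (supported on CPU), then cast the result back to half.
stacked = torch.stack([t1.float(), t2.float()]).half()
assert stacked.dtype == torch.half
assert stacked.shape == (2, 3)
```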
@soumith
Member

soumith commented Apr 25, 2018

We don't support any operation other than copy and serialization on torch.HalfTensor (the CPU version). The GPU HalfTensor supports all operations.

@apaszke
Contributor

apaszke commented Apr 26, 2018

@soumith while we don't want to have any math operations for half on CPU, I feel the argument given by @timothygebhard for having cat is very reasonable. Cat can be implemented efficiently because it doesn't care what the data looks like; it only copies raw bytes around.
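(Editor's note: this byte-copy view of cat can be illustrated at the Python level by reinterpreting the fp16 storage as a same-width integer type. A minimal sketch, assuming a PyTorch version with Tensor.view(dtype) support, i.e. 1.3 or later; this is not how the eventual fix was implemented internally.)

```python
import torch

a = torch.tensor([1.0, 2.0], dtype=torch.half)
b = torch.tensor([3.0, 4.0], dtype=torch.half)

# Reinterpret each fp16 buffer as int16 (same byte width), concatenate the
# raw values, then reinterpret the result as fp16 again. No half-precision
# arithmetic is ever performed; only bytes are moved.
out = torch.cat([a.view(torch.int16), b.view(torch.int16)]).view(torch.half)
assert torch.equal(out, torch.tensor([1.0, 2.0, 3.0, 4.0], dtype=torch.half))
```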

@apaszke apaszke reopened this Apr 26, 2018
@zou3519 zou3519 added the todo Not as important as medium or high priority tasks, but we will work on these. label May 14, 2018
@ezyang ezyang added module: bootcamp We plan to do a full writeup on the issue, and then get someone to do it for onboarding and removed module: bootcamp We plan to do a full writeup on the issue, and then get someone to do it for onboarding labels Aug 17, 2018
@ezyang
Contributor

ezyang commented Aug 17, 2018

If you do this, it would probably be a good idea to port THTensor_(cat) to ATen while you're at it.

@adam-dziedzic

I got a very similar error: RuntimeError: _cat_out is not implemented for type torch.HalfTensor. In my view, we could add support for this method for data preparation/loading on CPU, so that a single tensor type (namely torch.half) is used on both CPU and GPU.

@towr

towr commented Jan 3, 2019

I ran into this issue with a DataLoader that works on a numpy float16 dataset.
Writing a custom collate_fn for the DataLoader solves the immediate problem, but it would be nice if it worked out of the box.
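(Editor's note: a possible shape for such a collate_fn, assuming the dataset yields float16 NumPy arrays; the half_collate name and the float32 round-trip are this sketch's own choices, not details given in the thread.)

```python
import numpy as np
import torch
from torch.utils.data import DataLoader

def half_collate(batch):
    # Stack in float32 (implemented on CPU), then cast the batch back to half.
    floats = [torch.from_numpy(np.asarray(sample)).float() for sample in batch]
    return torch.stack(floats).half()

data = [np.array([i, i + 1], dtype=np.float16) for i in range(4)]
loader = DataLoader(data, batch_size=2, collate_fn=half_collate)

for batch in loader:
    assert batch.dtype == torch.half
    assert batch.shape == (2, 2)
```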

zdevito pushed a commit to zdevito/ATen that referenced this issue Jan 29, 2019
Summary:
Fixes pytorch/pytorch#6968

Needed for #14705
Pull Request resolved: pytorch/pytorch#16389

Differential Revision: D13861446

Pulled By: gchanan

fbshipit-source-id: 7b8700b95aaf252d9669693dbddccb2302e58409
@TimZaman

I'm confused, why did we close this issue? There was an MR for this that looked like it worked, but it was subsequently closed, together with this issue.

@soumith
Member

soumith commented Apr 26, 2019

As replied on the PR: it's merged into master, it's available in the nightlies, and it will be part of 1.1.

8 participants