
RuntimeError: input.is_contiguous() INTERNAL ASSERT FAILED #6

Closed
ShuhanChen opened this issue Jan 12, 2021 · 1 comment
Labels
enhancement New feature or request


@ShuhanChen

RuntimeError: input.is_contiguous() INTERNAL ASSERT FAILED at "CUDA/softpool_cuda.cpp":95, please report a bug to PyTorch. input must be a contiguous tensor.
I met the above error, how to solve it? Thanks.

@alexandrosstergiou
Owner

Hi @ShuhanChen,

Contiguity errors arise when a PyTorch tensor does not occupy a single, uninterrupted block of memory. A tensor is contiguous when its elements are laid out in memory in the same order as its indices (an in-depth NumPy explanation can be found here). I have not yet added a check for this, as it only arises with specific operations.

A simple solution would be to call .contiguous() before the operation. e.g. :

# Dummy initialisation
x = torch.rand(1,3,224,224)

# Example of a non-contiguous operation
# (torch.transpose only accepts two dims; permute is used here instead)
x = x.permute(0, 1, 3, 2)

# Make contiguous tensor copy
x = x.contiguous()

# Some other operations...

Do note that .contiguous() will create a copy of the tensor if it is not already contiguous, so if this is a common occurrence in your code you may see some increased memory usage.
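You can also query .is_contiguous() beforehand to see whether a tensor would trip the assertion. A minimal sketch (the shapes are just illustrative, not tied to the SoftPool op itself):

```python
import torch

x = torch.rand(1, 3, 224, 224)

# permute() returns a view with rearranged strides, so the result
# is no longer contiguous in memory
y = x.permute(0, 1, 3, 2)
assert not y.is_contiguous()

# .contiguous() copies the data into a fresh contiguous block,
# which a CUDA kernel that asserts input.is_contiguous() can accept
y = y.contiguous()
assert y.is_contiguous()
```

Calling .contiguous() on an already-contiguous tensor is a no-op, so it is safe to apply defensively right before the pooling call.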

Best,
Alex
