RuntimeError: input.is_contiguous() INTERNAL ASSERT FAILED at "CUDA/softpool_cuda.cpp":95, please report a bug to PyTorch. input must be a contiguous tensor.
I met the above error, how to solve it? Thanks.
Contiguous-flag errors occur when a PyTorch tensor does not occupy a single block of memory. A contiguous tensor is both stored in one contiguous block and laid out in memory in the same order as its indices (an in-depth NumPy explanation can be found here). I have not yet added a check for this, as it only arises in specific operations.
A simple solution would be to call .contiguous() before the operation. e.g. :
import torch

# Dummy initialisation
x = torch.rand(1, 3, 224, 224)

# Example of a non-contiguous operation (permute only swaps strides)
x = x.permute(0, 1, 3, 2)

# Make a contiguous tensor copy
x = x.contiguous()

# Some other operations...
Do note that .contiguous() will create a copy of the tensor if it is not already contiguous, so if this is a common occurrence in your code you may see some increased memory usage.
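To see what "contiguous" means in practice, here is a minimal sketch using NumPy (which PyTorch's layout rules mirror; `np.ascontiguousarray` plays the role of `.contiguous()`). The array shapes are arbitrary, just for illustration:

```python
import numpy as np

# A freshly created array occupies one contiguous block of memory,
# with elements stored in the same order as its indices.
x = np.arange(12).reshape(3, 4)
print(x.flags['C_CONTIGUOUS'])   # True

# Transposing only swaps strides; no data is moved,
# so the result is no longer C-contiguous.
y = x.T
print(y.flags['C_CONTIGUOUS'])   # False

# ascontiguousarray copies the data into a fresh contiguous block,
# analogous to calling .contiguous() on a PyTorch tensor.
z = np.ascontiguousarray(y)
print(z.flags['C_CONTIGUOUS'])   # True
```

This also shows why the copy can cost memory: `z` holds its own contiguous buffer, separate from the one `x` and `y` share.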