
INTERNAL ASSERT FAILED got size:5 #643

Closed
PrefectSol opened this issue Jan 29, 2024 · 3 comments

Comments

@PrefectSol

I trained a model and exported it to TorchScript with `python export.py --weights runs\train\palm_detector\weights\best.pt --imgsz 640 --device 0`. Next I want to run inference on the GPU with varying batch sizes, but I get an error. Everything works on the CPU, or on the GPU if I keep the batch size constant. The problem is that I can get output for only three distinct batch sizes, no more; they need not be 1, 2 and 4, any three different sizes (for example 4, 8 and 16) behave the same way, and the next new size fails.
| batch | memory |
| ----- | ---------- |
| 1 | 677380096 |
| 2 | 809500672 |
| 4 | 1067450368 |
The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: dims.size() == 4 || dims.size() == 3INTERNAL ASSERT FAILED at "C:\actions-runner\_work\pytorch\pytorch\builder\windows\pytorch\torch\csrc\jit\tensorexpr\expr.cpp":379, please report a bug to PyTorch. got size:5

// Wrap the preprocessed pixel buffer (NHWC layout) in a tensor without copying.
at::Tensor batch = torch::from_blob(preparedImages,
    { batchSize, m_inputSize, m_inputSize, m_channels });
// Reorder NHWC -> NCHW, then move the batch to the target device.
batch = batch.permute({ 0, 3, 1, 2 });
batch = batch.to(torch::Device(m_deviceName));

// Run the scripted model; the first tuple element is the output tensor of interest.
torch::Tensor output = m_module.forward({ batch }).toTuple()->elements()[0].toTensor();
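For context on the snippet above, the `permute({ 0, 3, 1, 2 })` step reorders the NHWC blob into NCHW. A standalone sketch of that index mapping in plain C++, with no libtorch dependency (`nhwcToNchw` is a hypothetical helper, not part of the project):

```cpp
#include <cstddef>
#include <vector>

// Copy a flat NHWC buffer into NCHW order, mirroring what
// tensor.permute({0, 3, 1, 2}).contiguous() does on a {N, H, W, C} tensor.
std::vector<float> nhwcToNchw(const std::vector<float>& src,
                              std::size_t n, std::size_t h,
                              std::size_t w, std::size_t c) {
    std::vector<float> dst(src.size());
    for (std::size_t in = 0; in < n; ++in)
        for (std::size_t ih = 0; ih < h; ++ih)
            for (std::size_t iw = 0; iw < w; ++iw)
                for (std::size_t ic = 0; ic < c; ++ic)
                    // Destination is indexed as {N, C, H, W},
                    // source as {N, H, W, C}.
                    dst[((in * c + ic) * h + ih) * w + iw] =
                        src[((in * h + ih) * w + iw) * c + ic];
    return dst;
}
```

Pre-permuting on the host like this is one way to hand the model an already-contiguous NCHW buffer instead of a permuted view.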

The error occurs exactly at the `forward` call.

libtorch version: 1.11.0+cuda11.3

@PrefectSol
Author

OK, I fixed this by downgrading libtorch to 1.10.1, but here is my question: instead of changing libtorch, is it possible to change the PyTorch version used for training, given that I trained on 1.11.0?
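If the goal is to match the training environment to libtorch 1.10.1 rather than the other way around, pinning PyTorch with pip is one option. A sketch, assuming a CUDA 11.3 build; the exact version tags depend on your setup:

```shell
# Install the PyTorch/torchvision pair matching libtorch 1.10.1 + CUDA 11.3.
pip install torch==1.10.1+cu113 torchvision==0.11.2+cu113 \
    -f https://download.pytorch.org/whl/cu113/torch_stable.html
```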

@PrefectSol
Copy link
Author


ImportError: DLL load failed while importing nms_rotated_ext in train.py
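A common cause of this kind of DLL load failure is a compiled extension built against a different torch version; rebuilding it against the currently installed torch may help. A sketch, where the path to the extension's `setup.py` is a guess for this repo and should be adjusted:

```shell
# Rebuild the rotated-NMS extension in place against the installed torch
# (path is hypothetical; locate nms_rotated's setup.py in your checkout).
cd utils/nms_rotated
python setup.py build_ext --inplace
```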

@PrefectSol
Author

#607
