RuntimeError: given chunk sizes don't sum up to the tensor's size (sum(chunk_sizes) == 48, but expected 1) #39
Comments
The length of chunk_sizes is the number of GPUs, and the batch size is the sum of the chunk sizes.
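A minimal sketch of the rule this error enforces (`validate_chunks` is a hypothetical helper written for illustration, not code from this repo): PyTorch's data-parallel scatter requires `chunk_sizes` to have one entry per GPU and to sum to `batch_size`, otherwise it raises the RuntimeError in the issue title.

```python
def validate_chunks(batch_size, chunk_sizes):
    # Mimics the check behind the error: the per-GPU chunks
    # must add up to the full batch that is being scattered.
    if sum(chunk_sizes) != batch_size:
        raise RuntimeError(
            f"given chunk sizes don't sum up to the tensor's size "
            f"(sum(chunk_sizes) == {sum(chunk_sizes)}, but expected {batch_size})"
        )
    return True

validate_chunks(48, [12, 12, 12, 12])  # 4 GPUs, batch 48: OK
# validate_chunks(1, [48]) would raise, matching the error in the title
```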
@hsj928
So if we use one GPU, can we only use a chunk size of 1? I'm struggling to use more of my GPU's memory this way, even when I set the batch size very high.
@Ostyk If you only have one GPU, modify these settings in config/xxx.json:
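For a single GPU, the config fragment would look something like this (key names follow the `batch_size`/`chunk_sizes` fields discussed in this thread; the exact file name and values depend on your setup): a one-element `chunk_sizes` list whose single entry equals `batch_size`.

```json
{
    "batch_size": 4,
    "chunk_sizes": [4]
}
```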
Thanks for the quick answer. I'll check out the new network, and since it's also anchor-free I can use it as the backbone, just like CenterNet, for FAIRMOT (re-identification).
Still getting the error when I set batch_size == chunk_sizes.
@Ostyk Can I see your full log?
Traceback (most recent call last):
I read that it might have something to do with