
BURN_IN_STEP #8

Open
SayBender opened this issue Sep 14, 2022 · 1 comment

SayBender commented Sep 14, 2022

Dear authors, could you explain what exactly the burn-in step is? How does it affect training? What extreme values have you tested? How is the burn-in step related to the other variables?

Also, why does my training hit an out-of-memory error exactly at the burn-in step, even after changing the pixel size from 600 to 400 and adjusting the code accordingly?

This is the error I get right at the burn-in epoch; the code runs fine until then. Even when I move BURN_IN_STEP to the 2nd epoch, for instance, it still goes out of memory. Do you have any idea why?

    return forward_call(*input, **kwargs)
  File "/home/say/NEmo/omni-detr/models/deformable_transformer.py", line 221, in forward
    src2 = self.self_attn(self.with_pos_embed(src, pos), reference_points, src, spatial_shapes, level_start_index, padding_mask)
  File "/home/say/.conda/envs/deformable/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/say/NEmo/omni-detr/models/ops/modules/ms_deform_attn.py", line 105, in forward
    + sampling_offsets / offset_normalizer[None, None, None, :, None, :]
RuntimeError: CUDA out of memory. Tried to allocate .....

Thank you

peiwang062 (Collaborator) commented

Our code only supports two options for the resolution (600/800); other settings may cause unexpected errors. After burn-in there is a model duplication, which doubles the memory usage, so you may want to check memory usage during burn-in and confirm that at least half of the GPU memory is still free at that point. Also note that the current code only supports a batch size of 1; a larger batch size may also cause out-of-memory errors.
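
To illustrate the duplication point, here is a minimal, hypothetical sketch (not the actual omni-detr code): at the burn-in epoch the student model is copied into a teacher model, so all parameters and buffers exist twice on the GPU, and it is worth logging headroom just before that happens. `report_cuda_memory` and `start_burn_in` are made-up helper names, and the frozen-teacher detail is an assumption.

```python
import copy
import torch

def report_cuda_memory(tag, device=0):
    # memory_allocated: bytes held by live tensors; memory_reserved: bytes cached by the allocator.
    allocated = torch.cuda.memory_allocated(device) / 2**30
    reserved = torch.cuda.memory_reserved(device) / 2**30
    total = torch.cuda.get_device_properties(device).total_memory / 2**30
    print(f"[{tag}] allocated {allocated:.2f} GiB, reserved {reserved:.2f} GiB, total {total:.2f} GiB")

def start_burn_in(student_model):
    # If more than half of the GPU is already in use here, the teacher copy below will not fit.
    report_cuda_memory("before burn-in")
    teacher_model = copy.deepcopy(student_model)  # second full copy of all parameters/buffers on the GPU
    for p in teacher_model.parameters():
        p.requires_grad_(False)  # assumption: the teacher is not updated by backprop
    report_cuda_memory("after burn-in duplication")
    return teacher_model
```

Logging these two numbers once per epoch makes it easy to see whether the run is already using more than half of the card before burn-in starts.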
