Thank you for your great paper.

I tried to train a zero-shot model (vitl16_384) and tested it on PASCAL fold 0, but ran into the following problems. This is my training script:

How does the max_epochs argument take part in the training process, given that only 4 epochs are logged?

Apart from changing the model from vitl16_384 to vitb32_384, is there anything wrong with my training script?

While training, this warning is logged:
[W reducer.cpp:283] Warning: Grad strides do not match bucket view strides. This may indicate grad was not created according to the gradient layout contract, or that the param's strides changed since DDP was constructed. This is not an error, but may impair performance.
grad.sizes() = [512, 256, 1, 1], strides() = [256, 1, 256, 256]
bucket_view.sizes() = [512, 256, 1, 1], strides() = [256, 1, 1, 1] (function operator())
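If it helps with the diagnosis: those stride numbers look like a channels_last gradient being copied into DDP's contiguous bucket view. The snippet below is only an illustration of that reading (it is not part of my training script), using plain PyTorch:

```python
# Illustration only: for a [512, 256, 1, 1] tensor, the channels_last layout has
# strides (256, 1, 256, 256) -- the grad.strides() in the warning -- while the
# contiguous layout expected by the DDP bucket view has strides (256, 1, 1, 1).
import torch

g = torch.empty(512, 256, 1, 1).to(memory_format=torch.channels_last)
print(g.stride())               # (256, 1, 256, 256)
print(g.contiguous().stride())  # (256, 1, 1, 1)
```

As the message itself says, this is not an error, but it may impair performance.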
While training, DDP is enabled even though I only used one GPU with batch_size = 4. I am not sure whether this hurts training. Could the accumulate_grad_batches argument be causing this?
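For reference, here is roughly how I understand these flags to fit together, assuming the script forwards them to pytorch_lightning.Trainer (max_epochs and accumulate_grad_batches are Lightning Trainer arguments; the device flags below use the Lightning 1.x names, and the values are only examples, not my actual settings):

```python
# Sketch with example values, assuming the flags end up in pytorch_lightning.Trainer.
import pytorch_lightning as pl

trainer = pl.Trainer(
    max_epochs=10,               # example value: an upper bound on epochs, not a guarantee that many run
    gpus=1,                      # a single GPU; DDP still wraps the model, just with one process
    accelerator="ddp",           # Lightning 1.x flag name; newer releases use strategy="ddp"
    accumulate_grad_batches=2,   # example value: the optimizer steps every 2 batches, so the
                                 # effective batch size is 4 (per GPU) * 2 * 1 GPU = 8
)
```

If I read the docs right, accumulate_grad_batches only changes how often the optimizer steps (and hence the effective batch size), so I would not expect it to change the gradient memory layout by itself, but I am not sure.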
As reported, we only train LSeg for a few epochs in the zero-shot PASCAL and COCO experiments. I haven't encountered such a problem, so you might need to check your system for details. Also, we have released our checkpoints, so please feel free to use them.
Hi @Boyiliee
Thank you for replying!
I'm trying to reproduce your results, so whether my results match the ones you reported is of critical importance.
Could you verify whether my arguments and hyperparameters are reasonable? I'm training on a single Tesla A100 80 GB GPU.