When I increased the batch size to 8 and scaled both learning rates (the adapter-block LR and the model LR) by 8x accordingly, accuracy dropped to 50%. I don't think this is specific to your code: I trained the original MUSIC-AVQA baseline and it shows the same problem.
I just wanted to discuss this: did you run into the same issue when tuning the model, and how did you handle it? I know a grid search over hyperparameters with LAVISH on the AVQA task is time-consuming.
I tried changing the batch size without touching the learning rates, and training became relatively stable. That differs from my past experience, but at least I can keep training this way.
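For reference, the "scale LR by the batch-size ratio" heuristic being discussed here is the linear scaling rule, which is usually paired with a warmup phase precisely because applying the full scaled LR from step 0 can destabilize training. Below is a minimal sketch of that schedule; the function name and parameters are my own illustration, not part of the LAVISH or MUSIC-AVQA code:

```python
def scaled_lr(base_lr, base_batch, new_batch, step, warmup_steps):
    """Linear scaling rule with warmup (a common heuristic, not the
    authors' confirmed recipe): the target LR is base_lr scaled by the
    batch-size ratio, ramped up linearly over the first warmup_steps
    to avoid the early-training divergence described above."""
    target = base_lr * (new_batch / base_batch)  # e.g. 8x for 8x batch
    if step < warmup_steps:
        # Linear ramp from target/warmup_steps up to target.
        return target * (step + 1) / warmup_steps
    return target

# Example: base LR 1e-4 at batch size 1, moving to batch size 8.
lr_start = scaled_lr(1e-4, 1, 8, step=0, warmup_steps=1000)
lr_final = scaled_lr(1e-4, 1, 8, step=5000, warmup_steps=1000)
```

If scaling the LR 8x immediately collapses accuracy to 50%, trying a warmup like this (or a smaller scaling factor, e.g. sqrt of the batch-size ratio) is a common first step before committing to a full grid search.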