Error in PyTorch-Lightning when Finetuning on VQA #3
I've just checked that your command works on my environment, so it seems the error is caused by your setup. Since PyTorch Lightning is a rapidly changing project, I can only guarantee my code for the specific PL version denoted in the repository's requirements. I strongly assume that your PL version mismatches with mine.
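Since the reply above points at a PL version mismatch, a quick sanity check is to compare the installed version against the repository's pin. A minimal sketch, assuming an exact-pin requirement; the `PINNED` string below is illustrative and should be read from the repo's actual requirements file:

```python
import importlib.metadata as md  # stdlib on Python 3.8+

PINNED = "1.1.4"  # illustrative pin; substitute the version from the repo

def version_matches(installed, pinned):
    """Exact string match against the pinned release."""
    return installed == pinned

try:
    installed = md.version("pytorch_lightning")
except md.PackageNotFoundError:
    installed = None

if installed is None:
    print("pytorch_lightning is not installed")
elif not version_matches(installed, PINNED):
    print(f"version mismatch: installed {installed}, repo expects {PINNED}")
else:
    print("versions match")
```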
Sorry - I had been getting an error with the earlier version of pytorch_lightning as well, and upgraded just to check whether it made any difference. I have since identified the bug in my code that caused the earlier error, though.
Hello,
I am trying to finetune ViLT on the VQAv2 task. I created the arrow_root directory as instructed, and then ran:

python run.py with data_root=<PROJECT_DIR>/arrow_root/vqav2/ num_gpus=1 num_nodes=1 task_finetune_vqa per_gpu_batchsize=64 load_path="weights/vilt_200k_mlm_itm.ckpt"
However, once the model begins training, I get the following error:
I printed the value of training_step_output right before the error: {'extra': {}, 'minimize': None}. I am not too familiar with PyTorch Lightning, but this doesn't seem to be the correct output. Am I missing any steps here, apart from creating the arrow data and running the model?
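For context on why {'extra': {}, 'minimize': None} looks suspicious: in the PL 1.x line, the loss returned from training_step is what ends up under the 'minimize' key, so None suggests the step never handed a loss back. A minimal sketch of the expected contract, with an illustrative stand-in module and made-up tensor shapes (not ViLT's actual code):

```python
import torch
import torch.nn as nn

class ToyModule(nn.Module):
    """Illustrative stand-in for a LightningModule's training_step contract."""
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(4, 1)

    def training_step(self, batch):
        x, y = batch
        loss = nn.functional.mse_loss(self.layer(x), y)
        # Lightning expects a loss tensor (or a dict with a "loss" key);
        # returning None here is what leaves 'minimize' unpopulated.
        return {"loss": loss}

module = ToyModule()
batch = (torch.randn(8, 4), torch.randn(8, 1))
out = module.training_step(batch)
print("loss" in out and out["loss"].requires_grad)
```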