How to train OFA for VQA in an open-ended manner? #123
Comments
I want to both train and validate in this manner. Thanks for your precious time!
Hi, currently the VQA task code supports beam-search inference during validation and testing (as opposed to all-candidate inference; please refer to the readme), but the finetuning objective is still constrained to a pre-defined candidate answer set.
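To illustrate the difference between the two inference modes the comment above mentions, here is a minimal, hedged sketch of "all-candidate" (constrained) inference: every answer in a fixed candidate set is scored and the highest-scoring one is returned. The `answer_log_prob` function below is a hypothetical toy stand-in for summing the model's token log-probabilities over an answer; OFA's actual scoring lives in its task/generator code, not in this API.

```python
# Constrained ("all-candidate") inference sketch: instead of generating
# freely, score each answer in a pre-defined candidate set and pick the best.

def answer_log_prob(question: str, answer: str) -> float:
    # Hypothetical toy scorer standing in for a real model: rewards word
    # overlap with the question, lightly penalizes longer answers.
    overlap = len(set(question.split()) & set(answer.split()))
    return overlap - 0.1 * len(answer.split())

def constrained_predict(question: str, candidates: list[str]) -> str:
    # The prediction can never fall outside `candidates` -- this is exactly
    # the constraint that open-ended finetuning removes.
    return max(candidates, key=lambda a: answer_log_prob(question, a))

print(constrained_predict("what color is the sky", ["blue", "sky blue", "red"]))
```

The key property is that the output vocabulary is the candidate set itself, which is why a fixed answer file is needed in this mode.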
@yangapku Hi, any updates on this? Thanks!
Hi, a pull request related to this issue, #124, was proposed recently; it adds a new config option to activate unconstrained finetuning. However, we found that bugs still exist in this PR which result in a zero score during evaluation. We are still working on making it function correctly and will merge it ASAP.
Hi, any update on this?
@qyc-98 @RishabhMaheshwary @ilovecv Hi, we have found the bug and fixed it! The latest codebase now supports open-ended (unconstrained) VQA finetuning and evaluation. Please pull the latest code and refer to PR #124 and run_scripts/vqa/train_vqa_distributed.sh (lines 62-68) for how to activate it!
Hi, are there any performance numbers for open-ended VQA fine-tuning?
@leng-yue We have tested open-ended VQA fine-tuning on OFA-base (without using EMA). It achieves a score of 76.4 on our VQA validation set. This can likely be improved further by using EMA and additional hyper-parameter tuning.
Thanks for your response, the result looks good :)
Dear authors:
Thanks for the great work! During VQA validation, I would like the model to predict the most likely next token (i.e., generate one answer token) from the output logits, append that token to the input, and repeat this step until the model predicts ⟨EOS⟩. What could I do to achieve this? Thanks a lot!
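The loop described in the question is greedy (argmax) autoregressive decoding. Below is a minimal, self-contained sketch of that loop; the `next_token_logits` function and the `EOS` id are hypothetical stand-ins for a real model forward pass, since OFA performs generation through its own sequence generator rather than this exact API.

```python
# Greedy autoregressive decoding sketch: repeatedly take the argmax token
# from the model's next-token logits, append it, and stop at <EOS>.

EOS = 2  # hypothetical end-of-sequence token id

def next_token_logits(tokens):
    # Toy stand-in for a model forward pass: returns a {token_id: logit}
    # dict that prefers EOS once four tokens have accumulated.
    if len(tokens) >= 4:
        return {EOS: 10.0, 5: 1.0}
    return {len(tokens) + 3: 10.0, EOS: 1.0}

def greedy_decode(prompt, max_len=10):
    tokens = list(prompt)
    while len(tokens) < max_len:
        logits = next_token_logits(tokens)
        nxt = max(logits, key=logits.get)  # argmax over the vocabulary
        tokens.append(nxt)
        if nxt == EOS:                     # stop once <EOS> is produced
            break
    return tokens

print(greedy_decode([0]))  # -> [0, 4, 5, 6, 2]
```

Beam search, which the maintainers mention for validation, generalizes this loop by keeping the top-k partial sequences at each step instead of only the single argmax continuation.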