About total data performance #2
Comments
Hi, thanks for the interest in our work.
Hi, do you have the performance under cross-entropy training only? @junchen14
We did not report the performance under cross-entropy training only, because all the SOTA results are obtained under the reinforcement setting; we therefore report the performance after reinforcement as well.
Thanks!
Hi, I was glad to read this article. This paper is the first work that focuses on efficiently adapting large pretrained language
models for image captioning, which inspires me a lot!
The results section mainly shows performance when training on subsets of the datasets at different sampling rates. I would therefore like to ask: have you evaluated on the full datasets without sampling? If so, how does the performance compare to M2Transformer?