It is too slow for irtr #12

Hi authors,
Thanks for your great work!
I am trying to reproduce the results, but IRTR testing is very slow: inference takes about 38 hours even on modern hardware (e.g., an A100), and the ranking loop in particular accounts for most of the time. Is this normal?
Thanks!

Comment:
Thanks! This is expected: for N test instances, ranking requires computing features for N×N image-text pairs. One potential way to speed up inference is to cache the image and text features from the bottom layers, which I left as a TODO here: https://github.com/zdou0830/METER/blob/main/meter/modules/objectives.py#L315
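To illustrate why the suggested caching helps, here is a minimal toy sketch (not the METER API). It contrasts a naive ranking loop, which re-encodes every image and text for each of the N×N pairs, with a cached variant that encodes each input once and reuses the features for pairwise scoring. In METER the top layers fuse image and text with cross-attention, so only the bottom (unimodal) layers can be cached this way; a simple dot product stands in for the expensive pair scoring to keep the example runnable. All function and variable names here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 4, 8  # N test instances, feature dimension D (toy sizes)

def encode_image(x):
    # stand-in for the expensive bottom-layer image encoder
    return x / (np.linalg.norm(x) + 1e-9)

def encode_text(t):
    # stand-in for the expensive bottom-layer text encoder
    return t / (np.linalg.norm(t) + 1e-9)

images = rng.normal(size=(N, D))
texts = rng.normal(size=(N, D))

# Naive ranking: re-encodes inputs inside the pair loop,
# i.e. O(N^2) encoder calls in total.
naive_scores = np.array(
    [[encode_image(img) @ encode_text(txt) for txt in texts] for img in images]
)

# Cached ranking: 2N encoder calls up front; only the cheap
# pairwise scoring remains O(N^2).
img_feats = np.stack([encode_image(img) for img in images])
txt_feats = np.stack([encode_text(txt) for txt in texts])
cached_scores = img_feats @ txt_feats.T

# Both strategies produce identical N x N score matrices.
assert np.allclose(naive_scores, cached_scores)
```

With a real encoder, the up-front cost drops from N² forward passes to 2N, which is where most of the 38 hours would be recovered; the cross-modal top layers still have to run once per pair.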