Doubt on the prompts used for zero-shot evaluation in the notebook #198
Comments
Hi Sayak,

Yes, using the "7 best" prompts instead of all 80 makes the Colab quite a bit faster. The Colab you mention states:

Do you think we should state this more clearly in the Colab?

Best,
Andreas
Yes, I think so. That would definitely help close the knowledge gap a bit. Closing this thread.
Added the following note to the Colab:

> There is a small increase in performance when using 80 prompts as compared to only 7, but it can make the handling a bit more complicated (e.g. with 1k class names, device memory becomes an issue, and one has to iterate through the tokenized texts to embed them, which makes the code more complicated).

Results in the Colab are:
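The memory issue mentioned in the note comes from the prompt count scaling multiplicatively: 1k class names × 80 templates is 80k texts to embed at once. A common workaround is to embed the prompts in batches and average per class. Below is a minimal sketch of that pattern; `embed_texts` is a hypothetical stand-in for the model's text encoder (the real Colab would use its own tokenizer and encoder), and the template strings are illustrative.

```python
import numpy as np

# Hypothetical stand-in for the model's text encoder: returns one
# unit-norm random vector per input string. The batching logic below
# is what matters, not this dummy encoder.
EMBED_DIM = 64
rng = np.random.default_rng(0)

def embed_texts(texts):
    out = rng.normal(size=(len(texts), EMBED_DIM))
    return out / np.linalg.norm(out, axis=-1, keepdims=True)

def class_embeddings(class_names, templates, batch_size=256):
    """Fill every template with every class name, embed the prompts in
    batches (bounding peak memory), then average over templates."""
    prompts = [t.format(c) for c in class_names for t in templates]
    chunks = [embed_texts(prompts[i:i + batch_size])
              for i in range(0, len(prompts), batch_size)]
    emb = np.concatenate(chunks, axis=0)
    # One row per (class, template) pair -> mean over the template axis.
    emb = emb.reshape(len(class_names), len(templates), EMBED_DIM).mean(axis=1)
    return emb / np.linalg.norm(emb, axis=-1, keepdims=True)

templates = ["a photo of a {}.", "a blurry photo of a {}."]
z = class_embeddings(["dog", "cat", "car"], templates)
print(z.shape)  # (3, 64): one averaged, re-normalized embedding per class
```

With 80 templates the only change is a longer `templates` list; `batch_size` caps how many tokenized texts sit on the device at once.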
Thanks, Andreas!
@andsteing
The OpenAI notebook has 80 templates: https://github.com/openai/CLIP/blob/main/notebooks/Prompt_Engineering_for_ImageNet.ipynb.
I wanted to know why all the prompts were not used in the Colab. Is it just to keep the runtime low, or are there other reasons?