To my knowledge, CLIP can be directly applied to zero-shot learning (i.e., to unseen/novel classes).
CoOp and CoCoOp don't appear to be zero-shot methods; they require fine-tuning. However, I don't see the details of how this fine-tuning is done in the paper. Am I misunderstanding something? I would also like to know how CLIP itself is fine-tuned.
I also don't understand Figure 1 in the paper: how can the performance of CoOp and CoCoOp be fairly compared against zero-shot CLIP?
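For what it's worth, my current understanding of the CoOp training setup is the sketch below: CLIP's two encoders stay frozen, and only a small set of context vectors prepended to the class-name token embeddings is optimized with cross-entropy on the few-shot training set. This is just my reading, not the authors' code; `text_encoder` and `image_encoder` are hypothetical wrappers around the frozen CLIP encoders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptLearner(nn.Module):
    """CoOp-style prompt learner: the only trainable parameters are
    n_ctx context vectors shared across all classes."""
    def __init__(self, class_name_embeddings, n_ctx=16, ctx_dim=512):
        super().__init__()
        # Learnable context vectors, randomly initialized.
        self.ctx = nn.Parameter(0.02 * torch.randn(n_ctx, ctx_dim))
        # Precomputed, frozen token embeddings of the class names:
        # shape [n_cls, n_name_tokens, ctx_dim].
        self.register_buffer("name_emb", class_name_embeddings)

    def forward(self):
        n_cls = self.name_emb.size(0)
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)
        # Each class prompt = [learned context vectors] + [class name tokens].
        return torch.cat([ctx, self.name_emb], dim=1)

def training_step(prompt_learner, text_encoder, image_encoder,
                  images, labels, logit_scale=100.0):
    # Image features come from the frozen CLIP image encoder.
    with torch.no_grad():
        img_feat = F.normalize(image_encoder(images), dim=-1)
    # Text features flow through the frozen text encoder, but gradients
    # still reach the learnable context vectors inside the prompts.
    txt_feat = F.normalize(text_encoder(prompt_learner()), dim=-1)
    logits = logit_scale * img_feat @ txt_feat.t()
    return F.cross_entropy(logits, labels)

# The optimizer updates only the context vectors, e.g.:
# opt = torch.optim.SGD(prompt_learner.parameters(), lr=2e-3)
```

If this is right, then the comparison in Figure 1 would be between zero-shot CLIP with hand-written prompts and CLIP with learned prompts, with the backbone identical in both cases.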