How was the fine-tuning done exactly? #32
Hey there,

I am wondering how you did the fine-tuning here. You do not describe it in the paper.

Did you
I don't think you did 2 or 3 since you used full sentences as captions.

How did you do it?

All the best

Comments
Hi!! It's contrastive fine-tuning; we use the same task CLIP was trained on, with all weights unfrozen. Let me know if you need more details!
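In case a code sketch helps future readers: below is a minimal illustration of this kind of full contrastive fine-tuning, using the Hugging Face `transformers` CLIP implementation. The checkpoint name, dataset, batch size, and learning rate are placeholder assumptions for illustration, not the authors' actual configuration.

```python
import torch
from torch.utils.data import DataLoader
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# "All unfrozen": every parameter stays trainable, and no classifier head is added.
model.train()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-6, weight_decay=0.1)

def collate(batch):
    # Assumes each dataset item is a (PIL image, full-sentence caption) pair.
    images, captions = zip(*batch)
    return processor(text=list(captions), images=list(images),
                     return_tensors="pt", padding=True, truncation=True)

# `image_caption_dataset` is a hypothetical image-caption dataset.
loader = DataLoader(image_caption_dataset, batch_size=256,
                    shuffle=True, collate_fn=collate)

for inputs in loader:
    # return_loss=True makes CLIPModel compute CLIP's own symmetric
    # contrastive (InfoNCE) loss over the in-batch image-text pairs.
    loss = model(**inputs, return_loss=True).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

The key point is that nothing changes relative to pre-training except the data: same architecture, same loss, all weights updated.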
So when you say "same task CLIP was trained on", do I correctly assume you continued training without adding a classifier?
Yup, we keep the same contrastive pre-training objective.
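For completeness, that objective is CLIP's symmetric cross-entropy over the in-batch similarity logits. For a batch of $N$ image-text pairs with L2-normalized embeddings $I, T \in \mathbb{R}^{N \times d}$ and learned temperature $\tau$ (notation mine, following the CLIP paper):

$$
\mathcal{L} = \tfrac{1}{2}\Big( \mathrm{CE}\big(IT^{\top}/\tau,\; y\big) + \mathrm{CE}\big(TI^{\top}/\tau,\; y\big) \Big), \qquad y_i = i,
$$

where CE is row-wise cross-entropy and each image's only positive is its own caption.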
Thank you for the clarification and the super quick reply :)
Happy to help!!