
About the 1% In1k semi-sup evaluation #9

Closed

merlinarer opened this issue Jun 24, 2022 · 1 comment

Comments

@merlinarer

Hello, thanks for sharing your work.
I was a little confused about your 1% IN1k semi-supervised evaluation. You say in the paper that the results come from logistic regression on the extracted representations. However, with the same ViT, I found that this evaluation for iBOT comes from end-to-end full fine-tuning (see here), and SwAV et al. fine-tuned the entire ResNet-50 encoder.

@MidoAssran
Contributor

Hi @merlinarer,

Thanks for your message. Yes, one common evaluation is end-to-end fine-tuning with 100% labels. However, with 1% labels, iBOT achieves the best performance with logistic regression on the extracted (frozen) representations.

See Table 12 in their paper comparing fine-tuning to linear probing on 1% labels.
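
For readers unfamiliar with the protocol, here is a minimal sketch of that evaluation: fit a logistic-regression classifier on features extracted from the frozen encoder, without updating the encoder itself. The names `encoder`, `train_loader` (the 1% labeled split), and `val_loader` are placeholders, not this repo's actual API, and the solver choice here (scikit-learn) is just for illustration.

```python
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression

@torch.no_grad()
def extract_features(encoder, loader, device="cuda"):
    """Run the frozen encoder over a dataset and collect (features, labels)."""
    encoder.eval().to(device)
    feats, labels = [], []
    for images, targets in loader:
        z = encoder(images.to(device))  # e.g. ViT [CLS] embeddings
        feats.append(z.cpu().numpy())
        labels.append(targets.numpy())
    return np.concatenate(feats), np.concatenate(labels)

# Placeholder loaders: train_loader iterates over the 1% labeled IN1k split.
X_train, y_train = extract_features(encoder, train_loader)
X_val, y_val = extract_features(encoder, val_loader)

# L2-regularized logistic regression on frozen features;
# the encoder receives no gradient updates.
clf = LogisticRegression(max_iter=1000, C=1.0)
clf.fit(X_train, y_train)
print("top-1 accuracy:", clf.score(X_val, y_val))
```

The point of this setup is that only the linear classifier is trained, which is what distinguishes it from the end-to-end fine-tuning protocol mentioned above.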
