Joint-Embedding Self-Supervised Learning (JE-SSL) has seen rapid development, with the emergence of many method variations but only few principled guidelines that would help practitioners to successfully deploy them. The main reason for that pitfall is JE-SSL's core principle of not employing any input reconstruction, which leaves no visual cues of unsuccessful training. Combined with uninformative loss values, this makes it difficult to deploy SSL on a new dataset for which no labels are available to judge the quality of the learned representation. In this study, we develop a simple unsupervised criterion that is indicative of the quality of learned JE-SSL representations: their effective rank. Albeit simple and computationally friendly, this method -- coined RankMe -- allows one to assess the performance of JE-SSL representations, even on different downstream datasets, without requiring any labels. A further benefit of RankMe is that it has no training or hyper-parameters to tune. Through thorough empirical experiments involving hundreds of training episodes, we demonstrate how RankMe can be used for hyperparameter selection with nearly no reduction in final performance compared to current selection methods that involve a dataset's labels. We hope that RankMe will facilitate the deployment of JE-SSL in domains that do not have the opportunity to rely on labels for representation quality assessment.
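Concretely, RankMe scores a representation by the effective rank of its embedding matrix: the exponential of the Shannon entropy of the normalized singular-value distribution. A minimal NumPy sketch of that computation follows; the function name and the `eps` smoothing term are illustrative choices, see the paper for the exact formulation.

```python
import numpy as np

def rankme(embeddings: np.ndarray, eps: float = 1e-7) -> float:
    """Effective rank of an (N, d) embedding matrix.

    Computed as exp of the entropy of the singular values
    normalized into a probability distribution.
    """
    # Singular values of the embedding matrix.
    s = np.linalg.svd(embeddings, compute_uv=False)
    # Normalize singular values to a probability distribution.
    p = s / (np.sum(s) + eps)
    # Entropy of that distribution; eps guards log(0).
    entropy = -np.sum(p * np.log(p + eps))
    # Effective rank: ranges from ~1 (collapsed) to min(N, d).
    return float(np.exp(entropy))
```

A random Gaussian matrix has nearly uniform singular values, so its effective rank approaches `min(N, d)`, while a collapsed (rank-1) embedding scores close to 1, which is why the measure flags failed JE-SSL training without labels.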
AkihikoWatanabe changed the title to "RankMe: Assessing the downstream performance of pretrained self-supervised representations by their rank, Quentin Garrido+, N/A, arXiv'22" on Jul 22, 2023.