Why are the labels all the same when training? #20
Comments
Hi @happyxuwork, thanks for your interest.
@MidoAssran If more labels can be used on ImageNet, for example 20% labeled data, do you have any suggestions for improving performance? Could some additional losses be added? From your point of view, what is the difficulty in reaching the level of full supervision with 20 percent of the labels?
@happyxuwork Hi, sorry for the delay getting back to you! I was on vacation :) Yes, I think using more labels in the support set, if available, will directly improve performance; see Fig. 7 in Appendix B of the paper. I haven't tried using 20% of the labels, but we see that with wider ResNets (e.g., ResNet-50 4x) we can already match fully supervised performance (without extra tricks like AutoAugment, etc.) using only 10% of the labels (see Fig. 6 in Appendix B). Off the top of my head, I'm not sure what the "main difficulty" is, but I think there is certainly room for improvement, since performance with 1% of labels is still significantly lower than performance with 10% of labels.
The idea in your paper is amazing; great truths are simple. I have the following questions:
1. Why are the labels of each iteration the same? (Support images are sampled with ClassStratifiedSampler, meaning every sampling draws the same classes in the same class order?)
suncet/src/paws_train.py, line 167 (commit 731547d)
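The behavior being asked about can be illustrated with a minimal sketch of class-stratified sampling. This is not the repo's ClassStratifiedSampler; the function name and parameters here are hypothetical. The point is that when each batch draws a fixed number of images per class from a fixed, ordered class subset, the support-label vector is identical on every iteration even though the images themselves differ:

```python
import numpy as np

def class_stratified_batches(labels, classes_per_batch, imgs_per_class,
                             num_batches, seed=0):
    """Sketch of class-stratified sampling: every batch draws the same
    number of images from each selected class, in a fixed class order,
    so the label pattern of every support batch is identical."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    by_class = {c: np.flatnonzero(labels == c) for c in classes}
    for _ in range(num_batches):
        picked = classes[:classes_per_batch]  # fixed class subset and order
        idx = np.concatenate([
            rng.choice(by_class[c], imgs_per_class, replace=True)
            for c in picked
        ])
        # the images (idx) vary between batches, but the labels repeat:
        # [c0]*imgs_per_class + [c1]*imgs_per_class + ...
        yield idx, np.repeat(picked, imgs_per_class)
```

Since the label vector is a deterministic function of the class order and the per-class count, it can be built once up front rather than recomputed per batch, which is why the training loop can reuse one fixed label tensor.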
3. Have you considered using the labeled loss plus the unlabeled loss as the final loss during training? That way, fine-tuning would not be required.
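The suggestion in question 3 can be sketched as a single combined objective: a supervised cross-entropy term on the labeled support set plus an unsupervised consistency term between two views' predictions, with a weighting factor. This is a hypothetical sketch of the questioner's idea (the function name, `lam` weight, and the exact form of the consistency term are assumptions), not the loss used in the paper:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def combined_loss(sup_logits, sup_targets, anchor_probs, target_probs, lam=1.0):
    """Hypothetical combined objective: supervised cross-entropy on the
    labeled batch, plus an unsupervised consistency term (cross-entropy
    between the predictions for two views of the same unlabeled image),
    weighted by lam."""
    # supervised cross-entropy on labeled examples
    p = softmax(sup_logits)
    sup = -np.log(p[np.arange(len(sup_targets)), sup_targets] + 1e-8).mean()
    # consistency: make anchor-view predictions match target-view predictions
    unsup = -(target_probs * np.log(anchor_probs + 1e-8)).sum(axis=1).mean()
    return sup + lam * unsup
```

With `lam=0` this reduces to purely supervised training; the open question raised in the thread is whether optimizing such a joint loss end-to-end would remove the need for a separate fine-tuning stage.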