Imagewoof - Resnet18/34 Training #55

Open
AhmedHussKhalifa opened this issue Aug 18, 2021 · 1 comment


AhmedHussKhalifa commented Aug 18, 2021

Hey,
I am trying to produce a baseline model for Imagewoof, since it is hard to find a pretrained one.
I trained both ResNet-18 and ResNet-34 with the Adam optimizer and reached 81.55%/82.62% top-1 accuracy, respectively.
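
For context, the setup was roughly the following; a minimal sketch, assuming Imagewoof is extracted to `./imagewoof2` in the standard train/val ImageFolder layout, and with placeholder hyperparameters (batch size, learning rate, epochs) rather than the exact values behind the numbers above:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed paths/hyperparameters -- placeholders, not the exact values used for the reported numbers.
DATA_DIR = "./imagewoof2"
BATCH_SIZE = 64
EPOCHS = 30
LR = 1e-3
device = "cuda" if torch.cuda.is_available() else "cpu"

normalize = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    normalize,
])
val_tf = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])

train_ds = datasets.ImageFolder(f"{DATA_DIR}/train", train_tf)
val_ds = datasets.ImageFolder(f"{DATA_DIR}/val", val_tf)
train_dl = DataLoader(train_ds, BATCH_SIZE, shuffle=True, num_workers=4)
val_dl = DataLoader(val_ds, BATCH_SIZE, num_workers=4)

# ResNet-18 trained from scratch on the 10 Imagewoof classes (swap in models.resnet34 for the other baseline).
model = models.resnet18(num_classes=10).to(device)
opt = torch.optim.Adam(model.parameters(), lr=LR)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(EPOCHS):
    model.train()
    for x, y in train_dl:
        x, y = x.to(device), y.to(device)
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

    # Top-1 accuracy on the validation split.
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in val_dl:
            x, y = x.to(device), y.to(device)
            correct += (model(x).argmax(1) == y).sum().item()
            total += y.numel()
    print(f"epoch {epoch}: top-1 = {correct / total:.4f}")
```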

I found some papers reporting that Imagenette and Imagewoof have roughly 13k samples for training, which I expect refers to the first version of the datasets.

  1. Could anyone post a link to the first version of these datasets? I want to reproduce their results. (For fetching the current version, see the fastai snippet below.)

  2. Could we use both datasets to fine-tune the hyperparameters of a training algorithm and expect them to generalize to ImageNet?

  3. Do the Imagenette and Imagewoof training sets contain some samples that also appear in the validation sets? I checked the file contents and found some, but I need someone to confirm this.

[Image: dataset statistics table from the referenced paper]
Ref: Data Efficient Stagewise Knowledge Distillation
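
As a side note on question 1: I don't know where the first-version archives are hosted, but the current (v2) archives can be pulled via fastai; a minimal sketch, assuming fastai's `untar_data`/`URLs` helpers:

```python
# Minimal sketch using fastai's dataset helpers (untar_data / URLs).
# Note: these constants point at the current (v2) archives, not the 13k-sample first version.
from fastai.vision.all import untar_data, URLs

imagenette_path = untar_data(URLs.IMAGENETTE)  # full-size Imagenette
imagewoof_path = untar_data(URLs.IMAGEWOOF)    # full-size Imagewoof

# Both extract to a folder with train/ and val/ subdirectories in ImageFolder layout.
print(imagenette_path.ls())
print(imagewoof_path.ls())
```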

@MaxVanDijck

I know this is outdated, but as for Q2: I'm actually investigating training algorithms on ImageNet-1k and Imagenette and how their accuracies compare. So far I haven't found a good correlation between the results on the two datasets. For example, VGG-11 performs essentially on par with ResNeXt-50 (32x4d) when benchmarked on Imagenette with the same fixed training parameters, while ConvNeXt performs pretty poorly on Imagenette. I imagine the bias-variance trade-off plays a huge role here.

Obviously I'm looking into this more, but it seems like a correlation between the two just doesn't exist. I'm also looking at ImageWoof moving forward; maybe the fine-grained classes provide more indicative performance.
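
For reference, the comparison boils down to something like the sketch below; the architecture names are torchvision constructors, the configuration values are illustrative rather than the ones I actually use, and the shared training/evaluation loop is left as a placeholder:

```python
import torch
from torchvision import models

# Fixed training configuration shared by every architecture (illustrative values only).
CONFIG = {"epochs": 20, "lr": 1e-3, "batch_size": 64, "num_classes": 10}

# Architectures compared under identical settings on Imagenette.
ARCHITECTURES = {
    "vgg11": lambda: models.vgg11(num_classes=CONFIG["num_classes"]),
    "resnext50_32x4d": lambda: models.resnext50_32x4d(num_classes=CONFIG["num_classes"]),
    "convnext_tiny": lambda: models.convnext_tiny(num_classes=CONFIG["num_classes"]),
}

def train_and_evaluate(model, optimizer, config):
    """Placeholder for the shared training/evaluation loop; returns top-1 accuracy."""
    raise NotImplementedError

results = {}
for name, build in ARCHITECTURES.items():
    model = build()
    optimizer = torch.optim.Adam(model.parameters(), lr=CONFIG["lr"])
    # Every model gets the same optimizer, schedule and number of epochs,
    # so accuracy differences reflect the architecture rather than the recipe.
    results[name] = train_and_evaluate(model, optimizer, CONFIG)

print(results)
```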
