Epoch autocounting, kissme and precision #30

Closed

voa18105 opened this issue Oct 29, 2017 · 3 comments
voa18105 commented Oct 29, 2017

Hello!
I have a few questions; maybe you can point me to a simple solution so I don't have to reinvent the wheel...

  1. Is there a simple way to make the code compute the number of epochs itself, based on the number of iterations passed?
  2. In my results, KISSME usually loses to Euclidean. Is that alright? I was expecting KISSME to clearly outperform the Euclidean metric.
  3. After training with multiple datasets (from your list), the precision never improves, but rather drops to 30-40%. Is that alright? I was expecting that more varied data would benefit precision, but it appears to hurt it instead...

If you have any answers, I'll be happy to use them in my research. Also, any suggestions on how to train for maximal precision would be great (I've seen your examples, but I cannot get precision higher than 80%; I have only one GTX 1060 and cannot repeat the experiments with batch = 256).

voa18105 changed the title from "Epoch autocounting" to "Epoch autocounting, kissme and precision" on Oct 29, 2017
Cysu (Owner) commented Nov 8, 2017

  1. Why not use epochs directly?
  2. KISSME can sometimes be worse than Euclidean, especially with deep-learning features. IMO the CNN itself implicitly learns a linear metric, so traditional metric learning might not help in such a case.
  3. How did you use and split the multiple datasets? Is the test subset the same as in single-dataset training? If you simply mix all the datasets together for evaluation, there will be many more gallery images, making retrieval much more difficult (see the sketch after this list).
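
A toy sketch of that gallery-size effect, assuming features stored as NumPy arrays (`rank1_accuracy` and all variable names are illustrative, not from this repo):

```python
import numpy as np

def rank1_accuracy(query_feats, query_ids, gallery_feats, gallery_ids):
    """Rank-1: fraction of queries whose nearest gallery image shares their ID."""
    # Pairwise Euclidean distances, shape (num_queries, num_gallery).
    dists = np.linalg.norm(query_feats[:, None] - gallery_feats[None], axis=2)
    nearest = dists.argmin(axis=1)
    return (gallery_ids[nearest] == query_ids).mean()

# Mixing datasets at evaluation time enlarges the gallery with distractors:
#   rank1_accuracy(q, qid, np.concatenate([g1, g2]), np.concatenate([gid1, gid2]))
# is typically lower than
#   rank1_accuracy(q, qid, g1, gid1)
# even though the model is unchanged.
```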

voa18105 (Author) commented Nov 9, 2017

  1. Because of differences between datasets. If I train on VIPeR or DukeMTMC, 100 epochs is far from the same number of iterations, so I cannot really compare trained networks under similar conditions. I never know in advance how many epochs I need, unless I check the number of IDs and images per ID and compute the iteration count (see the sketch after this list)... well, whatever, not a serious problem.
  2. I trained with one dataset, then with another, and thus performed several rounds while decreasing the learning rate. I understand that merging the datasets in advance would bring more benefit, but my idea was to check the fine-tuning ability while using a model pre-trained on a different dataset.
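
A minimal sketch of that conversion, assuming PyTorch `DataLoader`s (the function name, loader names, and the 20,000-iteration budget are all illustrative, not from this repo):

```python
import math
from torch.utils.data import DataLoader

def epochs_for_budget(loader: DataLoader, target_iterations: int) -> int:
    """Epochs needed so every dataset trains for the same total iteration count."""
    iters_per_epoch = len(loader)  # number of batches in one pass over the dataset
    return math.ceil(target_iterations / iters_per_epoch)

# A small dataset (e.g. VIPeR) then needs many epochs, a large one (e.g. DukeMTMC) few:
#   epochs_for_budget(viper_loader, 20000)
#   epochs_for_budget(duke_loader, 20000)
```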

Rizhiy commented Dec 14, 2017

  1. One epoch is defined as one pass over the whole dataset, so it doesn't really apply here. You can just keep a global iteration counter and use that (see the sketch after this list).
  2. Your precision really depends on which dataset you use for testing. Unfortunately, none of the current datasets are large enough to transfer well to the others.
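
A minimal sketch of such an iteration-driven loop, assuming a standard PyTorch setup (`model`, `loader`, `optimizer`, and `criterion` are placeholders, not names from this repo):

```python
def train_for_iterations(model, loader, optimizer, criterion, total_iters):
    """Train for a fixed iteration budget, restarting the loader as needed,
    so runs on datasets of different sizes are directly comparable."""
    model.train()
    data_iter = iter(loader)
    for it in range(total_iters):
        try:
            inputs, targets = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)  # dataset exhausted: start a fresh (reshuffled) pass
            inputs, targets = next(data_iter)
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)
        loss.backward()
        optimizer.step()
        if (it + 1) % 100 == 0:  # the global counter doubles as the logging clock
            print(f"iter {it + 1}/{total_iters}  loss {loss.item():.4f}")
```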

Cysu closed this as completed on Dec 23, 2017