
Use same data for training and validation? #2

Closed · ahangchen opened this issue Mar 2, 2017 · 5 comments

Comments

ahangchen commented Mar 2, 2017

Because rand_same_class and rand_diff_class apply the same filter on the images' set attribute in both training and validation, the model validates with data drawn almost entirely from the training set, which leads to unreliable validation accuracy.

Detail:

In cnn_train_dag.m, lines 98-99:

[net, state] = processEpoch(net, state, params, 'train',opts) ;
[net, state] = processEpoch(net, state, params, 'val',opts) ;

In each epoch:

  • function processEpoch does training with opts containing the data indexes whose set == 1;
  • it does validation with opts containing the data indexes whose set == 2;
  • and it runs the network in lines 219-226:
if strcmp(mode, 'train')
    net.mode = 'normal' ;                 % training behaviour (batch norm / dropout active)
    net.accumulateParamDers = (s ~= 1) ;
    net.eval(inputs, params.derOutputs, 'holdOn', s < params.numSubBatches) ;  % forward + backward
else
    net.mode = 'test' ;                   % inference behaviour
    net.eval(inputs) ;                    % forward only
end

The key point is that in processEpoch, line 206,

inputs = params.getBatch(params.imdb, batch,opts) ;

both training and validation use the same getBatch function to generate the input data.

In function getBatch in train_id_net_res_2stream.m, lines 51-57:

for i=1:batchsize
    if(i<=half)
        batch2(i) = rand_same_class(imdb, batch(i));   % first half: same-ID (positive) pairs
    else
        batch2(i) = rand_diff_class(imdb, batch(i));   % second half: different-ID (negative) pairs
    end
end

And in rand_same_class.m, lines 5-8:

% resample until we pick a different image that belongs to the training set (set == 1)
while(output==index || imdb.images.set(output)~=1)
    selected = randi(numel(list));
    output = list(selected);
end

This condition filters out every image whose set is 2, which means rand_same_class never produces validation data even when it is called during validation, and rand_diff_class behaves the same way.

Summary

Because rand_same_class and rand_diff_class apply the same set filter in both training and validation, the model validates with data drawn almost entirely from the training set, which leads to unreliable validation accuracy.

Fix Advice

Add a parameter indicating the eval mode to rand_same_class and rand_diff_class: filter out images whose set is 2 during training, and filter out images whose set is 1 during validation. If you confirm this bug, I can post a pull request to fix it.
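For illustration, here is a minimal sketch of the suggested change. It is not the repository's code: the mode argument is the proposed new parameter, and since the construction of list is not shown above, the lookup via imdb.images.label is an assumption about the imdb layout.

% sketch of the proposed fix, not the repository's code
function output = rand_same_class(imdb, index, mode)
    % choose which subset the partner image may come from
    if strcmp(mode, 'train')
        wantedSet = 1;   % training images only
    else
        wantedSet = 2;   % validation images only
    end
    label = imdb.images.label(index);            % assumed imdb field
    list = find(imdb.images.label == label);     % all images with the same ID
    output = index;
    % resample until we hit a different image from the wanted subset;
    % this loops forever if the ID has no other image in that subset,
    % so every ID needs at least two images per subset
    while(output == index || imdb.images.set(output) ~= wantedSet)
        selected = randi(numel(list));
        output = list(selected);
    end
end

getBatch would then forward the current mode, e.g. batch2(i) = rand_same_class(imdb, batch(i), mode), with the mode threaded through from processEpoch; the same change applies to rand_diff_class.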

layumi (Owner) commented Mar 3, 2017

Hi, @ahangchen.
You're right that we pair one validation image with one training image during validation, so the score of objective 2 differs from objective 1 at validation time.
However, it does not affect the final result, so I haven't changed the code.
Another reason is that the validation data is limited, so I decided to use a training image as the other element of the pair during validation.

ahangchen (Author) commented

@layumi

I can't agree that this doesn't affect the final result (the comparison accuracy)... The comparison is computed from the extracted features of two images. Extracting features from familiar data will yield higher comparison accuracy, while extracting them from unseen data may give uncertain results and lower accuracy.

The limited amount of validation data is a problem, but I would prefer to reuse some of the validation data rather than reuse training data.

layumi (Owner) commented Mar 5, 2017

@ahangchen First of all, thank you. I agree it can be considered a validation bug. But we think the result we use at present still reflects that the code runs correctly, because the other element of each pair is validation data, which is blind to the model. Furthermore, we do not really care about validation classification, since we test on a retrieval problem.
Second, I will consider it again and may fix it next week. I am writing a new version of this code, and in the next version I will also provide a more stable training method.

layumi (Owner) commented Mar 12, 2017

@ahangchen
Today I updated the second version of this repo and considered your suggestion again. It would be nice to have a more reasonable validation curve.

But if we fix this, we need to ensure there are two validation images from the same ID, which requires some extra code (a rough sketch follows). I still think we do not really care about validation, so to keep the code simple I decided to keep this version of validation. Besides, I still think that if we train the model in the right way, the ideal curve should be similar to the current validation curve, so it is fine to use the present version.
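The check could look roughly like this; the imdb.images.set and imdb.images.label fields are assumed from the usual MatConvNet imdb layout, not taken from the repository.

% rough sketch: flag validation IDs with fewer than two images,
% since rand_same_class needs a distinct same-ID partner for each image
valLabels = imdb.images.label(imdb.images.set == 2);
[ids, ~, idx] = unique(valLabels);
counts = accumarray(idx(:), 1);      % number of validation images per ID
badIds = ids(counts < 2);
if ~isempty(badIds)
    warning('%d validation IDs have fewer than 2 images.', numel(badIds));
end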

Anyway, your suggestion is nice! Thank you very much.

ahangchen (Author) commented

@layumi Got it. Thank you for your reply. 😄

layumi closed this as completed Sep 22, 2017