
Test data leaked into meta training #3

Closed
xiangjjj opened this issue Jun 15, 2018 · 9 comments

@xiangjjj

In the forward() function of maml.py (see here), the meta optimizer step is taken regardless of whether the input data is for training or for testing.
This leaks test examples into the meta-learner; the update needs to be guarded by the training flag.
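For reference, here is a minimal sketch of the pattern I mean (the class structure, dimensions, and the single inner step are simplified assumptions, not the exact code from maml.py):

```python
import torch
import torch.nn.functional as F
from torch import nn

class MAMLSketch(nn.Module):
    """Simplified stand-in for the MAML module; not the repo's actual code."""

    def __init__(self, in_dim=4, n_classes=2, inner_lr=0.01, meta_lr=1e-3):
        super().__init__()
        # \theta: the meta-learned parameters of the base learner.
        self.w = nn.Parameter(torch.randn(n_classes, in_dim))
        self.inner_lr = inner_lr
        self.meta_optim = torch.optim.Adam(self.parameters(), lr=meta_lr)

    def forward(self, support_x, support_y, query_x, query_y):
        # Inner loop: one functional gradient step on the support set (fast weights).
        support_loss = F.cross_entropy(F.linear(support_x, self.w), support_y)
        grad = torch.autograd.grad(support_loss, self.w, create_graph=True)[0]
        fast_w = self.w - self.inner_lr * grad

        # Outer loss: evaluate the adapted fast weights on the query set.
        query_loss = F.cross_entropy(F.linear(query_x, fast_w), query_y)

        # The issue: this meta update runs on *every* call to forward(),
        # so calling forward() on meta-test tasks also trains \theta on them.
        self.meta_optim.zero_grad()
        query_loss.backward()
        self.meta_optim.step()
        return query_loss
```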

@dragen1860
Owner

No.
In the meta-learning setting, meta-training includes a training and a testing split, and meta-testing does as well. The label spaces of meta-training and meta-testing are properly separated. Within a sub-stage, i.e., the training/testing split inside meta-training or meta-testing, that kind of leakage is normal, not an error.

@xiangjjj
Author

Thank you for your response!
I am aware that the sub-stage update is achieved through fast_weights. My question is: at the end of the forward() function, does self.meta_optim.step() update the parameters of the base learner, i.e., the \theta in the paper?

@dragen1860
Owner

dragen1860 commented Jun 18, 2018

Yes, meta_optim.step() will update the \theta parameters only. The fast_weights updates in the test sub-stage are discarded.
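A tiny standalone illustration of that distinction (hypothetical names, not code from this repo): the optimizer only holds the registered parameters, while a fast weight is just an intermediate tensor, so step() can never modify it.

```python
import torch

# \theta: a registered parameter handed to the optimizer.
theta = torch.nn.Parameter(torch.zeros(3))
meta_optim = torch.optim.SGD([theta], lr=0.1)

# A "fast weight": an ordinary tensor derived from theta inside the graph;
# it is never given to the optimizer.
fast_w = theta - 0.01

loss = (fast_w ** 2).sum()
meta_optim.zero_grad()
loss.backward()      # gradient flows through fast_w back into theta
meta_optim.step()    # updates theta only

print(theta)         # changed by the step
print(fast_w)        # unchanged; it was just an intermediate tensor
```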

@xiangjjj
Author

xiangjjj commented Jun 18, 2018

At the meta-testing phase, we need to call forward() on the meta-test tasks to evaluate performance at the meta level (not the task level). More importantly, we only want the fast weights for those tasks, because meta-test tasks should not be used to update the meta-learner, i.e., \theta. However, \theta is also updated, since self.meta_optim.step() is always executed inside forward().

Do you think self.meta_optim.step() should be skipped during meta-testing?

@xiangjjj
Author

Instead of always calling self.meta_optim.step(), shouldn't we do the following?

if training: self.meta_optim.step()
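As a sketch, something like this (assuming a training flag is passed into forward(); the inner-loop part mirrors the simplified sketch above, not the exact maml.py code):

```python
import torch
import torch.nn.functional as F

def forward(self, support_x, support_y, query_x, query_y, training=True):
    # Inner loop on the support set, kept functional via fast weights.
    support_loss = F.cross_entropy(F.linear(support_x, self.w), support_y)
    grad = torch.autograd.grad(support_loss, self.w, create_graph=True)[0]
    fast_w = self.w - self.inner_lr * grad

    # Query loss evaluated with the adapted fast weights.
    query_loss = F.cross_entropy(F.linear(query_x, fast_w), query_y)

    if training:
        # Meta-training: update \theta from the meta-training query loss.
        self.meta_optim.zero_grad()
        query_loss.backward()
        self.meta_optim.step()
    # Meta-testing: fall through without stepping, so \theta never sees
    # meta-test data; only fast_w is used to report query accuracy.
    return query_loss
```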

@dragen1860
Owner

Oh, maybe you are right.
During meta-testing there is no need to optimize the \theta parameters.
You can try it and see whether any problems come up.

@xiangjjj
Author

I have already tried it, and it turns out the optimization step must be left out during meta-testing. Otherwise the results are overly optimistic, because the model gets trained on the meta-test sets.

@dragen1860
Owner

Ok, thx. Feel free to submit a PR.

@dragen1860
Owner

Hi all, please git pull to get the latest version, which fixes the test data leakage bug.
