Prior p(w) #79
Comments
Hi Ze, sorry for the late reply! Yes, the prior is fixed in the implementation: we tried optimizing the Gaussian mean, but it didn't yield a significant difference in performance. This makes sense, however, because the parameterization has to match the aggregated posterior q(w | t), as shown in our Theorem 1. Therefore, simply making p(w) a Gaussian is sub-optimal. We stopped exploring more powerful parameterizations of p(w) because that wasn't the emphasis of the paper. We are investigating this for future work. For example, letting p(w) be an auto-regressive model sounds like a good idea...
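To make the contrast concrete, here is a minimal sketch (not the repository's sib.py; all names, shapes, and defaults are illustrative) of a diagonal-Gaussian prior p(w) whose mean is either fixed at zero or learned, entering the variational objective through KL(q(w | t) || p(w)):

```python
# Minimal sketch, NOT the repository's sib.py: a diagonal-Gaussian prior p(w)
# whose mean is either fixed at zero or learned. All names are illustrative.
import torch
import torch.nn as nn

class GaussianPrior(nn.Module):
    def __init__(self, dim, learn_mean=False, log_sigma=0.0):
        super().__init__()
        if learn_mean:
            # The variant the authors mention trying: optimize the prior mean.
            self.mu = nn.Parameter(torch.zeros(dim))
        else:
            # Fixed zero-mean prior, as in the current implementation.
            self.register_buffer("mu", torch.zeros(dim))
        self.log_sigma = log_sigma  # fixed scalar log std of the prior

    def kl_from(self, q_mu, q_log_sigma):
        # KL(q(w|t) || p(w)) for diagonal Gaussians, summed over dimensions.
        var_q = (2.0 * q_log_sigma).exp()
        var_p = torch.exp(torch.tensor(2.0 * self.log_sigma))
        return 0.5 * ((var_q + (q_mu - self.mu) ** 2) / var_p
                      - 1.0 + 2.0 * self.log_sigma - 2.0 * q_log_sigma).sum()
```

With learn_mean=True, the prior mean would simply receive gradients through this KL term during meta-training, which is the experiment described above.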
Thanks for the reply, Shell. Ze, I'm closing the issue, but feel free to reopen if there are more questions.
Thanks for the answer! There is indeed a lot of work that can be done on p(w).
Hi @ZeWang95, sorry for my slow responses!
You're right! What I wrote in the comment was incorrect. I'll add the log p(w) term back to the synthetic gradient updates. That will make the implementation precisely empirical Bayes. Thanks for pointing this out!
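For illustration only, here is a rough sketch of what adding the log p(w) term back to the inner update could look like, assuming a zero-mean Gaussian prior with variance sigma2; the names (inner_update, synthetic_grad_net, lr) are hypothetical and not the repository's actual API:

```python
# Rough sketch with hypothetical names (not the actual SIB code): one inner
# update of w that includes the gradient of log p(w) alongside the synthetic
# gradient, assuming p(w) = N(0, sigma2 * I).
import torch

def inner_update(w, features, synthetic_grad_net, lr=1e-3, sigma2=1.0):
    # Synthetic gradient approximating d(loss)/dw, where the loss is the
    # negative expected log-likelihood part of the variational objective.
    syn_grad = synthetic_grad_net(w, features)
    # -d/dw log p(w) for a zero-mean Gaussian prior is w / sigma2.
    prior_grad = w / sigma2
    # One descent step on the combined (likelihood + prior) objective.
    return w - lr * (syn_grad + prior_grad)
```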
Dear authors,
According to your ICLR paper Empirical Bayes Transductive Meta-Learning with Synthetic Gradients, it appears that p(w) is adjustable and is trained to achieve empirical Bayes.
However, in your code (sib.py, line 29), I believe p(w) is fixed as a zero-mean Gaussian.
Please correct me if I'm wrong, but how does this implementation achieve empirical Bayes?
Thank you in advance!
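As a hypothetical illustration of the point being raised (this is not the code at sib.py line 29), a fixed zero-mean Gaussian prior contributes a log-prior that is just an L2 penalty on w, up to a constant that does not affect gradients:

```python
# Hypothetical illustration, not sib.py line 29: for p(w) = N(0, sigma2 * I),
# log p(w) = -||w||^2 / (2 * sigma2) + const.
import torch

def log_prior(w, sigma2=1.0):
    return -0.5 * (w ** 2).sum() / sigma2  # normalizing constant omitted
```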