
Is "Evaluate the representation robustness" still a training process? #1

Closed
zoeleeee opened this issue Oct 13, 2020 · 1 comment

@zoeleeee

I tested the command under "Evaluate the representation robustness", and it is still a training process, rather than the stated "load a pre-trained model and test the mutual information between input and representation." So this cannot be an evaluation measure for an arbitrary pre-trained model, right? Or am I understanding or using something in the wrong way?

@schzhu (Owner) commented Oct 13, 2020

Hi zoeleeee, thanks for your interest! It is indeed an optimization-based evaluation, so it looks like another training process.

Since mutual information is difficult to estimate directly, we follow MINE (https://arxiv.org/pdf/1801.04062.pdf) and express the mutual information (a KL divergence) through its Donsker-Varadhan representation. For that representation to hold with equality, we need the optimal measurable function for the two distributions, and the optimization process you observe is exactly the search for that function. You can refer to the MINE paper and these lecture notes (https://web.stanford.edu/class/stats311/lecture-notes.pdf) for more about the Donsker-Varadhan representation of KL divergence.
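For reference, the Donsker-Varadhan (DV) representation writes the KL divergence as a supremum over test functions, which is what turns MI estimation into an optimization problem:

```math
D_{\mathrm{KL}}(P \,\|\, Q) \;=\; \sup_{T:\,\Omega \to \mathbb{R}} \; \mathbb{E}_{P}[T] \;-\; \log \mathbb{E}_{Q}\!\left[e^{T}\right]
```

Applying this with $P = P_{XZ}$ (the joint over input and representation) and $Q = P_X \otimes P_Z$ (the product of marginals), any critic $T_\theta$ gives a lower bound on $I(X;Z)$ that training tightens:

```math
I(X;Z) \;\ge\; \mathbb{E}_{P_{XZ}}\!\left[T_\theta\right] \;-\; \log \mathbb{E}_{P_X \otimes P_Z}\!\left[e^{T_\theta}\right]
```

Here is a minimal PyTorch sketch of that bound on a minibatch, just to make the mechanics concrete. It is not this repository's actual code: the names `Critic` and `dv_lower_bound` are hypothetical, and I assume joint samples arrive as paired `(x, z)` rows, with the product of marginals approximated by shuffling `z` within the batch, as in the MINE paper.

```python
import torch
import torch.nn as nn

class Critic(nn.Module):
    """MLP playing the role of the DV test function T_theta(x, z)."""
    def __init__(self, x_dim, z_dim, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, z):
        return self.net(torch.cat([x, z], dim=1)).squeeze(-1)

def dv_lower_bound(critic, x, z):
    """Minibatch estimate of the DV lower bound on I(X; Z).

    Rows of (x, z) are joint samples; shuffling z within the batch
    approximates samples from the product of marginals p(x)p(z).
    """
    joint_term = critic(x, z).mean()
    z_marg = z[torch.randperm(z.size(0))]
    # log E_{p(x)p(z)}[e^T] via a numerically stable log-mean-exp
    marg_term = torch.logsumexp(critic(x, z_marg), dim=0) \
        - torch.log(torch.tensor(z.size(0), dtype=torch.float))
    return joint_term - marg_term

# Maximizing dv_lower_bound over the critic's parameters (e.g. with Adam)
# yields the MI estimate; this inner optimization is why the evaluation
# looks like another training run.
```

So the gradient steps you see are fitting the critic, not the model under evaluation; the pre-trained model's weights stay fixed while the bound is tightened.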

schzhu closed this as completed on Dec 13, 2020