
[Attention] Information leak in visualization #40

Closed
tonysy opened this issue Nov 30, 2021 · 3 comments

Comments


tonysy commented Nov 30, 2021

Good job.

Hi, I think this operation will leak information from the original input image (the mean and variance of each patch):

rec_img = rec_img * (img_squeeze.var(dim=-2, unbiased=True, keepdim=True).sqrt() + 1e-6) + img_squeeze.mean(dim=-2, keepdim=True)

Anyone who uses a model trained with the normalized loss for visualization should pay attention to this operation.

I also suggest the author add a comment on this line. @pengzhiliang
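The leak can be seen in a minimal NumPy sketch of the de-normalization line quoted above. The shapes and values here are illustrative stand-ins, not the repository's actual tensors; `ddof=1` mirrors `unbiased=True` in the torch call:

```python
import numpy as np

# Toy stand-in for the original image, flattened into per-patch pixels:
# (batch, num_patches, pixels_per_patch, channels) -- shapes are assumptions.
rng = np.random.default_rng(0)
img_squeeze = rng.normal(0.5, 0.2, size=(1, 4, 16, 3))

# Per-patch statistics computed from the ORIGINAL input image.
mean = img_squeeze.mean(axis=-2, keepdims=True)
std = np.sqrt(img_squeeze.var(axis=-2, ddof=1, keepdims=True)) + 1e-6

# Even a model that predicts all zeros "reconstructs" every patch mean
# exactly, because de-normalization re-injects the input's statistics.
rec_img = np.zeros_like(img_squeeze) * std + mean
assert np.allclose(rec_img, np.broadcast_to(mean, rec_img.shape))
```

A zero prediction carries no information, yet the de-normalized output already matches each patch's true mean, which is exactly the leak being pointed out.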

@pengzhiliang (Owner)

Thank you! You are right.

Because the target in the pre-training process is normalized, the model's prediction is not in the original pixel space.
To visualize the reconstructed image, we de-normalize the prediction with the original mean and variance of each patch.

So, to avoid this leak, you need to use the real pixels as the target by setting --normlize_target to False.
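The difference between the two targets can be sketched as follows. The flag name `--normlize_target` (spelling as in the repository) is real; the shapes and variable names here are illustrative assumptions:

```python
import numpy as np

# Toy patches in the same layout as the sketch above:
# (batch, num_patches, pixels_per_patch, channels) -- shapes are assumptions.
rng = np.random.default_rng(1)
patches = rng.normal(0.5, 0.2, size=(1, 4, 16, 3))

# --normlize_target True: regress per-patch-normalized pixels, so the raw
# prediction must later be de-normalized with the input's statistics.
mean = patches.mean(axis=-2, keepdims=True)
std = np.sqrt(patches.var(axis=-2, ddof=1, keepdims=True) + 1e-6)
target_normalized = (patches - mean) / std

# --normlize_target False: regress the raw pixels directly, so the
# prediction can be visualized as-is, with no leaked statistics.
target_raw = patches

# The normalized target has (near-)zero mean within every patch.
assert np.allclose(target_normalized.mean(axis=-2), 0.0, atol=1e-6)
```

With raw-pixel supervision the visualization step needs no per-patch mean or variance from the input, which is why it avoids the leak.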

In fact, I am not sure which kind of supervision the reconstruction images shown in the paper were produced with.

And I will add a comment to avoid misunderstanding.

@MingfangDeng

Dear author,
where is "--normlize_target"? I cannot find it. I hope you can be more specific, thank you.

@MingfangDeng

OK, I found it. Sorry to bother you.
