
ACE filter function creates noisy images due to data processing #3

Open · FahadahmedK opened this issue Nov 1, 2023 · 3 comments


FahadahmedK commented Nov 1, 2023

Hi,

I trained the DDPM generator using your code on the CelebA dataset. I noticed that the filter function in main.py creates a noisy counterfactual explanation due to the following data processing step:

ce = (pe.detach() - 0.5) / 0.5

I found this problem when I performed some sanity checks, for instance by running the ACE filter function without any attack iterations. When I commented out the above data processing step, however, I obtained a much cleaner image. Would you have any idea why this could be happening?
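For reference, here is a minimal sketch of what that step does (hypothetical tensors, not the repository's code): it maps values from [0, 1] to [-1, 1], so applying it to data that is already in [-1, 1] would push values outside that range.

```python
import torch

# Minimal sketch of the rescaling in question (not the repository's code):
# it maps a tensor from [0, 1] to [-1, 1].
def rescale(pe: torch.Tensor) -> torch.Tensor:
    return (pe.detach() - 0.5) / 0.5

x01 = torch.rand(1, 3, 64, 64)          # values in [0, 1)
y = rescale(x01)
print(y.min().item(), y.max().item())   # roughly -1 and 1

xm11 = x01 * 2 - 1                      # values already in [-1, 1)
z = rescale(xm11)
print(z.min().item(), z.max().item())   # roughly -3 and 1: out of range
```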

I am also attaching the noisy counterfactual (created with no changes to the code) and the clean one (created with the data processing step commented out). Kindly note that these counterfactuals are produced without any attack. Thank you.
[Attached image: noisy_ce]
[Attached image: ce]

@guillaumejs2403 (Owner)

Hi,

With this information it is a little hard to tell what is happening, but I'll give it a try.

  1. When you say you are using no attack, do you mean you are using the flag --attack_method=None? If you are filtering the image, are you sure your model is working fine? Could you show me the original image?
  2. How are you debugging the code? Looking at it, if you comment out L157 the function should not work, since L157 is the first place the variable ce is assigned; L168 should then raise an error. My guess is that you are creating the ce variable beforehand and the function is working fine.
  3. Are you looking at the correct outputs of the function? I guess you are :P, but just to confirm: the first output is the ce, the second the pe, the third the noise, and the last the mask (see the sketch right after this list).
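In code, the ordering I mean is roughly the following (an illustrative stand-in only; the real filter function in main.py has a different signature, this only shows the order of the returned values):

```python
import torch

# Illustrative stand-in for the filter function's return convention;
# not the exact signature in main.py.
def filter_outputs_demo():
    ce = torch.zeros(1, 3, 64, 64)      # 1st output: counterfactual explanation
    pe = torch.zeros(1, 3, 64, 64)      # 2nd output: pre-explanation
    noise = torch.randn(1, 3, 64, 64)   # 3rd output: noise
    mask = torch.ones(1, 1, 64, 64)     # 4th output: mask
    return ce, pe, noise, mask

ce, pe, noise, mask = filter_outputs_demo()
```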

Kindly,
Guillaume


FahadahmedK commented Nov 3, 2023

Hi,

Thank you for your answer, first of all. I will try to answer your questions and explain my issue in as much detail as I can:

  1. To avoid attacking at all, I am simply setting the attack_iterations parameter to 0. I wanted to check why I am only able to retrieve noisy counterfactuals (I explain in point 2 exactly how I debugged the code). I used the diffusion.q_sample() and diffusion.p_sample() methods to check whether the generator I trained with your code works well in the first place, and it does; a sketch of this round trip is included after this list. Below I have attached the original image, along with the corrupted version from q_sample() and the reconstructed version from p_sample().
    [Attached image: img_forward_reverse]

  2. I am saving all the outputs for debugging, including boolmask, pe_mask, ce_mask, pe, and ce. I did not just comment out L157; I changed it slightly: I removed the rescaling and changed it to ce = pe.detach(), and of course I also commented out L187. Running the code with these settings produced a cleaner image. My finding is that when this rescaling step is performed, the filtering process produces a noisy image, as shown in my first post.

  3. I believe I am looking at the correct outputs, since I save all of them directly within the function for visual inspection.
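For completeness, this is roughly the forward/reverse sanity check I ran in point 1, assuming a guided-diffusion style GaussianDiffusion API (q_sample to corrupt, repeated p_sample calls to denoise); exact names and arguments may differ slightly in this repository:

```python
import torch

# Rough sketch of the forward/reverse sanity check, assuming a guided-diffusion
# style GaussianDiffusion object; exact names/arguments may differ here.
@torch.no_grad()
def forward_reverse_check(diffusion, model, x0, t_start=250):
    # x0 is assumed to already be in the range the model was trained on.
    b = x0.shape[0]
    t = torch.full((b,), t_start, dtype=torch.long, device=x0.device)

    # Forward process: corrupt the image up to timestep t_start.
    x_t = diffusion.q_sample(x_start=x0, t=t)

    # Reverse process: denoise step by step back to t = 0.
    x = x_t
    for step in reversed(range(t_start + 1)):
        t_step = torch.full((b,), step, dtype=torch.long, device=x0.device)
        x = diffusion.p_sample(model, x, t_step, clip_denoised=True)["sample"]

    return x_t, x  # the corrupted image and its reconstruction
```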

Kind regards,

Fahad

@guillaumejs2403 (Owner)

From what you say, I suspect that you trained your model without any normalization. As another sanity check, please try using the model I provide (Link).
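For reference, the filter's rescaling (pe - 0.5) / 0.5 assumes the DDPM was trained on images normalized to [-1, 1]. Below is my assumption of what a matching CelebA preprocessing pipeline looks like (the resolution is arbitrary, and this is not necessarily the repository's exact training script):

```python
import torchvision.transforms as T

# Assumed preprocessing for a DDPM trained on CelebA; illustrative only.
train_transform = T.Compose([
    T.Resize(128),
    T.CenterCrop(128),
    T.ToTensor(),                       # uint8 [0, 255] -> float [0, 1]
    T.Normalize(mean=[0.5, 0.5, 0.5],   # [0, 1] -> [-1, 1]
                std=[0.5, 0.5, 0.5]),
])
# A model trained without the final Normalize (i.e., on [0, 1] data) would
# receive out-of-range inputs from the filter's rescaling and produce noise.
```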
