
art.attack.evasion.LowProFool encounter bugs when using L_1-norm (or 0<p<2) #1970

Open
ZhipengHe opened this issue Dec 14, 2022 · 1 comment


ZhipengHe commented Dec 14, 2022

Describe the bug

When using the art.attack.evasion.LowProFool method, if I set the parameter norm in the range $p \in (0, 2)$, the attack raises a ValueError.

To Reproduce
Steps to reproduce the behavior:

  1. Go to my notebook (gist)
  2. When norm >= 2 or norm = 'inf' is set, the attack works well. For example,

     success_rate = test_general_cancer_lr(breast_cancer_dataset(splitter()), norm=2)
     print(success_rate)

     Result is:

     1.0

  3. When 0 < norm < 2 is set, the attack does not work. For example,

     success_rate = test_general_cancer_lr(breast_cancer_dataset(splitter()), norm=1)
     print(success_rate)

Error:

/usr/local/lib/python3.8/dist-packages/art/attacks/evasion/lowprofool.py:159: RuntimeWarning: divide by zero encountered in power
  self.importance_vec * self.importance_vec * perturbations * np.power(np.abs(perturbations), norm - 2)
/usr/local/lib/python3.8/dist-packages/art/attacks/evasion/lowprofool.py:159: RuntimeWarning: invalid value encountered in multiply
  self.importance_vec * self.importance_vec * perturbations * np.power(np.abs(perturbations), norm - 2)
...
...
ValueError: Input contains NaN, infinity or a value too large for dtype('float32').

From RuntimeWarning: divide by zero encountered in power:
In LowProFool L307-L313, the attack initializes the perturbation with np.zeros

# Initialize perturbation vectors and learning rate.
perturbations = np.zeros(samples.shape, dtype=np.float64)
eta = self.eta

# Initialize 'keep-the-best' variables.
best_norm_losses = np.inf * np.ones(samples.shape[0], dtype=np.float64)
best_perturbations = perturbations.copy()

In LowProFool L148-L171, when 0 < norm < 2, the gradient computation encounters a divide-by-zero error: the exponent norm - 2 is negative, so np.power(np.abs(perturbations), norm - 2) maps every zero entry of perturbations to infinity.

  numerator = (
      self.importance_vec * self.importance_vec * perturbations * np.power(np.abs(perturbations), norm - 2)
  )
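The warning can be reproduced in isolation. A minimal sketch with illustrative values (the real attack derives importance_vec from the model): with an all-zero initial perturbation and norm = 1, np.power(0, -1) evaluates to inf, and multiplying by the zero perturbation then yields NaN, which later trips scikit-learn's input validation.

```python
import numpy as np

# Illustrative values; the real attack uses model-derived feature importances.
importance_vec = np.array([0.5, 1.0, 2.0])
perturbations = np.zeros(3)  # LowProFool starts from an all-zero perturbation
norm = 1  # any 0 < norm < 2 gives a negative exponent

with np.errstate(divide="ignore", invalid="ignore"):
    numerator = (
        importance_vec * importance_vec * perturbations
        * np.power(np.abs(perturbations), norm - 2)
    )

print(numerator)  # 0 * inf -> NaN in every element
```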

Expected behavior
The attack model should also generate adversarial examples successfully for 0 < norm < 2.

System information (please complete the following information):

  • Windows 11
  • Python 3.8
  • ART 1.12.2
  • scikit-learn 1.0.2
@ZhipengHe ZhipengHe changed the title art.attack.evasion.LowProFool encounter bugs when using L_1-norm art.attack.evasion.LowProFool encounter bugs when using $\ell_1-norm$ Dec 14, 2022
@ZhipengHe ZhipengHe changed the title art.attack.evasion.LowProFool encounter bugs when using $\ell_1-norm$ art.attack.evasion.LowProFool encounter bugs when using L_1-norm (or 0<p<2) Dec 14, 2022
beat-buesser (Collaborator) commented Jan 9, 2023

Hi @ZhipengHe Thank you very much for the detailed description of the issue! Please excuse my delayed response. It looks like the issue is that, in the last code segment in your message above, norm < 2 results in 1 / |perturbations|^(2 - norm). This creates the division-by-zero error if any element of perturbations is zero. I think for norm < 2 we would have to add a check that perturbations stays larger than zero at all times.
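One possible guard along those lines is to clamp |perturbations| to a small epsilon before applying the fractional power. This is a hedged sketch of the idea, not ART's implementation; the function name and eps default are chosen for illustration:

```python
import numpy as np

def safe_numerator(importance_vec, perturbations, norm, eps=1e-12):
    """Gradient numerator of the L_p term, guarded against zero entries.

    For 0 < norm < 2 the exponent (norm - 2) is negative, so exactly-zero
    perturbations would produce inf/NaN. Clamping |perturbations| to eps
    keeps the result finite. Illustrative sketch only, not ART code.
    """
    abs_pert = np.maximum(np.abs(perturbations), eps)
    return (
        importance_vec * importance_vec * perturbations
        * np.power(abs_pert, norm - 2)
    )

importance = np.array([0.5, 1.0, 2.0])
pert = np.zeros(3)
grad = safe_numerator(importance, pert, norm=1)
print(grad)  # finite values, no NaN
```

Since the perturbation factor in the numerator is still zero, the guarded gradient is simply zero at the starting point instead of NaN, so the iteration can proceed.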
