
Add feature ConfidentMisclassification #151

Merged (2 commits) on Jul 10, 2018

Conversation

wfleshman
Contributor

This is a criterion for generating high-confidence misclassifications without conditioning on a target class. Attacks of this kind are useful when evaluating the robustness of adversarial defenses.

@coveralls


Coverage increased (+0.5%) to 100.0% when pulling aeea3f2 on wfleshman:add-confidence into 6f3a637 on bethgelab:master.

@coveralls

coveralls commented May 29, 2018


Coverage decreased (-0.04%) to 99.483% when pulling fcd54e0 on wfleshman:add-confidence into 6f3a637 on bethgelab:master.

@jonasrauber
Member

jonasrauber commented Jun 25, 2018

Looks good. We still need a test in https://github.com/bethgelab/foolbox/blob/master/foolbox/tests/test_criteria.py and an entry in the list of criteria at the beginning of https://github.com/bethgelab/foolbox/blob/master/foolbox/criteria.py#L14, then we should be good to go 👍

@jonasrauber
Member

I am not sure why coveralls reported 100% test coverage, because clicking on the details clearly shows that the lines are not covered 🤔

@jonasrauber jonasrauber merged commit 835f8e3 into bethgelab:master Jul 10, 2018