# Fooling the classifier: Ligand antagonism and adversarial examples

Code accompanying our paper.

We develop the idea that the phenomenon of adversarial examples and ligand antagonism are instances of the same general concept: fooling a decision-maker. The scripts reproduce the figures in the paper. By playing with the parameters you can get a feel for the dynamics involved. We encourage attempts at producing similar MTL -> ML images that demonstrate robust adversarial defence, and we look forward to more examples of adversarial/ambiguous boundary images.
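To illustrate the "fooling" idea outside our scripts, here is a minimal, self-contained sketch (not taken from the paper's code) of an adversarial perturbation against a toy logistic classifier: stepping the input along the sign of the gradient of the score flips the decision. The weights, step size `eps`, and input are all hypothetical choices for illustration.

```python
# Toy adversarial example: flip a logistic classifier's decision by
# perturbing the input along the sign of the score's input-gradient.
# All quantities (weights, eps) are illustrative, not from the paper.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)   # hypothetical classifier weights
b = 0.0

def score(x):
    # Sigmoid score of a linear classifier.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = rng.normal(size=5)
label = score(x) > 0.5

# For a linear model the input-gradient of the score is proportional
# to w, so stepping along -sign(w) (or +sign(w)) pushes the score
# toward the opposite class. eps is deliberately large here so the
# toy decision is guaranteed to flip.
eps = 5.0
direction = -np.sign(w) if label else np.sign(w)
x_adv = x + eps * direction

adv_label = score(x_adv) > 0.5
print(label, adv_label)
```

The same picture applies to ligand antagonism: a small, structured change to the input (the ligand mixture) moves the system across the decision boundary of the cellular "classifier".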

## Authors

Thomas J. Rademaker, Emmanuel Bengio, Paul François