Brief research paper and code about Adversarial Examples.
This paper studies a recently discovered issue affecting deep learning models: adversarial examples. Concretely, it has been shown that applying small but intentionally crafted perturbations to the inputs can drastically change a model's outputs. This is a serious problem that has attracted a large research effort in recent years, but it remains unsolved. We focus on image classifiers based on deep neural networks, describing and analyzing two techniques for generating adversarial examples, and we perform experiments with these methods under different conditions.
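To make the idea of "small but intentionally crafted perturbations" concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one common way to generate adversarial examples. This is an illustration only: the tiny logistic classifier, the weights, and the `fgsm` helper below are assumptions for the sketch, not the models or the two specific techniques analyzed in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """Perturb x by at most eps (per component) in the direction
    that increases the cross-entropy loss of a logistic classifier."""
    p = sigmoid(w @ x + b)            # model's predicted probability
    grad_x = (p - y) * w              # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)  # L-infinity bounded perturbation

rng = np.random.default_rng(0)
w = rng.normal(size=8)                # stand-in "trained" weights
b = 0.0
x = rng.normal(size=8)                # clean input
y = 1.0 if sigmoid(w @ x + b) >= 0.5 else 0.0  # label = current prediction

x_adv = fgsm(x, y, w, b, eps=0.5)
print(np.max(np.abs(x_adv - x)))      # perturbation size is bounded by eps
```

Even though each component of the input moves by at most `eps`, the perturbation is aligned with the loss gradient, so the model's confidence in the correct label drops far more than a random perturbation of the same size would cause.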