
On the Transferability of Adversarial Examples against CNN-based Image Forensics

Paper (IEEE Xplore): https://ieeexplore.ieee.org/abstract/document/8683772

2018-2019 Department of Information Engineering and Mathematics, University of Siena, Italy.

Authors: Mauro Barni, Kassem Kallas, Ehsan Nowroozi (Ehsan.Nowroozi65@gmail.com, personal website: www.enowroozi.com), and Benedetta Tondi.

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

You should have received a copy of the GNU General Public License along with this program. If not, see http://www.gnu.org/licenses/.

If you use this software, please cite the IEEE paper below. A preprint is also available on arXiv.

Cite

@INPROCEEDINGS{8683772,
  author={M. {Barni} and K. {Kallas} and E. {Nowroozi} and B. {Tondi}},
  booktitle={ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={On the Transferability of Adversarial Examples against CNN-based Image Forensics},
  year={2019},
  pages={8286-8290},
  keywords={convolutional neural nets;image forensics;learning (artificial intelligence);object detection;security of data;adversarial examples;CNN-based image forensic tools;CNN models;security-oriented applications;attack transferability;image forensics applications;forensic analyst;attacker;convolutional neural networks;adversarial multimedia forensics;adversarial machine learning},
  doi={10.1109/ICASSP.2019.8683772},
  ISSN={1520-6149},
  month={May},
}

Abstract

Recent studies have shown that Convolutional Neural Networks (CNNs) are relatively easy to attack through the generation of so-called adversarial examples. This vulnerability also affects CNN-based image forensic tools. Research in deep learning has shown that adversarial examples exhibit a certain degree of transferability, i.e., they retain part of their effectiveness even against CNN models other than the one targeted by the attack. This is a very strong property that undermines the usability of CNNs in security-oriented applications. In this paper, we investigate whether attack transferability also holds in image forensics applications. With specific reference to the case of manipulation detection, we analyse the results of several experiments considering different sources of mismatch between the CNN used to build the adversarial examples and the one adopted by the forensic analyst. The analysis ranges from cases in which the mismatch involves only the training dataset to cases in which the attacker and the forensic analyst adopt different architectures. The results of our experiments show that, in the majority of cases, the attacks are not transferable, thus easing the design of proper countermeasures, at least when the attacker does not have perfect knowledge of the target detector.
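
The core experimental protocol pairs two detectors: adversarial examples are crafted against a "source" CNN and then fed to a mismatched "target" CNN to check whether the attack transfers. The sketch below illustrates that protocol in outline only; it is not the paper's code. It assumes PyTorch, uses a one-step FGSM attack as a stand-in for the attacks studied in the paper, and `source_model`, `target_model`, and `loader` are hypothetical placeholders.

```python
# Hypothetical sketch of a transferability measurement, assuming PyTorch.
# source_model / target_model are two (possibly mismatched) CNN detectors.
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, eps=0.01):
    # One-step FGSM: perturb each pixel along the sign of the loss gradient.
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + eps * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # keep pixels in a valid range

def transfer_rate(source_model, target_model, loader, eps=0.01):
    # Fraction of adversarial examples, crafted on source_model,
    # that target_model also misclassifies.
    source_model.eval()
    target_model.eval()
    fooled, total = 0, 0
    for images, labels in loader:
        adv = fgsm_attack(source_model, images, labels, eps)
        with torch.no_grad():
            preds = target_model(adv).argmax(dim=1)
        fooled += (preds != labels).sum().item()
        total += labels.numel()
    return fooled / total
```

A low value returned by `transfer_rate`, despite a high fooling rate when the same examples are evaluated on the source model itself, would indicate poor transferability, which is the quantity the paper's experiments measure across dataset and architecture mismatches.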