Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder

This repo contains the code and data of the following paper:

Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder, Guanlin Li, Shuya Ding, Jun Luo, Chang Liu, CVPR 2020 [pdf]


We propose an attack-agnostic defence framework that enhances the intrinsic robustness of neural networks without jeopardizing their ability to generalize on clean samples. Our Feature Pyramid Decoder (FPD) framework applies to all block-based convolutional neural networks (CNNs). It implants denoising and image-restoration modules into a targeted CNN, and it also constrains the Lipschitz constant of the classification layer.
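Constraining the Lipschitz constant of a linear classification layer amounts to bounding the spectral norm of its weight matrix (the Lipschitz constant of x ↦ Wx). The sketch below is an illustrative plain-Python power-iteration version of that idea, not the paper's exact FPD_LCC construction; the function names and the bound c are our assumptions.

```python
import math

def spectral_norm(W, iters=50):
    """Estimate the largest singular value of W (a list of rows) by
    power iteration. For a linear map x -> Wx this value is its
    Lipschitz constant under the Euclidean norm."""
    n = len(W[0])
    v = [1.0 / math.sqrt(n)] * n
    for _ in range(iters):
        # One power-iteration step on W^T W.
        u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]
        w = [sum(W[i][j] * u[i] for i in range(len(W))) for j in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        if norm == 0.0:
            return 0.0
        v = [x / norm for x in w]
    u = [sum(W[i][j] * v[j] for j in range(n)) for i in range(len(W))]
    return math.sqrt(sum(x * x for x in u))

def lipschitz_constrain(W, c=1.0):
    """Rescale W so the Lipschitz constant of x -> Wx is at most c
    (a sketch of one common way to enforce such a constraint)."""
    s = spectral_norm(W)
    if s <= c:
        return W
    return [[w * (c / s) for w in row] for row in W]
```

In a PyTorch implementation the same effect is commonly obtained with `torch.nn.utils.spectral_norm` on the final linear layer.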

Training Strategy

Implementation details of the two-phase training strategy, which uses self-supervised and multi-task learning to train the enhanced CNN (FPD): FPD_R refers to the image restoration module; FPD_FD is the front denoising module; FPD_BD is the back denoising module; FPD_LCC is the modified classification layer; x_noisy denotes samples drawn from the ε-neighbourhood of each image. The first training phase is optimized by the L_2(x_clean, x_clean') loss: while the L_2 loss exceeds a threshold T, only the parameters of FPD_R and FPD_FD are updated; once the L_2 loss reaches T, the cross-entropy (CE) loss and the L_2 loss jointly train the enhanced CNN. The second phase then trains the enhanced CNN further, again jointly optimized by the CE and L_2 losses.
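The threshold-switching rule of phase one can be sketched as a small helper that selects which modules receive gradient updates. This is a minimal illustration of the logic described above, assuming the paper's module names; the real implementation would drive PyTorch optimizers rather than return name lists.

```python
def phase1_modules_to_update(l2_loss, T):
    """Phase-1 update rule (sketch): while the reconstruction L2 loss
    is above the threshold T, only the restoration (FPD_R) and front
    denoising (FPD_FD) modules are trained; once it reaches T, the
    whole enhanced CNN is trained jointly with CE + L2 loss."""
    if l2_loss > T:
        return ["FPD_R", "FPD_FD"]
    return ["FPD_R", "FPD_FD", "FPD_BD", "FPD_LCC", "backbone"]
```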


  • MNIST and SVHN can be downloaded via torchvision.datasets.
  • CALTECH-101 and CALTECH-256 can be downloaded from here and here, respectively.


Pre-trained Models

We have uploaded all the models we trained; you can download them freely from here.


  • Early stopping is used to avoid overfitting.
  • Each folder contains the enhanced network built on a different backbone model.
  • The two-phase training strategy is used to train our enhanced models.
  • To obtain better results, we introduce adversarial training after the two-phase training.
  • For black-box attacks, a separate model is used as the attacking reference.
  • Models are attacked via various methods on different datasets.


If you find it useful for your research, please consider citing the following reference paper:

@InProceedings{Li_2020_CVPR,
    author = {Li, Guanlin and Ding, Shuya and Luo, Jun and Liu, Chang},
    title = {Enhancing Intrinsic Adversarial Robustness via Feature Pyramid Decoder},
    booktitle = {The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
}

