
BackwardSmoothing

This is the official code for our paper Efficient Robust Training via Backward Smoothing (accepted at AAAI 2022) by Jinghui Chen (PSU), Yu Cheng (Microsoft), Zhe Gan (Microsoft), Quanquan Gu (UCLA), and Jingjing Liu (Tsinghua University).

Prerequisites

  • Python (3.6.9)
  • PyTorch (1.7.1)
  • CUDA
  • NumPy

BackwardSmoothing: A New Method for Efficient Robust Training

Arguments:

  • alpha: step size for perturbation
  • epsilon: input space perturbation strength
  • gamma: output space perturbation strength
  • beta: TRADES robust regularization parameter
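To give a rough sense of how these parameters interact, below is a minimal single-step TRADES-style robust loss sketch in PyTorch. It is illustrative only, not the repository's implementation: the function name trades_style_loss is hypothetical, the backward-smoothing initialization controlled by gamma is omitted (see the paper for that component), and clamping of adversarial inputs to the valid image range is skipped for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def trades_style_loss(model, x, y, alpha=0.031, epsilon=0.031, beta=10.0, steps=1):
    """Illustrative TRADES-style loss: natural CE + beta * KL robust regularizer.

    NOTE: hypothetical sketch, not the repo's code; the gamma-controlled
    backward-smoothing initialization from the paper is omitted, and the
    adversarial input is not clamped to the valid pixel range.
    """
    model.eval()
    # Random start inside the L-infinity ball of radius epsilon.
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon).requires_grad_(True)
    for _ in range(steps):
        adv_logits = model(x + delta)
        with torch.no_grad():
            clean_logits = model(x)
        # KL divergence between adversarial and clean predictions.
        kl = F.kl_div(F.log_softmax(adv_logits, dim=1),
                      F.softmax(clean_logits, dim=1), reduction="batchmean")
        grad, = torch.autograd.grad(kl, delta)
        # Gradient-ascent step on the perturbation, projected back to the ball.
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = delta.detach().requires_grad_(True)
    model.train()
    logits = model(x)
    adv_logits = model(x + delta.detach())
    natural = F.cross_entropy(logits, y)
    robust = F.kl_div(F.log_softmax(adv_logits, dim=1),
                      F.softmax(logits, dim=1), reduction="batchmean")
    return natural + beta * robust
```

Here beta trades clean accuracy against robustness, epsilon bounds the input-space perturbation, and alpha is the per-step attack size, matching the argument descriptions above.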

Examples:

  • Train Backward Smoothing on CIFAR-10 using ResNet-18:
  $ python3 train_trades_backward.py --arch resnet --dataset cifar10 --beta 10.0 --gamma 1.0 --alpha 0.031 --epsilon 0.031

Reference

For technical details and full experimental results, please check the paper.

@inproceedings{chen2022efficient, 
	author = {Chen, Jinghui and Cheng, Yu and Gan, Zhe and Gu, Quanquan and Liu, Jingjing}, 
	title = {Efficient robust training via backward smoothing}, 
	booktitle = {AAAI},
	year = {2022}
}
