LMT

Public code for the paper "Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks."

Dependent library:

How to train:

python3 train.py (configuration-file-name).py

What is the configuration file?

Example:

python3 train.py config/parseval_svhn/default.py --gpu 0

How to evaluate with attacks:

python3 evaluate.py (result-dir-of-trained-network) (attack-configuration).py

What is the result directory?

Example:

python3 evaluate.py result/config/parseval_svhn/default-00 config/attack/deep_fool.py
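
Putting the two steps together, a minimal end-to-end sketch; the result directory name is taken from the example above, and the assumption that the -00 suffix is assigned to the training run is not verified here:

# Train on SVHN with the Parseval configuration, using GPU 0.
python3 train.py config/parseval_svhn/default.py --gpu 0
# Evaluate the trained network against the DeepFool attack configuration.
# The result directory path mirrors the configuration path under result/;
# the -00 suffix is assumed to be generated per training run.
python3 evaluate.py result/config/parseval_svhn/default-00 config/attack/deep_fool.py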

Reference

Y. Tsuzuku, I. Sato, and M. Sugiyama. "Lipschitz-Margin Training: Scalable Certification of Perturbation Invariance for Deep Neural Networks." 2018. (url, bibtex)
