
Adversarial Workshop

TensorFlow has taken the deep learning world by storm. This hands-on workshop, led by one of TensorFlow's main contributors, Illia Polosukhin, will cover:

  • Dropout - both for preventing overfitting and as a mechanism for estimating what the model doesn't know (prediction confidence); see the Monte Carlo dropout sketch after this list.

  • Augmenting data with adversarial examples - to prevent overfitting and speed up training; see the FGSM sketch after this list.

  • How to limit technical exploits of your models - e.g. how to keep your model from going haywire using methods such as confidence thresholds, adversarial examples, a discriminator, separate classifiers, or simple whitelists; a confidence-based rejection sketch closes this list.
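
As a sketch of the first point, the snippet below uses Monte Carlo dropout: the model is run several times with dropout still enabled, and the spread of the predictions is read as a confidence signal. It is written against the TensorFlow 2 / Keras API rather than the workshop-era code, and the model architecture, layer sizes, and sample count are illustrative assumptions, not the workshop's model.

```python
import numpy as np
import tensorflow as tf

# Small illustrative classifier with a dropout layer (placeholder architecture).
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),
])

def mc_dropout_predict(model, x, n_samples=30):
    """Average n_samples stochastic forward passes with dropout left on."""
    # training=True keeps dropout active at inference time, so repeated
    # passes disagree more on inputs the model is unsure about.
    preds = np.stack([model(x, training=True).numpy() for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

# Example: mean_probs, std_probs = mc_dropout_predict(model, x_batch)
```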
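For the second point, a common way to generate adversarial examples for augmentation is the fast gradient sign method (FGSM). Under the same TensorFlow 2 assumptions, the sketch below perturbs a batch in the direction that most increases the loss; the loss function and `epsilon` are placeholder choices.

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm_examples(model, x, y, epsilon=0.1):
    """Perturb x by epsilon along the sign of the loss gradient."""
    x = tf.convert_to_tensor(x, dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(x)  # x is a plain tensor, so watch it explicitly
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return x + epsilon * tf.sign(grad)

# The adversarial batch can then be mixed into the clean training batch:
# x_aug = tf.concat([x_batch, fgsm_examples(model, x_batch, y_batch)], 0)
# y_aug = tf.concat([y_batch, y_batch], 0)
```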
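For the last point, the simplest safeguard is to refuse to act on low-confidence predictions and defer to a fallback. This sketch combines the mean and spread from the MC dropout example above into a reject option; the thresholds are arbitrary assumptions.

```python
import numpy as np

def predict_or_reject(mean_probs, std_probs, min_conf=0.9, max_std=0.1):
    """Return the predicted class, or -1 when the model is not sure enough."""
    cls = int(np.argmax(mean_probs))
    if mean_probs[cls] < min_conf or std_probs[cls] > max_std:
        return -1  # defer to a whitelist, a separate classifier, or a human
    return cls
```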

References
