TensorFlow has taken the deep learning world by storm. This hands-on workshop, led by one of TensorFlow’s main contributors, Illia Polosukhin, will cover:
Dropout - both for preventing overfitting and as a mechanism to estimate what the model doesn't know (the confidence of its predictions).
Augmenting training data with adversarial examples - to prevent overfitting and speed up training.
How to limit technical exploits of your models - i.e. how to keep a model from going haywire, using methods such as prediction confidence, adversarial examples, a discriminator, separate classifiers, or simple whitelists.
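The dropout-for-uncertainty idea above (often called Monte Carlo dropout, after Gal and Ghahramani) can be sketched in plain NumPy: keep dropout active at prediction time and average many stochastic forward passes. The network shape and weights here are illustrative, not from the workshop.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer network; weights are random placeholders.
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 3))

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def forward(x, drop_rate=0.5, training=True):
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden layer
    if training:                          # dropout stays ON at test time for MC dropout
        mask = rng.random(h.shape) >= drop_rate
        h = h * mask / (1.0 - drop_rate)  # inverted-dropout scaling
    return softmax(h @ W2)

def mc_dropout_predict(x, n_samples=100):
    # Average many stochastic passes; the spread across samples is a
    # proxy for what the model doesn't know about this input.
    samples = np.stack([forward(x, training=True) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

x = rng.normal(size=(1, 4))
mean, std = mc_dropout_predict(x)
```

A large per-class standard deviation flags inputs the model is unsure about, which is exactly the "what the model doesn't know" signal used to gate predictions.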
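The adversarial-example augmentation above is commonly done with the fast gradient sign method from the Goodfellow et al. paper referenced below. A minimal sketch for a toy logistic-regression model, where the input gradient has a closed form (the weights and epsilon are illustrative assumptions):

```python
import numpy as np

# Toy binary logistic model; w and b are placeholder values.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x, y):
    # Gradient of binary cross-entropy w.r.t. the input x
    # for logistic regression: (p - y) * w.
    p = sigmoid(x @ w + b)
    return (p - y) * w

def fgsm(x, y, eps=0.1):
    # Fast gradient sign method: perturb each coordinate by eps in the
    # direction that increases the loss.
    return x + eps * np.sign(loss_grad_x(x, y))

x = np.array([0.2, 0.4, -0.1])
y = 1.0
x_adv = fgsm(x, y)
# Training on (x_adv, y) alongside clean data is adversarial training.
```

In TensorFlow the same perturbation is built from the gradient of the model's loss with respect to its input tensor rather than an analytic formula.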
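The simplest of the exploit-limiting methods listed above, confidence thresholds and whitelists, can be sketched as a guard in front of the model's output. The threshold value and whitelist semantics here are illustrative assumptions.

```python
import numpy as np

def predict_with_rejection(probs, threshold=0.9, whitelist=None):
    """Return (class, confidence), or (None, confidence) if rejected.

    Rejects predictions the model is not confident about; optionally
    restricts outputs to a whitelist of allowed class indices.
    """
    cls = int(np.argmax(probs))
    conf = float(probs[cls])
    if whitelist is not None and cls not in whitelist:
        return None, conf   # class not allowed for this deployment
    if conf < threshold:
        return None, conf   # model too unsure; fall back or ask a human
    return cls, conf
```

A rejected prediction can be routed to a fallback model, a default answer, or human review instead of being served directly.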
References:
- Gal and Ghahramani, 2015, "A Theoretically Grounded Application of Dropout in Recurrent Neural Networks": https://arxiv.org/pdf/1512.05287v5
- Srivastava et al., 2014, "Dropout: A Simple Way to Prevent Neural Networks from Overfitting": https://www.cs.toronto.edu/~hinton/absps/JMLRdropout.pdf
- Goodfellow et al., 2015, "Explaining and Harnessing Adversarial Examples": https://arxiv.org/pdf/1412.6572v3
- Miyato et al., 2016, "Distributional Smoothing with Virtual Adversarial Training": https://arxiv.org/pdf/1507.00677v9