M.Sc. dissertation, Fadi Younis, July 2018. Defended on April 17th, 2018.
Conference Paper: https://link.springer.com/chapter/10.1007/978-3-030-29729-9_2
Thesis Defense Slides: https://drive.google.com/file/d/1F7CWZvY76RD4XwYqr8zTuAZlS_D6jCeY/view?usp=sharing
Mirrors:
License: BSD 3-Clause
Contact: Fadi Younis (@FadiSYounis, fadiyounis.atwork@gmail.com)
Click on grad-thesis.pdf above, or here. GitHub should do the rest.
Built with the Ryerson University grad student template, retrieved from https://phymbie.physics.ryerson.ca/howto/
June 10th 2017 - April 5th 2018
The work herein is Copyright 2017–2018 Fadi Younis and Ryerson University. No rights are given to reproduce or modify this work.
The market demand for online machine-learning services is increasing, and so too are the threats against them. Adversarial inputs represent a new threat to Machine-Learning-as-a-Service (MLaaS) platforms. Meticulously crafted malicious inputs can mislead and confuse the learning model, even when the adversary has access only to the inputs and output labels. As a result, there has been increased interest in defence techniques to combat these types of attacks.
In this thesis, we propose a network of high-interaction honeypots as a decentralized defence framework that prevents an adversary from corrupting the learning model, primarily through the use of deception. We accomplish our aim by:
- preventing the attacker from correctly learning the labels and approximating the architecture of the black-box system;
- luring the attacker away, towards a decoy model, using HoneyTokens;
- creating infeasible computational work for the adversary.
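The decoy-model idea in the second point can be sketched in Go (the repo's language). This is a minimal illustration under stated assumptions, not the thesis implementation: the query threshold, the stand-in models, and the `Gateway` type are all hypothetical names invented here. The sketch shows a gateway that serves honest predictions to normal clients but silently reroutes a client that queries too often (a sign of model-extraction probing) to a decoy model that returns flipped labels.

```go
package main

import "fmt"

// queryThreshold is an assumed tuning parameter, not a value from the thesis.
const queryThreshold = 10

// realModel is a hypothetical stand-in classifier.
func realModel(x []float64) string {
	if x[0] > 0 {
		return "positive"
	}
	return "negative"
}

// decoyModel deliberately returns misleading labels, poisoning the
// training set an adversary builds for a substitute model.
func decoyModel(x []float64) string {
	if realModel(x) == "positive" {
		return "negative"
	}
	return "positive"
}

// Gateway counts queries per client and switches suspected probers
// over to the decoy model without any visible change in the API.
type Gateway struct {
	counts map[string]int
}

func NewGateway() *Gateway {
	return &Gateway{counts: make(map[string]int)}
}

// Classify serves a prediction, rerouting heavy querying clients
// to the decoy once they exceed the threshold.
func (g *Gateway) Classify(client string, x []float64) string {
	g.counts[client]++
	if g.counts[client] > queryThreshold {
		return decoyModel(x)
	}
	return realModel(x)
}

func main() {
	g := NewGateway()
	// The first queryThreshold queries get honest labels;
	// later ones are answered by the decoy.
	for i := 0; i < 12; i++ {
		fmt.Println(g.Classify("attacker", []float64{1.0}))
	}
}
```

In a real deployment the per-client state would live behind the MLaaS front end, and the switch to the decoy would be driven by richer signals (query distribution, HoneyToken hits) rather than a bare counter.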
Go, Bash, third-party software
Fadi Younis
July 2018