Skip to content

My now-completed academic thesis, lo and behold!


Thesis

Using Honeypots In A Decentralized Framework To Defend Against Adversarial Machine-Learning Attacks

M.Sc. dissertation, Fadi Younis, July 2018. Defended on April 17th, 2018.

Resources

Conference Paper: https://link.springer.com/chapter/10.1007/978-3-030-29729-9_2
Thesis Defense Slides: https://drive.google.com/file/d/1F7CWZvY76RD4XwYqr8zTuAZlS_D6jCeY/view?usp=sharing

Mirrors:

License: BSD 3-Clause

Contact: Fadi Younis (@FadiSYounis, fadiyounis.atwork@gmail.com)

FAQ

Hey, how do I view your thesis?

Click on grad-thesis.pdf in the repository's file list. GitHub should do the rest.

Which template did you use?

The Ryerson University Grad Student Template, retrieved from https://phymbie.physics.ryerson.ca/howto/

How long did it take you to research, build and write your thesis?

June 10th, 2017 to April 5th, 2018 (just under ten months).


Copyright

The work herein is Copyright 2017-2018 Fadi Younis and Ryerson University. No rights are given to reproduce or modify this work.

Short Summary:

The market demand for online machine-learning services is increasing, and so too are the threats against them. Adversarial inputs represent a new threat to Machine-Learning-as-a-Service (MLaaS) platforms. Meticulously crafted malicious inputs can be used to mislead and confuse the learning model, even in cases where the adversary only has access to input and output labels. As a result, there has been increased interest in defence techniques to combat these types of attacks.

In this thesis, we propose a network of high-interaction honeypots as a decentralized defence framework that prevents an adversary from corrupting the learning model, primarily through the use of deception. We accomplish our aim by (see the sketch after this list):

  1. preventing the attacker from correctly learning the labels and approximating the architecture of the black-box system;
  2. luring the attacker away, towards a decoy model, using HoneyTokens;
  3. creating infeasible computational work for the adversary.
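
Since the framework itself is not included in this repository, here is a minimal, hypothetical sketch in Go (the implementation language used in this work) of what a decoy endpoint embodying these three ideas might look like. The endpoint path, label set, delay value, and token format are all assumptions for illustration, not the thesis's actual implementation:

```go
// decoy.go -- an illustrative sketch of a decoy model endpoint, NOT the
// thesis implementation. All names and values here are hypothetical.
package main

import (
	"crypto/rand"
	"encoding/hex"
	"encoding/json"
	"log"
	"math/big"
	"net/http"
	"time"
)

// prediction is the decoy's response: a fabricated label plus a
// HoneyToken that uniquely marks the response for later tracing.
type prediction struct {
	Label      string `json:"label"`
	HoneyToken string `json:"token"`
}

// newHoneyToken returns a random identifier; if it later reappears in
// traffic, the decoy's output was reused by the adversary.
func newHoneyToken() string {
	b := make([]byte, 16)
	if _, err := rand.Read(b); err != nil {
		log.Fatal(err)
	}
	return hex.EncodeToString(b)
}

// decoyLabel returns a deliberately misleading label, so that the
// attacker cannot learn the real model's decision boundary from queries.
func decoyLabel() string {
	labels := []string{"cat", "dog", "bird", "fish"} // hypothetical label set
	i, _ := rand.Int(rand.Reader, big.NewInt(int64(len(labels))))
	return labels[i.Int64()]
}

func handlePredict(w http.ResponseWriter, r *http.Request) {
	// Artificial delay: raises the adversary's cost per probe.
	time.Sleep(250 * time.Millisecond)

	resp := prediction{Label: decoyLabel(), HoneyToken: newHoneyToken()}
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(resp)
}

func main() {
	http.HandleFunc("/predict", handlePredict)
	log.Println("decoy model listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Querying http://localhost:8080/predict returns a fabricated label together with a traceable token, loosely corresponding to aims 1 and 2 above, while the artificial delay gestures at aim 3.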

Tools Used:

Go (Golang), Bash, and third-party software


Fadi Younis
July 2018