ZeanQin/adversarial-machine-learning


Adversarial Machine Learning Project

Machine learning systems are used in a growing number of applications, such as spam email filtering and intrusion detection. One area where they have recently gained popularity is predictive policing, in which machine learning algorithms predict where crimes are likely to occur so that police can dispatch patrol cars to those places. Contrary to common perception, machine learning systems can be vulnerable to a variety of attacks. In this project, we focus on adversarial machine learning in predictive policing. We first look at two of the most successful models used in predictive policing: the self-exciting point process model and the Series Finder model. We then examine the attack techniques available to a malicious adversary. These techniques fall into two categories: causative attacks, which generally require some control over the training process, and exploratory attacks, which typically probe the inner state of the learner. We built a model based on kernel density estimation and mounted attacks against it. We also discuss techniques for mitigating the effects of these attacks, followed by reflections and guidance on future work. We conclude that, contrary to common perception, machine learning models can indeed be vulnerable to a variety of attacks.
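The idea of a causative attack on a kernel density estimation model can be sketched in a few lines. The snippet below is an illustrative toy, not this project's actual implementation: the hotspot locations, grid, bandwidth choice, and injection strategy are all assumptions made for the example. A KDE is fitted on crime incident coordinates to find the predicted hotspot; an adversary who can inject fabricated incident reports into the training data (the causative setting) then drags the predicted hotspot toward a decoy location.

```python
# Toy causative (poisoning) attack on a KDE hotspot predictor.
# All locations and parameters here are illustrative assumptions.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Historical crime incidents clustered around one true hotspot.
true_hotspot = np.array([2.0, 2.0])
incidents = rng.normal(loc=true_hotspot, scale=0.3, size=(200, 2))

def top_hotspot(points):
    """Fit a KDE on incident coordinates and return the grid cell
    with the highest estimated crime density."""
    kde = gaussian_kde(points.T)          # bandwidth via Scott's rule
    xs = np.linspace(-1.0, 6.0, 50)
    grid = np.array([[x, y] for x in xs for y in xs])
    density = kde(grid.T)
    return grid[np.argmax(density)]

clean_pred = top_hotspot(incidents)

# Causative attack: inject tightly clustered fake incident reports
# around a decoy location, shifting the density peak away from the
# true hotspot so patrols are sent to the wrong place.
decoy = np.array([5.0, 5.0])
fake = rng.normal(loc=decoy, scale=0.1, size=(300, 2))
poisoned = np.vstack([incidents, fake])
poisoned_pred = top_hotspot(poisoned)

print("clean hotspot prediction:   ", clean_pred)
print("poisoned hotspot prediction:", poisoned_pred)
```

The clean prediction lands near the true hotspot, while the poisoned prediction lands near the decoy, showing how control over training data translates into control over the model's output. An exploratory attack, by contrast, would leave the data untouched and instead query the fitted model to find low-density regions where new crimes would go unpredicted.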

