Machine learning systems are used in a growing number of applications, such as spam email filtering and intrusion detection. One area in which they have recently gained popularity is predictive policing, where a system uses machine learning algorithms to predict where crimes are likely to occur so that police can dispatch patrols to those locations. Contrary to common perception, machine learning systems can be vulnerable to a variety of attacks. In this project, we focus on adversarial machine learning in predictive policing. We first examine two of the most successful models used in predictive policing: the self-exciting point process model and the Series Finder model. We then survey the techniques a malicious adversary can use to attack these models. They can be categorised into causative attacks, which generally require control over the training process, and exploratory attacks, which typically probe the inner state of the learner. We built a model based on kernel density estimation and mounted attacks against it. We also discuss techniques for mitigating the effects of these attacks, and offer reflections and guidance on future work. We conclude that, contrary to common perception, machine learning models can be vulnerable to a wide range of attacks.
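To make the two ideas above concrete, the sketch below shows a minimal kernel density estimation hotspot predictor and a causative (data-poisoning) attack against it. This is an illustration only, not the project's actual model: the coordinates, bandwidth, and the helper names `kde_density` and `predicted_hotspot` are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def kde_density(points, grid, bandwidth=0.4):
    """Gaussian kernel density estimate of `points`, evaluated at `grid`."""
    diff = grid[:, None, :] - points[None, :, :]      # (G, N, 2) pairwise offsets
    sq_dist = (diff ** 2).sum(axis=-1)                # squared distances
    return np.exp(-sq_dist / (2 * bandwidth ** 2)).sum(axis=1) / len(points)

def predicted_hotspot(points, lo=-1.0, hi=7.0, n=60):
    """Return the grid cell with the highest estimated density (the 'hotspot')."""
    xs = np.linspace(lo, hi, n)
    xx, yy = np.meshgrid(xs, xs)
    grid = np.column_stack([xx.ravel(), yy.ravel()])
    return grid[np.argmax(kde_density(points, grid))]

# Genuine incident reports, clustered around (2, 2).
incidents = rng.normal(loc=2.0, scale=0.5, size=(200, 2))
clean = predicted_hotspot(incidents)          # lands near (2, 2)

# Causative attack: the adversary injects fabricated reports around (5, 5),
# dragging the predicted hotspot away from the true crime cluster.
fabricated = rng.normal(loc=5.0, scale=0.3, size=(300, 2))
poisoned = predicted_hotspot(np.vstack([incidents, fabricated]))
```

Because the attacker controls part of the training data (the reported incidents), a dense enough cluster of fabricated reports shifts the density peak, and patrols are sent to the wrong location; an exploratory attack would instead query the fitted density to find low-density regions the model leaves unpatrolled.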
Repository: ZeanQin/adversarial-machine-learning