
Ad-versarial: Defeating Perceptual Ad-Blocking

This repository contains code to create, evaluate, and attack various types of Perceptual Ad-Blockers.

Our results are described in the following paper:

Ad-versarial: Defeating Perceptual Ad-Blocking
Florian Tramèr, Pascal Dupré, Gili Rusak, Giancarlo Pellegrino, and Dan Boneh


Perceptual ad-blocking was recently proposed as a novel, more robust way of automatically detecting online ads: instead of filter lists, it relies on the visual cues associated with ads, in the same way a human user would (see The Future of Ad Blocking: An Analytical Framework and New Techniques).

This idea has recently attracted the attention of Adblock Plus, which unveiled Sentinel, a prototype neural network that detects ads in Facebook screenshots. We trained a similar model on screenshots from hundreds of different news websites. As shown below, it does a pretty good job of locating ads (here, on an article from The Guardian):

A video demonstrating our model in action on real websites is available here, or check out the GIF below.

The goal of our work is to show that, while sound in principle, perceptual ad-blocking is easily defeated when instantiated with current computer vision techniques. Specifically, we create adversarial examples for ad-detection classifiers that allow web publishers or ad networks to evade and detect perceptual ad-blocking. We construct adversarial examples both for traditional computer vision algorithms (e.g., perceptual hashing, SIFT, or OCR) aimed at detecting ad-disclosure cues such as the AdChoices logo, and for deep neural networks such as Sentinel that find ads in rendered web content.

As an example, the images below show, from left to right: a standard AdChoices logo; an adversarial example for SIFT, which evades ad-blocking while still disclosing the ad to users; and a mostly invisible false positive for SIFT, which could serve as a "honeypot" to detect ad-blocking.

AdChoices logo Adversarial example for SIFT False positive for SIFT
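As a toy illustration of why hash-based matching is brittle, the sketch below implements a simple average hash as a stand-in for the perceptual-hashing detectors we attack (the repository's actual attacks live in element-frame-based). Perturbing a single strip of a synthetic "logo" shifts the global mean, flipping hash bits even in untouched blocks, so a Hamming-distance matcher no longer fires:

```python
import numpy as np

def average_hash(img, hash_size=8):
    """Block-mean downscale a grayscale image to hash_size x hash_size,
    then threshold each block against the global mean (a 64-bit hash)."""
    h, w = img.shape
    blocks = img.reshape(hash_size, h // hash_size,
                         hash_size, w // hash_size).mean(axis=(1, 3))
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming(h1, h2):
    return int(np.sum(h1 != h2))

# A synthetic 64x64 "logo": a smooth left-to-right gradient.
logo = np.tile(np.linspace(0.0, 255.0, 64), (64, 1))
assert hamming(average_hash(logo), average_hash(logo)) == 0  # exact copy matches

# Perturb only the leftmost 8-pixel strip: this also shifts the global
# mean, flipping hash bits in untouched blocks near the threshold.
perturbed = logo.copy()
perturbed[:, :8] = 255.0
d = hamming(average_hash(logo), average_hash(perturbed))
print(d)  # nonzero distance: a threshold-based matcher no longer matches
```

Real attacks against SIFT or OCR are more involved (they optimize an imperceptible perturbation rather than overwriting pixels), but the failure mode is the same: small input changes move the extracted features across the matcher's decision threshold.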

For perceptual ad-blockers like Sentinel that operate on full webpage screenshots, even more surprising attacks are possible. In the mock Facebook screenshot below, Jerry uploads a perturbed image that causes the ad-blocker to block Tom's content instead:

We also show how to evade and detect such ad-blockers. The GIF below shows the ad-blocker locating ads in a New York Times article (left), and an attack (right) where the web publisher adds an adversarial transparent overlay over the page to evade ad-blocking.
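Mechanically, such an overlay is just a standard semi-transparent element alpha-composited over the rendered page; the adversarial pattern itself has to be optimized against the target model (not shown here). A minimal numpy sketch of the compositing step, assuming float images in [0, 1]:

```python
import numpy as np

def composite(page, overlay, alpha):
    """Standard alpha blending of an overlay onto a page screenshot.
    page, overlay: (H, W, 3) float arrays in [0, 1]; alpha: (H, W, 1) opacity."""
    return (1.0 - alpha) * page + alpha * overlay

page = np.full((4, 4, 3), 0.5)     # stand-in for a rendered page
overlay = np.ones((4, 4, 3))       # hypothetical adversarial pattern
alpha = np.full((4, 4, 1), 0.02)   # 2% opacity: barely visible to users

out = composite(page, overlay, alpha)
# a low-opacity overlay changes each pixel by at most alpha per channel,
# yet that small, page-wide perturbation is what the attack optimizes over
assert np.abs(out - page).max() <= 0.02 + 1e-9
```

The design point: because the perturbation budget per pixel is bounded by the overlay's opacity, the attack stays imperceptible to users while still spanning the whole input the classifier sees.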


Our attacks and evaluations use Python 3. The main requirements are OpenCV (version 3.4.1), TensorFlow, and Keras. All requirements can be installed by running

pip install -r requirements.txt

Training, Evaluating and Attacking Perceptual Ad-Blockers

Pre-trained models, as well as the data used for training and for evaluating attacks, can be found here. The data is expected to be placed under data and the pre-trained models under models.
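Concretely, after downloading, the repository root should contain the two folders named above (folder names from this README; their contents come from the download):

```shell
# create the expected layout; unpack the downloaded archives inside
mkdir -p data models
ls -d data models
```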

The subdirectory element-frame-based contains implementations and attacks for what we call "element-based" and "frame-based" perceptual ad-blockers. These do not operate over full rendered webpages (as Sentinel does) but instead first segment a webpage into smaller fragments that are then classified. See the README for detailed information.

The subdirectory page-based contains our implementation of a "page-based" perceptual ad-blocker similar to Sentinel, which we trained to locate ads on arbitrary websites. A video demonstrating it in action can be found here. See the README for detailed information on training, evaluating and attacking this model.