

# Fuzzing with AFL workshop

Materials for the "Fuzzing with AFL" workshop by Michael Macnair (@michael_macnair): a set of exercises for learning to fuzz with American Fuzzy Lop (AFL).

The first public version of this workshop was presented at SteelCon 2017.


## Prerequisites

- 3-4 hours (more to complete all the challenges)
- A Linux machine
- Basic C and command-line experience: the ability to modify and compile C programs
- Docker, or the dependencies described in quickstart


## Contents

- quickstart - Do this first! A tiny sample program to get started with fuzzing, including instructions on how to set up your machine.
- harness - The basics of creating a test harness. Do this if you have any doubts about the "plumbing" between afl-fuzz and the target code.
- challenges - A set of known-vulnerable programs with fuzzing hints.
- docker - Instructions and a Dockerfile for preparing a suitable environment, and for hosting it on AWS if you wish.

See the other READMEs for more information.


## Challenges

Challenges, roughly in recommended order, with the specific aspects each covers:

- sendmail/1301 - parallel fuzzing
- heartbleed - fuzzing with ASAN
- date - fuzzing environment variable input
- ntpq - fuzzing a network client; coverage analysis and increasing coverage
- cyber-grand-challenge - an easy vulnerability, and an example of one that is hard to find with afl
- sendmail/1305 - persistent mode difficulties

Each challenge has HINTS.md and ANSWERS.md files; these contain useful information about fuzzing different kinds of target, even if you don't attempt the challenge itself.

All of the challenges use real vulnerabilities from open source projects (the CVEs are identified in the descriptions), with the exception of the Cyber Grand Challenge extract, which is a synthetic vulnerability.


## Slides

Published via Google Docs. There is extra information in the speaker notes (Options / Open speaker notes).