A simple implementation of Google's Quick, Draw Project for humans.

Quick, Draw

Can a neural network learn to recognize doodling?

Code Requirements

You can install Conda for Python, which resolves all of the machine learning dependencies, and then install the project requirements:

pip install -r requirements.txt


Quick, Draw! is an online game developed by Google that challenges players to draw a picture of an object or idea and then uses a neural network artificial intelligence to guess what the drawings represent. The AI learns from each drawing, increasing its ability to guess correctly in the future. The game is similar to Pictionary in that the player only has a limited time to draw (20 seconds). The concepts that it guesses can be simple, like 'foot', or more complicated, like 'animal migration'. This game is one of many simple AI-based games created by Google as part of a project known as 'A.I. Experiments'.


Follow the documentation here to get the dataset. I got .npy files from Google Cloud for 15 drawings.

  1. Apple 🍎
  2. Bowtie 🎀
  3. Candle 🕯️
  4. Door 🚪
  5. Envelope ✉️
  6. Fish 🐟
  7. Guitar 🎸
  8. Ice Cream 🍦
  9. Lightning ⚡
  10. Moon 🌛
  11. Mountain 🗻
  12. Star ⭐️
  13. Tent ⛺️
  14. Toothbrush 🪥
  15. Wristwatch ⌚️
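As a sketch of what the downloaded files contain: each `.npy` file holds one drawing per row as a flattened 28×28 grayscale bitmap, which you reshape before feeding a CNN. The synthetic data below stands in for a real file such as `data/apple.npy` (a minimal illustration, not the repo's loading code):

```python
import numpy as np

# Each Quick, Draw .npy file stores one drawing per row as a
# flattened 28x28 grayscale bitmap (784 values in 0-255).
# Simulated here; with the real dataset you would instead do:
#   drawings = np.load("data/apple.npy")
drawings = np.random.randint(0, 256, size=(1000, 784), dtype=np.uint8)

# Reshape into image form for a CNN: (samples, 28, 28, 1), scaled to [0, 1].
images = drawings.reshape(-1, 28, 28, 1).astype("float32") / 255.0
```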

Python Implementation

  1. Network Used: Convolutional Neural Network
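The exact architecture lives in the repository's model code; a minimal Keras CNN for 28×28 Quick, Draw bitmaps with 15 classes might look like the sketch below (layer sizes and hyperparameters are assumptions, not the repo's actual values):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

def build_model(num_classes=15):
    # Small CNN for 28x28x1 grayscale doodles.
    model = Sequential([
        Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
        MaxPooling2D((2, 2)),
        Conv2D(64, (3, 3), activation="relu"),
        MaxPooling2D((2, 2)),
        Flatten(),
        Dense(128, activation="relu"),
        Dropout(0.5),                               # regularization
        Dense(num_classes, activation="softmax"),   # one unit per drawing class
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_model()
```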

If you face any problem, kindly raise an issue.


  1. Get the dataset as mentioned above and place the .npy files in the /data folder.
  2. First, run the data-loading script, which will load the data from the /data folder and store the features and labels in pickle files.
  3. Once you have the data, run the training script, which will load the data from the pickle files and augment it. After this, the training process begins.
  4. Once the model is trained, run the application script, which will use the webcam to capture what you have drawn.
  5. For altering the model, check the model definition.
  6. For TensorBoard visualization, go to the specific log directory and run this command: tensorboard --logdir=. You can then go to localhost:6006 to visualize your loss function and accuracy.
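The load-and-pickle step above can be sketched as follows; the file names, function name, and pickle layout here are assumptions for illustration, not the repository's actual script:

```python
import os
import pickle
import numpy as np

def load_and_pickle(data_dir="data",
                    features_path="features.pickle",
                    labels_path="labels.pickle"):
    """Stack every .npy file in data_dir, assign integer labels by
    filename order, and pickle the features and labels."""
    features, labels = [], []
    npy_files = sorted(f for f in os.listdir(data_dir) if f.endswith(".npy"))
    for label, fname in enumerate(npy_files):
        drawings = np.load(os.path.join(data_dir, fname))
        features.append(drawings)
        labels.append(np.full(len(drawings), label))
    features = np.concatenate(features)
    labels = np.concatenate(labels)
    with open(features_path, "wb") as f:
        pickle.dump(features, f)
    with open(labels_path, "wb") as f:
        pickle.dump(labels, f)
    return features.shape, labels.shape
```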

Merged to Google's git repo

See the pull request here

