
CheckFace

Putting a face to a hash

Winner of Facebook Hack Melbourne 2019


Who uses checksums? We all know we should.

A range of underused tools exist for verifying file integrity, but they suffer from poor adoption, are difficult to use, and aren't human-friendly. Humans are inherently good at remembering interesting information, be it stories or people, and generally benefit from context. Most humans can also remember faces extremely well, with many of us experiencing false positives, or pareidolia: seeing faces in inanimate objects.
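For context, computing a file checksum is a one-liner's worth of work with Python's standard hashlib module (a generic illustration, not code from this repository):

```python
import hashlib

def file_checksum(path, algorithm="sha256", chunk_size=65536):
    """Compute a file's checksum without loading it all into memory."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # Read in fixed-size chunks so large files don't exhaust RAM
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()
```

Two copies of the same file yield the same hex digest; any difference indicates corruption or tampering. The hard part is getting a human to actually compare two 64-character strings, which is the problem CheckFace targets.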

With the advent of hyper-realistic generative adversarial networks like Nvidia's StyleGAN, we can generate a face that our brains believe belongs to a real person, make use of that human-hardware-accelerated memorisation, and let people compare hashes they've seen, potentially even weeks apart, with only a few quick glances.
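One way to make the hash-to-face mapping deterministic (a sketch of the general idea, not the actual code in src/server) is to seed a random number generator from the hash and draw a StyleGAN latent vector from it, so the same hash always produces the same face:

```python
import hashlib
import numpy as np

LATENT_SIZE = 512  # StyleGAN's default latent dimensionality

def latent_from_hash(hex_digest: str) -> np.ndarray:
    """Deterministically map a hex hash to a latent vector.

    The first 4 bytes of the digest seed NumPy's RNG, so equal
    hashes always yield identical latents (and identical faces).
    """
    seed = int.from_bytes(bytes.fromhex(hex_digest)[:4], "big")
    rng = np.random.RandomState(seed)
    return rng.randn(1, LATENT_SIZE)
```

The latent vector would then be fed to the StyleGAN generator to render the image, so comparing two faces amounts to comparing two hashes.
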

CheckFace Face
This generated face is an example of what you could expect to see next to your file's checksum or your git commit sha.

How to use CheckFace

First, use the Chrome Extension to generate the face for the hash in a web environment, as shown
Download Checkface

Once downloaded, verify the CheckFace by using the context-menu extension to generate another CheckFace, as shown below
Download Checkface

You'll know at a glance whether they're the same. Easy!

Our Stack

  • Nvidia StyleGAN
    • Tensorflow
  • Docker
    • Nvidia Docker runtime
  • Flask
  • GitHub Pages
  • Chrome Web Extension
  • Winforms Application
  • CloudFlare


  • Chrome Extension Context Menu
  • Electron App Context Menu
  • Backend API running a Dockerized Nvidia Stylegan on Flask
  • Project Webpage

Chrome Extension

The /src/extension directory holding the manifest file can be added as an extension in developer mode in its current state.

Open the Extension Management page by navigating to chrome://extensions. The Extension Management page can also be opened by clicking on the Chrome menu, hovering over More Tools then selecting Extensions. Enable Developer Mode by clicking the toggle switch next to Developer mode. Click the LOAD UNPACKED button and select the extension directory.

How to load extension in chrome with developer mode

Load Extension

Ta-da! The extension has been successfully installed. Because no icons were included in the manifest, a generic toolbar icon will be created for the extension.

(Sourced: Chrome Developer)

Windows Explorer File Context Menu

Download and install the latest release. Right click any file and choose from a number of hash algorithms to see its checkface. We recommend using SHA256.

using explorer file context menu windows desktop app

Electron App

At the moment it can only be built and run from source.

Backend API

Request images at

Prerequisites to run the backend server

  • GPU with sufficient VRAM to hold the model
  • Nvidia Docker runtime (only supported on Linux, until HyperV adds GPU passthrough support)

For running a backend we have used an AWS p3 instance on ECS, or a g3s.xlarge via docker-machine for testing.
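A client request then reduces to building an image URL keyed by the hash value. As a sketch only: the endpoint path and parameter names below are placeholders, since the actual API URL is not shown above.

```python
from urllib.parse import urlencode

def checkface_image_url(base_url: str, value: str, dim: int = 300) -> str:
    """Build a hypothetical image-request URL for a given hash value.

    '/api/face', 'value' and 'dim' are illustrative names, not the
    real API surface of this project.
    """
    query = urlencode({"value": value, "dim": dim})
    return f"{base_url.rstrip('/')}/api/face?{query}"
```

The browser extension and desktop apps would simply embed such a URL in an `<img>` tag or image control and let the backend do the heavy lifting.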

Project Webpage

A simple pure-JavaScript Bootstrap webpage. Upload it to anything that serves static files.


Chrome Extension


Windows Desktop Application

Open src/dotnet-windows/checkface-dotnet.sln in Visual Studio.

To use it as an Explorer shell extension, you will need to sign the assembly.

Use SharpShell ServerManager to load the project output checkface-dotnet.dll in a test shell.

Electron App

cd ./src/electron
yarn install
yarn run dev ./

Build installer using

yarn run build

Help is needed to set up auto-updating and registration in the file context menu.

Backend API

We rely on Nvlabs StyleGAN to run our inference, using the default model. First ensure you have

Instructions for installing the

Best practice is to first create a virtualenv, followed by installing the requirements

  1. Run virtualenv venv in the project directory
  2. Activate the venv: ./venv/Scripts/activate.bat on Windows, or source ./venv/bin/activate on Linux/macOS
  3. Install the requirements pip install -r requirements.txt
  4. Run the backend with python src/server/
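The serving side can be pictured as a small Flask app. This is a minimal sketch with the generator stubbed out (the real code in src/server runs StyleGAN inference), and the route and parameter names are assumptions:

```python
import hashlib

from flask import Flask, request

app = Flask(__name__)

def render_face(hex_digest: str) -> bytes:
    """Stub for StyleGAN inference: returns deterministic fake image bytes.

    The real server would map the digest to a latent vector and run
    the generator network instead.
    """
    return b"PNG" + bytes.fromhex(hex_digest)[:8]

@app.route("/api/face")
def face():
    # Normalise arbitrary input (a checksum, a commit sha, any string)
    # to a fixed-length digest before rendering
    value = request.args.get("value", "")
    digest = hashlib.sha256(value.encode()).hexdigest()
    return render_face(digest), 200, {"Content-Type": "image/png"}
```

Because the stub is deterministic, repeated requests for the same value return identical bytes, which is exactly the property the real face generator needs.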

System requirements

All you really need is a CUDA-capable GPU with enough VRAM to load the inference model; this has been tested to work on a GTX 1080 with 8GB of VRAM, with NVIDIA driver 391.35.


Our work is based on a combination of original content and work adapted from Nvidia Labs StyleGAN under the Creative Commons Attribution-NonCommercial 4.0 International License. Anything outside of the src/server dir is original work, and a diff can be used to show the use of the dnnlib and StyleGAN model inside of this directory.

The inference model was trained by Nvidia Labs on the FFHQ dataset; please refer to the Flickr-Faces-HQ repository.
