STEALING UR FEELINGS
Stealing Ur Feelings is a web-based interactive documentary that reveals how Snapchat can use your face to secretly collect data about your emotions.
Using a combination of filmed content, augmented reality and game mechanics, we'll explore the wild science of machine learning-based facial feature tracking, demystify the algorithms that determine if you're happy or sad, and show you how corporations can correlate your emotions with the content you consume to do some Not Very Nice Things.
We're gonna have a slapping soundtrack, too.
Stealing Ur Feelings began life as an application for Mozilla's 2018 awards for art and advocacy exploring artificial intelligence. This repository is a living open workspace. Check back often for updates!
👀 check these out first
- interactive tech demo (requires a computer with a webcam) ⬅️ ⬅️ ⬅️
- wireframe mockups
- initial funding concept
- full application (coming soon)
- slides from the 10/24/2018 Pecha Kucha talk at London's Royal Society of Arts
- film script (coming soon)
📣 updates
10/24/2018 We won
07/12/2018 Submitted full application to Mozilla
07/08/2018 The interactive tech demo is live!
07/06/2018 Registered domain name: stealingurfeelin.gs
06/29/2018 Original funding concept accepted by Mozilla - we've been invited to submit a full application!
💪 challenges + approach
A project like Stealing Ur Feelings presents a unique set of creative and engineering challenges. This section describes our solutions and techniques.
Facial landmark detection and emotion recognition
To create our Snapchat-style AR filter and perform emotion recognition in the browser, we'll need to implement Constrained Local Models, a recent breakthrough in the field of machine learning-based computer vision. Though we may ultimately write our own implementation (as we've done previously with the Viola-Jones framework), there are some open source libraries that seem very promising. Two of the most popular such libraries are clmtrackr and Dlib. We used clmtrackr for our interactive tech demo and the results were excellent.
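Here's a minimal sketch of what the browser-side tracking loop can look like with clmtrackr. The `clm` global comes from loading the library via a script tag; the hand-written type declaration, element IDs, and overlay logic are our own illustration, not part of the library.

```typescript
// Minimal sketch: browser-side facial landmark tracking with clmtrackr.
// Assumes clmtrackr is loaded via a <script> tag, exposing a global `clm`
// object; this type declaration is hand-written for the sketch.
declare const clm: {
  tracker: new () => {
    init(): void;
    start(video: HTMLVideoElement): void;
    getCurrentPosition(): number[][] | false;
    draw(canvas: HTMLCanvasElement): void;
  };
};

const video = document.getElementById("webcam") as HTMLVideoElement;
const overlay = document.getElementById("overlay") as HTMLCanvasElement;
const ctx = overlay.getContext("2d")!;

const ctrack = new clm.tracker();
ctrack.init();

// Ask for the webcam, wire it to the <video> element, and start fitting
// the Constrained Local Model to the live feed.
navigator.mediaDevices.getUserMedia({ video: true }).then((stream) => {
  video.srcObject = stream;
  video.play();
  ctrack.start(video);
  requestAnimationFrame(drawLoop);
});

function drawLoop() {
  ctx.clearRect(0, 0, overlay.width, overlay.height);
  // getCurrentPosition() returns ~70 [x, y] landmark coordinates,
  // or false while the model hasn't converged on a face yet.
  const positions = ctrack.getCurrentPosition();
  if (positions) {
    ctrack.draw(overlay); // debug visualization of the fitted model
    // An AR filter or emotion classifier would consume `positions` here.
  }
  requestAnimationFrame(drawLoop);
}
```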
Frame-accurate video sync
The web video API has no method for accurately reporting the current frame position of a video element, which makes it difficult to create synchronized keyframe events. Mathematical workarounds exist (e.g., deriving the frame number from `currentTime` multiplied by the frame rate), but they desynchronize due to floating point rounding errors. Instead, we'll use the optical framecode system developed for our previous interactive film, Weird Box: we embed tiny barcode-like images directly in the video, each encoding the binary representation of its frame number.
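To make the decoding concrete, here's an illustrative sketch. It assumes a specific geometry — a horizontal strip of black/white cells across the top edge of the frame, one cell per bit, most significant bit first — which may differ from the actual Weird Box layout.

```typescript
// Illustrative sketch of an optical framecode decoder, assuming the video
// embeds a strip of FRAMECODE_BITS black/white cells along its top edge,
// most significant bit first. The real framecode geometry may differ.
const FRAMECODE_BITS = 16; // per the hit list below: 16-bit frame numbers

function decodeFramecode(
  video: HTMLVideoElement,
  scratch: CanvasRenderingContext2D
): number {
  // Draw the current frame onto a scratch canvas, then read back just the
  // top pixel row, where the barcode strip lives. getImageData hands back
  // a Uint8ClampedArray of RGBA bytes.
  scratch.drawImage(video, 0, 0);
  const row = scratch.getImageData(0, 0, video.videoWidth, 1).data;

  const cellWidth = video.videoWidth / FRAMECODE_BITS;
  let frame = 0;
  for (let bit = 0; bit < FRAMECODE_BITS; bit++) {
    // Sample one pixel from the center of each cell and threshold its
    // red channel: bright cell = 1 bit, dark cell = 0 bit.
    const x = Math.floor(bit * cellWidth + cellWidth / 2);
    frame = (frame << 1) | (row[x * 4] > 127 ? 1 : 0);
  }
  return frame;
}
```

Conveniently, `getImageData` already returns its pixels as a `Uint8ClampedArray`, which lines up with the typed-array cleanup on the hit list below.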
🚧 engineering hit list
- run a Dlib wasm experiment
- reduce framecode bit depth to 16 bits
- make the framecode system use typed arrays instead of hacky integer bit parsing
- try a functional keyframe events system? (see the sketch after this list)
- make everything go full screen and responsive
- mobile benchmarking and optimization
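For the functional keyframe events idea, here's one possible shape, sketched under our own assumptions: pure handlers registered against frame numbers, driven by the frame number decoded from the optical framecode. The names and API here are hypothetical.

```typescript
// Hypothetical sketch of a functional keyframe events system: handlers
// registered against frame numbers, fired as decoded framecodes arrive.
// Names and API shape are illustrative, not the project's.
type KeyframeHandler = (frame: number) => void;

function makeKeyframeDispatcher(handlers: ReadonlyMap<number, KeyframeHandler>) {
  let lastFrame = -1;
  // Call this once per rendered frame with the decoded frame number.
  return (frame: number) => {
    if (frame === lastFrame) return; // same frame, nothing new to fire
    // Fire every keyframe passed since the last decode, so handlers
    // aren't dropped when rendering skips frames.
    for (let f = lastFrame + 1; f <= frame; f++) {
      handlers.get(f)?.(f);
    }
    lastFrame = frame;
  };
}

// Usage: register keyframe callbacks, then drive the dispatcher from the
// render loop, e.g. dispatch(decodeFramecode(video, scratchCtx)).
const dispatch = makeKeyframeDispatcher(
  new Map([
    [120, () => console.log("start AR filter")],
    [480, () => console.log("show quiz overlay")],
  ])
);
```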
📝 todo
- submit initial funding concept
- create wireframe mockups
- make a tech demo
- submit full grant application
- write/finalize film script
- domain name exploration + registration
- film optical framecode pipeline
- produce AR filter gfx assets
- user testing, feedback + iteration
- produce final film titles + gfx assets
- film VO record/final audio mix
- film final color grade
- film final conform
- user acceptance testing
- release strategy/marketing/festival + awards submissions