A simple library for creating ventriloquist dummy effects in canvas – because that's what everyone needs, right?
Simply define an image, a “mouth region” and where the “mouth” should move to, then bind to an HTML input or a Web Audio API MediaStreamAudioSourceNode and watch the magic happen.
Works best when connected to a live microphone feed using WebRTC.
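Conceptually, each input sample just drives how far the “mouth” region is translated on the canvas. A minimal sketch of that mapping (the function and parameter names below are mine for illustration, not the library’s actual API):

```javascript
// Map an input level in [0, max] to a mouth offset in pixels.
// "travel" is a hypothetical name for the distance the mouth region
// moves when fully open; the clamp keeps out-of-range input sane.
function mouthOffset(level, max, travel) {
  const clamped = Math.min(Math.max(level, 0), max);
  return (clamped / max) * travel;
}

mouthOffset(50, 100, 30); // a half-open slider moves the mouth 15px
```

The same function works whether the level comes from an HTML range input or from sampling a Web Audio analyser on each animation frame.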
I built this for a presentation at SydJS (thus it has limited browser support) and do not really expect it to have much use in the wild. If you can find a practical use for it, let me know.
NOTE: I built these demos to work in Chrome only, as it was the only browser at the time that supported both the Web Audio API and getting a microphone stream via navigator.getUserMedia(). The code has since been updated to use the non-prefixed APIs, so it should work in any browser that supports them.
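The live-microphone wiring is the standard getUserMedia → MediaStreamAudioSourceNode → AnalyserNode graph from the non-prefixed Web Audio API. A sketch (`createMicAnalyser` and its `win` parameter are my own names, not part of this library):

```javascript
// Wire a live microphone feed into an AnalyserNode whose levels can
// drive the mouth animation. Assumes the non-prefixed APIs:
// navigator.mediaDevices.getUserMedia() and AudioContext.
async function createMicAnalyser(win = window) {
  const stream = await win.navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new win.AudioContext();
  // createMediaStreamSource() returns a MediaStreamAudioSourceNode
  const source = ctx.createMediaStreamSource(stream);
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;
  source.connect(analyser);
  return analyser;
}
```

Calling `analyser.getByteTimeDomainData()` on each animation frame then gives you a rough volume level to feed into the mouth position.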
- Basic demo - hard-coded image and mouth co-ordinates
- Full demo - upload an image and draw the mouth region yourself
Erm, there isn’t any proper documentation right now, because I threw this together in a hurry. See also my lack of faith in this having any practical use.
However, I’ve thrown together a basic in-code API usage example in the
api-demo.js file of this repository.
The canvas drawing code and HTML input binding should be supported in any decent modern browser that supports ECMAScript 5 (though I haven’t actually tested this theory, so there might be some surprises).
At the time this was written, binding to a live microphone stream was only available in Chrome 24+ with the Web Audio Input flag enabled in chrome://flags. Firefox 18+ allowed microphone access, but its support for the Web Audio API was still a work in progress.
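If you’d rather degrade gracefully than assume a particular browser, a simple feature check for the non-prefixed APIs might look like this (`supportsLiveInput` is a hypothetical helper, not part of the library):

```javascript
// Return true if the environment exposes both the non-prefixed
// Web Audio API and mediaDevices.getUserMedia(). Taking the global
// object as a parameter keeps this easy to test.
function supportsLiveInput(win) {
  return Boolean(
    typeof win.AudioContext === 'function' &&
    win.navigator &&
    win.navigator.mediaDevices &&
    typeof win.navigator.mediaDevices.getUserMedia === 'function'
  );
}

// In a page: fall back to the HTML input binding when unsupported.
// if (!supportsLiveInput(window)) { /* bind to a slider instead */ }
```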