Flexatar is our format for storing a 3D model of a human face that can be animated online in the browser, in real time, using WebGL.
To create a basic version of your own flexatar, five photos are enough. Better quality can be achieved with our mobile SDK, which also reconstructs details such as teeth. Additionally, you can create a hybrid flexatar by mixing two or more images into a single object.
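Conceptually, mixing flexatars can be thought of as a weighted blend of their geometry. The sketch below is only an illustration of that idea under assumed names and data layout (flat `Float32Array` vertex buffers); it is not the actual flexatar format or API.

```typescript
// Illustrative only: a "hybrid" as a linear blend of two matching
// vertex buffers. Real flexatar mixing is done by the SDK/service.
function blendVertices(a: Float32Array, b: Float32Array, t: number): Float32Array {
  if (a.length !== b.length) throw new Error("vertex buffers must match");
  const out = new Float32Array(a.length);
  for (let i = 0; i < a.length; i++) {
    out[i] = (1 - t) * a[i] + t * b[i]; // per-coordinate interpolation
  }
  return out;
}

// Blending two single-vertex "models" halfway:
const mixed = blendVertices(
  new Float32Array([0, 0, 0]),
  new Float32Array([2, 4, 6]),
  0.5,
);
// mixed is [1, 2, 3]
```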
A flexatar can also be exported as a conventional .obj file containing the 3D model and its textures.
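For readers unfamiliar with the format, Wavefront .obj is plain text listing vertices and faces. The snippet below is a minimal, hypothetical serializer to show what the exported geometry looks like; the real exporter would additionally reference an .mtl file for the textures.

```typescript
// Minimal sketch of Wavefront .obj serialization (geometry only).
// The Mesh shape here is an assumption for illustration.
interface Mesh {
  vertices: number[][]; // [x, y, z] per vertex
  faces: number[][];    // 1-based vertex indices per face
}

function toObj(mesh: Mesh): string {
  const v = mesh.vertices.map(p => `v ${p.join(" ")}`);
  const f = mesh.faces.map(idx => `f ${idx.join(" ")}`);
  return [...v, ...f].join("\n") + "\n";
}

// A single triangle:
const obj = toObj({
  vertices: [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
  faces: [[1, 2, 3]],
});
// obj is "v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n"
```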
The key advantage of the flexatar technique is that it can be animated directly from the user's microphone audio track, thus acting as a virtual webcam for WebRTC. We plan to contribute integration examples for leading WebRTC SFUs such as Janus and LiveKit. Feel free to suggest candidates.
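The core of audio-driven animation is mapping microphone loudness to animation parameters. The helper below is a hedged sketch of that idea only: a window of samples is reduced to an RMS level and clamped to a mouth-open weight. The function name, the gain constant, and the mapping are illustrative assumptions; the real pipeline runs on the Web Audio API and WebGL in the browser.

```typescript
// Illustrative assumption: drive a "mouth open" blend weight from the
// RMS loudness of one window of microphone samples.
function mouthOpenWeight(samples: Float32Array, gain = 4): number {
  let sum = 0;
  for (const s of samples) sum += s * s;
  const rms = Math.sqrt(sum / samples.length); // loudness of the window
  return Math.min(1, rms * gain);              // clamp to [0, 1]
}

const silent = mouthOpenWeight(new Float32Array(1024));       // silence -> 0
const loud = mouthOpenWeight(new Float32Array(1024).fill(1)); // clamps to 1
```

In a browser, the sample windows would come from `getUserMedia` plus an `AnalyserNode`, and the resulting weight would be fed to the WebGL renderer each frame.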