Merge commit '6bb4f14ccc20f4dd477898104fc16c318161e4a7'
* commit '6bb4f14ccc20f4dd477898104fc16c318161e4a7':
  [FIX] Bug in JeelizThreeJSHelper with Y offset when rotating head around X with different camera aspectRatios - see jeeliz#114
  [QUAL] Add set_videoOrientation method - jeeliz#113
  [FIX] repair toogle_slow() method - jeeliz#112
  [FIX] Correct a bug with MacOSX10.15 (Catalina) beta (set lame WebGL2)
  [DOC] add default alphaRange value in readme
  [FIX] add neural network model to fix this issue: jeeliz#85 , waiting for a real fix of the graphic driver...
ThorstenBux committed Sep 1, 2019
2 parents bd09104 + 6bb4f14 commit c863c1c
Showing 7 changed files with 307 additions and 302 deletions.
14 changes: 9 additions & 5 deletions README.md
@@ -226,7 +226,7 @@ JEEFACEFILTERAPI.init({
//all other settings will be useless
//it means that you fully handle the video aspect

'deviceId' //not set by default
'facingMode': 'user', //to use the rear camera, set to 'environment'

'idealWidth': 800, //ideal video width in pixels
@@ -235,7 +235,8 @@
'maxWidth': 1280, //max video width in pixels
'minHeight': 480, //min video height in pixels
'maxHeight': 1280, //max video height in pixels,
'rotate': 0, //rotation in degrees. Possible values: 0, 90, -90, 180
'flipX': false //whether to flip the video horizontally. Default: false
},
```
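As an illustration, a few of these video settings could be overridden at initialization time like this (the values below are examples, not defaults, and `videoSettings` is assumed to be the key accepted by `JEEFACEFILTERAPI.init`):

```javascript
// Illustrative sketch: overriding some video settings at init time.
// All parameter values here are examples only.
JEEFACEFILTERAPI.init({
  canvasId: 'jeeFaceFilterCanvas',
  videoSettings: {
    'facingMode': 'environment', // use the rear camera
    'idealWidth': 800,
    'idealHeight': 600,
    'rotate': 90,
    'flipX': true
  },
  callbackReady: function(errCode, spec){
    if (errCode){ console.log('Error:', errCode); return; }
    // start the experience here...
  }
});
```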
* `<dict> scanSettings`: override face scan settings - see `set_scanSettings(...)` method for more information.
@@ -286,7 +287,7 @@ After the initialization (ie after that `callbackReady` is launched ) , these me

* `JEEFACEFILTERAPI.resize()`: should be called after resizing the `<canvas>` element to adapt the video cropping,
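For example, a minimal sketch of wiring this to window resize events (assuming the canvas is resized elsewhere in the handler):

```javascript
// Keep the video crop in sync with the canvas size:
window.addEventListener('resize', function(){
  // ...resize the <canvas> element here, then:
  JEEFACEFILTERAPI.resize();
});
```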

* `JEEFACEFILTERAPI.toggle_pause(<boolean> isPause)`: pause/resume. This method completely stops the rendering/detection loop,

* `JEEFACEFILTERAPI.toggle_slow(<boolean> isSlow)`: toggle the slow rendering mode: because this API consumes a lot of GPU resources, it may slow down other elements of the application. If the user opens a CSS menu for example, the CSS transitions and the DOM updates can be slow. With this function you can slow down the rendering in order to relieve the GPU. Unfortunately the tracking and the 3D rendering will also be slower, but this is not a problem if the user is focusing on other elements of the application. We encourage you to enable the slow mode as soon as the user's attention moves to a different part of the canvas,
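For instance, the slow mode could be toggled around a menu interaction like this (the `menu` element id is hypothetical):

```javascript
// Relieve the GPU while the user interacts with a CSS menu:
var menu = document.getElementById('menu'); // hypothetical element id
menu.addEventListener('mouseenter', function(){
  JEEFACEFILTERAPI.toggle_slow(true);  // slow down rendering and tracking
});
menu.addEventListener('mouseleave', function(){
  JEEFACEFILTERAPI.toggle_slow(false); // restore full speed
});
```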

@@ -314,10 +315,12 @@ After the initialization (ie after that `callbackReady` is launched ) , these me
* `[<float> minValue, <float> maxValue] translationFactorRange`: multiply `k` by a factor `kTranslation` depending on the translation speed of the head (relative to the viewport). `kTranslation=0` if `translationSpeed<minValue` and `kTranslation=1` if `translationSpeed>maxValue`. The regression is linear. Default value: `[0.0015, 0.005]`,
* `[<float> minValue, <float> maxValue] rotationFactorRange`: analogous to `translationFactorRange` but for rotation speed. Default value: `[0.003, 0.02]`,
* `[<float> minValue, <float> maxValue] qualityFactorRange`: analogous to `translationFactorRange` but for the head detection coefficient. Default value: `[0.9, 0.98]`,
* `[<float> minValue, <float> maxValue] alphaRange`: specifies how `k` is applied. Between 2 successive detections, we blend the previous `detectState` values with the current detection values using a mixing factor `alpha`. `alpha=<minValue>` if `k<0.0` and `alpha=<maxValue>` if `k>1.0`. Between the 2 values, the variation is quadratic. Default value: `[0.05, 1]`.
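The blending described above can be sketched as follows (the function names are hypothetical; the real implementation is internal to the API):

```javascript
// Map the stabilization coefficient k to a mixing factor alpha,
// clamped to [minAlpha, maxAlpha] with a quadratic variation in between:
function computeAlpha(k, alphaRange) {
  const [minAlpha, maxAlpha] = alphaRange;
  if (k < 0.0) return minAlpha;
  if (k > 1.0) return maxAlpha;
  return minAlpha + (maxAlpha - minAlpha) * k * k;
}

// Blend the previous detectState with the current detection values:
function blendDetectState(prevState, currState, alpha) {
  const blended = {};
  for (const key of Object.keys(currState)) {
    blended[key] = (1 - alpha) * prevState[key] + alpha * currState[key];
  }
  return blended;
}

// Example with the default alphaRange [0.05, 1]:
const alpha = computeAlpha(0.5, [0.05, 1]); // 0.05 + 0.95*0.25 = 0.2875
const state = blendDetectState({ x: 0 }, { x: 1 }, alpha);
```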

* `JEEFACEFILTERAPI.update_videoElement(<video> vid, <function|False> callback)`: change the video element used for the face detection (which can be provided via `VIDEOSETTINGS.videoElement`) to another video element. A callback function can be called when the switch is done.

* `JEEFACEFILTERAPI.set_videoOrientation(<integer> angle, <boolean> flipX)`: Dynamically change `videoSettings.rotate` and `videoSettings.flipX`. This method should be called after initialization. The default values are `0` and `false`. The angle should be chosen among these values: `0, 90, 180, -90`.
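For example (assuming initialization has completed):

```javascript
// Switch to a video feed rotated by 90 degrees and mirrored horizontally:
JEEFACEFILTERAPI.set_videoOrientation(90, true);

// Restore the default orientation:
JEEFACEFILTERAPI.set_videoOrientation(0, false);
```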


### Optimization

@@ -383,7 +386,8 @@ We provide several neural network models:
* `dist/NNClight.json`: this is a light version of the neural network. The file is half the size and it runs faster, but it is less accurate for large head rotation angles,
* `dist/NNCveryLight.json`: even lighter than the previous version: 250KB, and very fast. But it is not very accurate and not robust to all lighting conditions,
* `dist/NNCviewTop.json`: this neural net is perfect if the camera has a bird's eye view (if you use this library for a kiosk setup for example),
* `dist/NNCdeprecated.json`: this is a deprecated version of the neural network (since 2018-07-25),
* `dist/NNCIntel1536.json`: a neural network working on Intel 1536 Iris GPUs, which have a graphic driver bug (see [#85](https://github.com/jeeliz/jeelizFaceFilter/issues/85)).


### Using the ES6 module
100 changes: 49 additions & 51 deletions demos/threejs/VTO/demo.js
@@ -4,74 +4,72 @@ let THREECAMERA;

// callback : launched if a face is detected or lost. TODO : add a cool particle effect WoW !
function detect_callback(faceIndex, isDetected) {
  if (isDetected) {
    console.log('INFO in detect_callback() : DETECTED');
  } else {
    console.log('INFO in detect_callback() : LOST');
  }
}

// build the 3D. called once when Jeeliz Face Filter is OK
function init_threeScene(spec) {
  const threeStuffs = THREE.JeelizHelper.init(spec, detect_callback);

  // CREATE THE GLASSES AND ADD THEM
  const r = JeelizThreeGlassesCreator({
    envMapURL: "envMap.jpg",
    frameMeshURL: "models3D/glassesFramesBranchesBent.json",
    lensesMeshURL: "models3D/glassesLenses.json",
    occluderURL: "models3D/face.json"
  });

  threeStuffs.faceObject.add(r.occluder);
  r.occluder.rotation.set(0.3, 0, 0);
  r.occluder.position.set(0, 0.1, -0.04);
  r.occluder.scale.multiplyScalar(0.0084);

  const threeGlasses = r.glasses;
  //threeGlasses.rotation.set(-0.15, 0, 0); // negative X -> rotate the branches down
  threeGlasses.position.set(0, 0.07, 0.4);
  threeGlasses.scale.multiplyScalar(0.006);
  threeStuffs.faceObject.add(threeGlasses);


  // CREATE THE CAMERA
  const aspectRatio = spec.canvasElement.width / spec.canvasElement.height;
  THREECAMERA = new THREE.PerspectiveCamera(50, aspectRatio, 0.1, 100);
} // end init_threeScene()

//launched by body.onload() :
function main(){
  JeelizResizer.size_canvas({
    canvasId: 'jeeFaceFilterCanvas',
    callback: function(isError, bestVideoSettings){
      init_faceFilter(bestVideoSettings);
    }
  });
} //end main()

function init_faceFilter(videoSettings){
  JEEFACEFILTERAPI.init({
    followZRot: true,
    canvasId: 'jeeFaceFilterCanvas',
    NNCpath: '../../../dist/', // root of the NNC.json file
    maxFacesDetected: 1,
    callbackReady: function(errCode, spec){
      if (errCode){
        console.log('AN ERROR HAPPENED. ERR =', errCode);
        return;
      }

      console.log('INFO : JEEFACEFILTERAPI IS READY');
      init_threeScene(spec);
    }, //end callbackReady()

    //called at each render iteration (drawing loop):
    callbackTrack: function(detectState){
      THREE.JeelizHelper.render(detectState, THREECAMERA);
    } //end callbackTrack()
  }); //end JEEFACEFILTERAPI.init call
} // end init_faceFilter()

2 changes: 1 addition & 1 deletion dist/NNC.json


1 change: 1 addition & 0 deletions dist/NNCIntel1536.json


