Hello,

I'm working on repurposing the emotion detection demo into a Nuxt (Vue.js framework) project. I get an initial error I'm unsure about:

Furthermore, `ctrack.getCurrentPosition()` always returns false for me, and the results from `meanPredict` are always non-zero. What am I doing wrong here? Here is my component:
```vue
<template>
  <section class="section">
    <video playsinline muted width="700" height="500" ref="userMediaVideo" id="video"></video>
    <canvas ref="canvasEl" id="canvas" width="700" height="500"></canvas>
    <div id="emotions">
      <p>Angry: <span ref="angry">0</span></p>
      <p>Disgusted: <span ref="disgusted">0</span></p>
      <p>Fear: <span ref="fear">0</span></p>
      <p>Sad: <span ref="sad">0</span></p>
      <p>Surprised: <span ref="surprised">0</span></p>
      <p>Happy: <span ref="happy">0</span></p>
    </div>
  </section>
</template>

<script>
export default {
  mounted() {
    const video = this.$refs.userMediaVideo;
    const canvasEl = this.$refs.canvasEl;
    const videoWidth = video.width;
    const videoHeight = video.height;
    const context = canvasEl.getContext("2d");
    const constraints = (window.constraints = {
      audio: false,
      video: { facingMode: "user" }
    });
    const ctrack = new clm.tracker({ useWebGL: true });
    const classifier = new emotionClassifier();
    const emotionData = classifier.getBlank();
    let trackingStarted = false;

    // set eigenvectors 9 and 11 to not be regularized, to better detect motion of the eyebrows
    pModel.shapeModel.nonRegularizedVectors.push(9);
    pModel.shapeModel.nonRegularizedVectors.push(11);

    // init clmtrackr with the face model (static/libs/clmtrackr/model_pca_20_svm.js)
    ctrack.init(pModel);
    // initialize the classifier with the emotion model (static/libs/clmtrackr/emotionmodel.js)
    classifier.init(emotionModel);

    // try connecting to the webcam
    try {
      compatibility.getUserMedia(
        constraints,
        stream => {
          // set video source
          try {
            video.srcObject = compatibility.URL.createObjectURL(stream);
          } catch (error) {
            video.srcObject = stream;
          }
          // start tracking
          ctrack.start(video);
          trackingStarted = true;
          // requestAnimationFrame loop to draw the face and predict emotion
          compatibility.requestAnimationFrame(play);
        },
        error => {
          alert("WebRTC not available");
        }
      );
    } catch (error) {
      alert(error);
    }

    const play = () => {
      compatibility.requestAnimationFrame(play);
      context.clearRect(0, 0, videoWidth, videoHeight);
      if (video.paused) {
        video.play();
      }
      // draw video frames on canvas
      if (video.readyState === video.HAVE_ENOUGH_DATA && video.videoWidth > 0) {
        canvasEl.width = video.videoWidth;
        canvasEl.height = video.videoHeight;
        context.drawImage(video, 0, 0, canvasEl.clientWidth, canvasEl.clientHeight);
        console.log(ctrack.getCurrentPosition());
        // if we have a current position, draw the tracked face
        if (ctrack.getCurrentPosition()) {
          ctrack.draw(canvasEl);
        }
        const cp = ctrack.getCurrentParameters();
        const er = classifier.meanPredict(cp);
        if (er) {
          this.$refs.angry.innerText = `${er[0].value}`;
          this.$refs.disgusted.innerText = `${er[1].value}`;
          this.$refs.fear.innerText = `${er[2].value}`;
          this.$refs.sad.innerText = `${er[3].value}`;
          this.$refs.surprised.innerText = `${er[4].value}`;
          this.$refs.happy.innerText = `${er[5].value}`;
        }
      }
    };
  }
};
</script>

<style scoped>
.section {
  position: relative;
  height: 100vh;
  width: 100vw;
}
#video,
#canvas {
  position: absolute;
  left: 0;
  top: 0;
  height: 500px;
  width: 700px;
  bottom: 0;
  margin: auto;
  margin-left: 20px;
  border: 1px solid black;
}
#video {
  opacity: 0;
}
#emotions {
  position: absolute;
  top: 0;
  bottom: 0;
  right: 0;
  margin: auto;
  height: 170px;
  width: 320px;
}
</style>
```
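For reference, this is my understanding of how the tracking/prediction loop is supposed to behave, based on the original clmtrackr emotion detection example. This is only a minimal sketch, assuming `clm`, `pModel`, `emotionClassifier`, and `emotionModel` are already available as globals from the clmtrackr scripts, and that `video` and `canvas` are existing DOM elements:

```js
// Minimal sketch of the expected clmtrackr emotion loop (not my actual component).
// Assumes clmtrackr.js, model_pca_20_svm.js and emotionmodel.js are loaded globally.
const ctrack = new clm.tracker({ useWebGL: true });
ctrack.init(pModel);

const classifier = new emotionClassifier();
classifier.init(emotionModel);

ctrack.start(video);

function loop() {
  requestAnimationFrame(loop);

  // getCurrentPosition() returns false until the tracker has converged on a face,
  // after which it returns an array of [x, y] landmark positions.
  const positions = ctrack.getCurrentPosition();
  if (positions) {
    ctrack.draw(canvas);
  }

  // meanPredict() returns false until it has enough parameter history, after which
  // it returns an array of { emotion, value } objects (values roughly between 0 and 1).
  const prediction = classifier.meanPredict(ctrack.getCurrentParameters());
  if (prediction) {
    console.log(prediction);
  }
}

requestAnimationFrame(loop);
```

If that mental model of the return values is wrong, that may be part of my confusion, so corrections are welcome.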