This project is a hybrid application that converts eye blinks to text. It uses the Face Landmark Detection model from TensorFlow.js to track eye movement, and translates short and long blinks into alphabet characters via Morse code.
Technologies used:
- Frontend: Vue.js (Pinia for state management) + SCSS (BEM).
- ML: TensorFlow.js.
- Hybrid deployment: Monaca.
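The core idea, turning short and long blinks into letters through Morse code, can be sketched roughly like this. The table entries, function names, and duration threshold below are illustrative assumptions, not the project's actual `morseCodeTable.js` or prediction code:

```javascript
// Hypothetical sketch: a partial Morse table mapping dot/dash
// sequences to letters (the real morseCodeTable.js may differ).
const MORSE_TABLE = {
  ".-": "A", "-...": "B", "-.-.": "C", "-..": "D", ".": "E",
  "..-.": "F", "--.": "G", "....": "H", "..": "I", ".---": "J",
  "-.-": "K", ".-..": "L", "--": "M", "-.": "N", "---": "O",
};

// A short blink becomes ".", a long blink becomes "-"; the joined
// sequence is then looked up in the table.
function blinksToLetter(blinkDurationsMs, longThresholdMs = 400) {
  const sequence = blinkDurationsMs
    .map((ms) => (ms >= longThresholdMs ? "-" : "."))
    .join("");
  return MORSE_TABLE[sequence] ?? null;
}

console.log(blinksToLetter([150, 600])); // ".-" prints "A"
```

A 400 ms threshold is just a plausible cut-off between a short and a long blink; any real implementation would tune this against the capture window.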
Preview video: blink_to_text_preview.mp4
There is a tutorial available on Medium: Recognising Eye Blinking With Tensorflow.js
- Download the project.
- Run `npm install` in the directory.
- Run `npm run dev` to start the project.
- If the browser opens the URL `0.0.0.0:8080`, change it to `localhost:8080`.
- Wait until the model loads.
- When you see yourself, click the Start Capturing button.
- You have 7 seconds to blink the sequence you want.
- If the converted letter is wrong, delete it with the Remove Letter button.
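Under the hood, detecting a blink from face landmarks usually comes down to comparing the vertical distance between the eyelid landmarks with the eye's width (an "eye aspect ratio" style check). This is a hedged sketch of that idea; the landmark points and the threshold are assumptions, not the project's actual `blinkPrediction.js` logic:

```javascript
// Illustrative eye-openness check from four landmark points.
// A landmark is assumed to be a {x, y} object in pixel coordinates.
function eyeOpenness(upperLid, lowerLid, leftCorner, rightCorner) {
  const vertical = Math.hypot(upperLid.x - lowerLid.x, upperLid.y - lowerLid.y);
  const horizontal = Math.hypot(
    leftCorner.x - rightCorner.x,
    leftCorner.y - rightCorner.y
  );
  return vertical / horizontal; // small ratio → eye likely closed
}

// Threshold of 0.2 is an illustrative guess, not a tuned value.
function isBlinking(openness, threshold = 0.2) {
  return openness < threshold;
}
```

Timing how long the ratio stays below the threshold is what would separate a short blink from a long one.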
- `blinkPrediction.js`: TensorFlow.js model and prediction logic
- `hybridFunctions.js`: functions for loading the camera in the browser/on mobile
- `morseCodeTable.js`: Morse code dictionary
- `blinkStore.js`: Pinia store containing the state of the app
- `LoadingPage.vue`: first screen the user sees while the model loads
- `MorseCodePage.vue`: helper screen for viewing the Morse code table
- `PredictingPage.vue`: main screen where prediction happens
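To give a feel for the state the Pinia store manages, here is a framework-agnostic sketch of its likely shape. The property and action names are assumptions for illustration, not the real contents of `blinkStore.js`:

```javascript
// Plain-JS sketch of the app state a blink store might hold.
function createBlinkState() {
  return {
    currentSequence: [], // "." / "-" symbols for the letter in progress
    decodedText: "",     // letters confirmed so far

    addSymbol(symbol) {
      this.currentSequence.push(symbol);
    },
    commitLetter(letter) {
      this.decodedText += letter;
      this.currentSequence = [];
    },
    // Backs a "Remove Letter" action: drop the last decoded letter.
    removeLetter() {
      this.decodedText = this.decodedText.slice(0, -1);
    },
  };
}
```

In the actual app this state would live in a Pinia `defineStore` call so every page component shares it reactively.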
There is a webpack bundler setup. It compiles and bundles all front-end resources. You should work only with files located in the `/src` folder. The webpack config is located in `script/webpack.config.js`.
Webpack has a specific way of handling static assets (CSS files, images, audio). You can learn more about the correct way of doing things in the official webpack documentation.
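For context, webpack 5 typically handles such assets through loaders and asset modules. The rules below are a generic, hedged example of that mechanism, not a copy of this project's `script/webpack.config.js`:

```javascript
// Generic webpack 5 rules for static assets (illustrative only).
module.exports = {
  module: {
    rules: [
      // Emit images and audio as separate files and return their URLs.
      { test: /\.(png|jpe?g|gif|svg|mp3|wav)$/i, type: "asset/resource" },
      // Compile SCSS to CSS and inject it into the page at runtime.
      { test: /\.scss$/i, use: ["style-loader", "css-loader", "sass-loader"] },
    ],
  },
};
```

With rules like these, importing an image or `.scss` file from a `.vue` or `.js` module just works, which is why the README points you at the webpack docs rather than manual asset copying.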