- Create two services at IBM Watson: Speech-to-Text & Tone Analyzer.
- Create `config.js` in the `server` directory and put your credentials in `config.js` as follows.
```js
const config = {
  'speech_to_text': [{
    'credentials': {
      'url': '',
      'iam_apikey': ''
    }
  }],
  'tone_analyzer': [{
    'credentials': {
      'url': '',
      'iam_apikey': ''
    }
  }]
};

module.exports = config;
```
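As a sketch of how the server side might consume this shape (the helper below is hypothetical, not code from the repo): each service key holds an array of credential objects, so the first entry is the one to read.

```javascript
// Hypothetical helper (not part of the repo): pull the first credentials
// entry for a given Watson service out of the config shape above.
// In the server you would load the real file with `require('./config')`.
function getCredentials(config, service) {
  const entries = config[service];
  if (!entries || entries.length === 0) {
    throw new Error(`No credentials configured for ${service}`);
  }
  return entries[0].credentials; // { url, iam_apikey }
}

module.exports = getCredentials;
```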
- Generate your own trusted localhost certificate and put the files in `server/src/keys/` to enable HTTPS on localhost.
```sh
openssl req -x509 -out localhost.crt -keyout localhost.key \
  -newkey rsa:2048 -nodes -sha256 \
  -subj '/CN=localhost' -extensions EXT -config <( \
    printf "[dn]\nCN=localhost\n[req]\ndistinguished_name = dn\n[EXT]\nsubjectAltName=DNS:localhost\nkeyUsage=digitalSignature\nextendedKeyUsage=serverAuth")
```
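To sanity-check the generated certificate, you can inspect its subject and Subject Alternative Name; both should mention `localhost`. This assumes OpenSSL 1.1.1 or newer for the `-ext` flag.

```shell
# Inspect localhost.crt if it is present in the current directory;
# the output should include the subject CN and DNS:localhost.
if [ -f localhost.crt ]; then
  openssl x509 -in localhost.crt -noout -subject -ext subjectAltName
fi
```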
```sh
# Start the client
cd client
npm install
npm start
```

```sh
# Start the server
cd server
npm install
npm start
```
- Redesign the radial chart
- Redesign the text cloud
- Redesign the flow chart
- Implement the control panel
I didn't have enough time or computing resources to train my own models for facial emotion prediction and speech emotion recognition, so I rely on existing services.
The emotion model and classifier, as well as the landmark tracker, come from auduno/clmtrackr. The speech-to-text and tone analysis services come from IBM Watson.
Feel free to implement anything from the roadmap, submit pull requests, create issues, discuss ideas or spread the word.
MIT © Yuan Chen