Interact with a virtual agent on the screen. The steps below describe how to create a roadmap and how to run the system.
First, clone this repository:

```
git clone https://github.com/shiroshanshan/generate-motion-from-roadmap.git
```

Then start the server:

```
python server/server.py
```

Pass the `-p` flag to plot sampled data and the `-w` flag to write sampled data to disk.
Now you can open http://127.0.0.1:5000 to interact with the agent.
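The flag handling might look like the following minimal sketch; the route, the response text, and the parser wiring are assumptions for illustration, not the actual contents of `server/server.py`.

```python
# Hypothetical sketch of how server/server.py might wire up the -p/-w flags.
# The actual implementation in this repository may differ.
import argparse

from flask import Flask

app = Flask(__name__)


@app.route("/")
def index():
    return "virtual agent front end"  # placeholder response


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="roadmap motion server")
    parser.add_argument("-p", action="store_true", help="plot sampled data")
    parser.add_argument("-w", action="store_true", help="write sampled data")
    args = parser.parse_args()
    # args.p / args.w would be consulted wherever data is sampled.
    app.run(host="127.0.0.1", port=5000)
```

With wiring like this, `python server/server.py -p -w` would enable both behaviors.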
Classification is done by a neural network and prediction by the [free-energy principle](docs/free energy.pdf); a toy sketch of the prediction step follows the list below.
- Gaze detection
- Emotion detection
- Voice recognition (in progress)
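As a toy illustration of the prediction step, the sketch below scores an observation by its surprise (negative log evidence), which under exact inference coincides with the variational free energy. The states, prior, and likelihood values are invented for the example and are not the repository's model.

```python
# Hedged illustration of free-energy-style prediction over discrete states.
# All names and probabilities here are invented for the example.
import numpy as np

states = ["idle", "greeting", "listening"]

# Prior belief over the agent's next state.
prior = np.array([0.5, 0.3, 0.2])

# Likelihood p(observation | state) for one observation,
# e.g. "user gazes at the agent".
likelihood = np.array([0.1, 0.6, 0.5])

# Posterior by Bayes' rule; surprise is the negative log evidence,
# which equals the variational free energy under exact inference.
evidence = likelihood @ prior
posterior = likelihood * prior / evidence
surprise = -np.log(evidence)

print("posterior:", dict(zip(states, np.round(posterior, 3))))
print("surprise (free energy):", round(float(surprise), 3))
```

Lower surprise means the observation is better explained by the current beliefs, so the agent would favor transitions consistent with low free energy.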
- Run `python scripts/clear_output.py` to clear all output generated at runtime.
- Run `python scripts/txt2csv.py` to write out an MMD CSV file from the joint angles (see the first sketch below).
- Run `python scripts/create_routes.py` every time the roadmap is modified.
- Run `python scripts/mmd2ibk.py` to convert the MMD CSV file to an ibuki CSV file.
- Run `python scripts/plot/animation.py` to write out a GIF animation from the joint angles, reduced to two dimensions by PCA (see the second sketch below).
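For orientation, here is a rough sketch of the kind of conversion `scripts/txt2csv.py` could perform, assuming each line of the input text file holds whitespace-separated joint angles for one frame; the file names, columns, and layout here are assumptions, not the script's actual format.

```python
# Hypothetical txt -> CSV conversion; the input/output layout is assumed,
# not taken from the repository.
import csv

with open("joint_angles.txt") as src, \
        open("mmd_motion.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for frame, line in enumerate(src):
        angles = [float(v) for v in line.split()]
        writer.writerow([frame, *angles])  # frame index plus one angle per joint
```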
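And a sketch of the PCA-to-GIF step, assuming the joint angles form a frames-by-joints array; it uses scikit-learn for PCA and matplotlib with the Pillow writer for the GIF, which may differ from what `scripts/plot/animation.py` actually does.

```python
# Hedged sketch: project joint-angle frames to 2-D with PCA and save a GIF.
# The data below is random; replace it with the recorded joint angles.
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.animation import FuncAnimation, PillowWriter
from sklearn.decomposition import PCA

angles = np.random.rand(120, 17)           # 120 frames of 17 joint angles
points = PCA(n_components=2).fit_transform(angles)

fig, ax = plt.subplots()
ax.set_xlim(points[:, 0].min(), points[:, 0].max())
ax.set_ylim(points[:, 1].min(), points[:, 1].max())
dot, = ax.plot([], [], "o")

def update(i):
    # Move the marker to the i-th frame's 2-D projection.
    dot.set_data([points[i, 0]], [points[i, 1]])
    return dot,

anim = FuncAnimation(fig, update, frames=len(points))
anim.save("animation.gif", writer=PillowWriter(fps=30))
```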
Thanks to ArashHosseini, 3d-pose-baseline-vmd and errno-mmd.