
Replicating TUG in gazebo #280

Merged
merged 18 commits into from
May 12, 2020

Conversation


@vvasco vvasco commented May 8, 2020

This PR replicates the TUG in gazebo.
We can handle:

  • the verbal interaction between the robot and the human model:
    • when the wifi button is pressed, the human model or the robot (depending on which one is moving) stops and a question can be asked;
    • a question is simulated through the question-simulator.sh script. We can choose among 4 keywords, namely speed, repetition, aid, feedback, each associated with two or three questions (as defined here). The script randomly selects a question associated with the chosen keyword and sends it to googleSpeechProcess, which analyzes the sentence;
    • when googleSpeechProcess provides its output to managerTUG, the human model or the robot starts moving again.
  • following the human model through navigation: when the TUG starts, the robot starts tracking the skeleton by moving its base;
  • synchronization between robot commands and human animations: the human model sits down when the robot asks it to sit, and it stands up, reaches the target, walks back, and sits down again when the robot gives the start;
  • TUG failures:
    • we can provide a target to reach that differs from the finish line: the robot then says that the test was not passed;
    • the human model can be paused during the TUG: the robot encourages the human model to finish the test, reminding them that they can press the button to ask questions. If the human model does not resume the test, the test is not passed.

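The keyword-based question selection described above can be sketched as a minimal script. Only the four keywords (speed, repetition, aid, feedback) come from this PR; the question texts below are hypothetical placeholders (the real ones are defined in the linked repo file), and the actual question-simulator.sh forwards the selected sentence to googleSpeechProcess instead of printing it:

```shell
#!/usr/bin/env bash
# Sketch of question-simulator.sh's selection logic (hypothetical).
# Each keyword maps to a few candidate questions; one is picked at
# random. The question strings here are invented examples, not the
# ones defined in the repository.

declare -A questions=(
  [speed]="should I walk faster?|can I go slower?"
  [repetition]="how many times do I repeat the test?|do I do it again?"
  [aid]="can I use my walking aid?|may I hold on to the chair?"
  [feedback]="how did I do?|was my time good?"
)

keyword="${1:-speed}"

# Split the '|'-separated candidates into an array and pick one at random.
IFS='|' read -r -a candidates <<< "${questions[$keyword]}"
echo "${candidates[RANDOM % ${#candidates[@]}]}"
```

In the real setup, the printed sentence would instead be written to a YARP port read by googleSpeechProcess, whose analysis is then consumed by managerTUG.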
The following videos show the described functionalities:

  • Successful TUG and questions: [video: normal]
  • Target not reached: [video: target-not-reached]
  • Human model stopping: [video: paused]

@vvasco vvasco added the 🚀 enhancement New feature or request label May 8, 2020
@vvasco vvasco added this to the TUG-19 milestone May 8, 2020
@vvasco vvasco requested a review from pattacini May 8, 2020 18:08
@vvasco vvasco self-assigned this May 8, 2020
Member

@pattacini pattacini left a comment

OMG, that's simply awesome! ❤️

To be definitely merged once CI completes (keep an eye on the Codacy and BCH failures in case they're related) 🚀

For crafting the corresponding documentation, I would consider remaking the videos with less background noise and more machines, in order to speed up the avatar's movements. This way, we would be able to bundle that material as a showcase for FDG. Also, R1's home position could be pushed closer to the chair; what do you think?

Again, superb ✨

@vvasco
Author

vvasco commented May 9, 2020

Having more machines would definitely help. For the documentation, I will remake the videos as you suggested.
Regarding R1's position, I placed it a bit far from the chair to make sure the skeleton stays in the FOV when the human moves closer to the robot (I still have to fix the issue with skeletonLocker, but once that's done I'll move R1 closer).

I'll have a look at the CI failures.

@vvasco vvasco force-pushed the feat/demo-tug-sim branch 8 times, most recently from feecfe3 to 17269ba on May 11, 2020 at 14:35