The following labs showcase the capabilities of the TJBot open-source robot. Each lab provides step-by-step instructions for using a combination of Watson services and hardware to bring the robot to life.
You can run these labs either with Node-RED or Node.js on a physical TJBot, or run the Node.js labs in the browser-based TJBot simulator. A free IBM Cloud account is required to use the IBM Watson capabilities.
For instructions on setting up the Raspberry Pi, upgrading Node-RED, and installing the Node-RED nodes needed for these labs, please refer to this Medium post.
These labs use the tjbot NPM library. To install the library, run the command:
npm install tjbot
Note: you may need to run the code as root so that TJBot can access the hardware.
sudo node app.js
These labs can also be run using the online TJBot Simulator. A web browser (Chrome or Firefox) with access to the camera, microphone, and speaker is all that is needed.
Access the simulator at ibm.biz/meet-tjbot.
Lab Resources: Node-RED | Node.js
Uses: Microphone, Speaker, Watson Speech to Text, Watson Conversation, Watson Text to Speech
Train TJBot to listen to phrases, understand natural language intents and entities, and speak responses aloud. Uses an example Conversation workspace that explains what TJBot is and describes some components of the project.
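Once Watson Conversation returns its analysis of a phrase, the lab code needs to pick out the recognized intent. The sketch below shows one way to do that; the response shape (an `intents` array of `{ intent, confidence }` objects) follows the Watson Conversation v1 API, but `topIntent` is a hypothetical helper written for illustration, not part of the tjbot library.

```javascript
// Hypothetical helper: pick the most confident intent from a
// Watson Conversation response. The v1 API returns an `intents`
// array of { intent, confidence } objects.
function topIntent(response) {
  if (!response.intents || response.intents.length === 0) {
    return null;
  }
  // Sort a copy by confidence, highest first, and return its name
  return response.intents
    .slice()
    .sort((a, b) => b.confidence - a.confidence)[0].intent;
}

// Example response, shaped like a Watson Conversation reply
const response = {
  intents: [
    { intent: "greeting", confidence: 0.35 },
    { intent: "about-tjbot", confidence: 0.92 }
  ]
};

console.log(topIntent(response)); // "about-tjbot"
```

On a physical TJBot, the lab wires this kind of logic between the microphone (Speech to Text), the Conversation service, and the speaker (Text to Speech).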
Lab Resources: Node-RED | Node.js
Uses: Microphone, LED, Watson Speech to Text, Watson Tone Analyzer
Train TJBot to listen to phrases and analyze the emotional tone using Watson Tone Analyzer. Depending on which emotion is most prevalent in the phrase, the LED will change to represent that emotion.
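The core of this lab is mapping the strongest detected emotion to an LED color. A minimal sketch of that step is below; the `tone_id` values (anger, joy, sadness, and so on) follow Watson Tone Analyzer's emotion category, but the particular color choices and the `colorForTones` helper are illustrative assumptions, not the lab's exact code.

```javascript
// Hypothetical emotion-to-color mapping for the LED. The tone_id
// keys follow Watson Tone Analyzer's emotion tones; the colors
// are illustrative choices.
const EMOTION_COLORS = {
  anger: "red",
  fear: "magenta",
  joy: "yellow",
  sadness: "blue",
  disgust: "green"
};

// Pick the strongest emotion from a Tone Analyzer result (an
// array of { tone_id, score } objects) and return the LED color
// for it, defaulting to white when nothing matches.
function colorForTones(tones) {
  if (!tones || tones.length === 0) return "white";
  const strongest = tones
    .slice()
    .sort((a, b) => b.score - a.score)[0];
  return EMOTION_COLORS[strongest.tone_id] || "white";
}

// Example: joy dominates, so the LED would shine yellow
const tones = [
  { tone_id: "joy", score: 0.81 },
  { tone_id: "sadness", score: 0.12 }
];
console.log(colorForTones(tones)); // "yellow"
```

On the robot, the resulting color would then be passed to the LED (for example via the tjbot library's shine capability).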
Lab Resources: Node-RED | Node.js
Uses: Camera, Speaker, Watson Visual Recognition, Watson Text to Speech
Train TJBot to take a photo with the Raspberry Pi, classify it with Watson Visual Recognition, and speak what objects and colors are seen with Watson Text to Speech and the speaker.
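Between classifying the photo and speaking the result, the lab has to turn Visual Recognition's class list into a sentence for Text to Speech. The sketch below assumes the Watson Visual Recognition response shape (a `classes` array of `{ class, score }` objects); `describeObjects` and its 0.5 confidence threshold are hypothetical choices for illustration.

```javascript
// Hypothetical helper: turn Watson Visual Recognition classes
// (an array of { class, score } objects) into a short sentence
// that TJBot could speak with Text to Speech.
function describeObjects(classes, minScore = 0.5) {
  // Keep only confident classifications and extract their names
  const names = (classes || [])
    .filter((c) => c.score >= minScore)
    .map((c) => c.class);
  if (names.length === 0) {
    return "I am not sure what I see.";
  }
  return "I see " + names.join(", ") + ".";
}

// Example classes, shaped like a Visual Recognition result
const classes = [
  { class: "banana", score: 0.94 },
  { class: "fruit", score: 0.88 },
  { class: "vehicle", score: 0.12 }
];
console.log(describeObjects(classes)); // "I see banana, fruit."
```

The resulting sentence would then be handed to Text to Speech and played through the speaker.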
This code is licensed under Apache 2.0. Full license text is available in LICENSE.