A Node-RED based application to make your TJBot see and recognize the world.
- Takes a picture.
- Sends the picture to the Watson Visual Recognition service.
- Analyzes/classifies the picture and sends back possible classes.
- Displays the result and also verbalizes it using the Watson Text to Speech service.
- Raspberry Pi 3
- Raspberry Pi camera module
- Speaker with 3.5mm audio jack
- IBM TJBot: You can 3D print or laser cut the robot
Follow the step-by-step instructions on Instructables to assemble and prepare your Raspberry Pi/TJBot to run the code.
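Before moving on, it's worth checking that the camera module works. A minimal sketch using raspistill, which ships with Raspbian (test.jpg is just an example filename; the camera must first be enabled via sudo raspi-config):

# Capture a still image to confirm the camera module is wired up correctly
raspistill -o test.jpg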
First, check whether Node-RED is already installed on your Pi. Since the November 2015 release of Raspbian Jessie, Node-RED comes preinstalled on the OS image. If not, open a terminal application on the Pi and execute the following commands to install the latest versions of Node-RED and npm (Node Package Manager):
sudo apt-get update
sudo apt-get dist-upgrade
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
sudo npm install -g node-red
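To verify that the installation succeeded, you can check the installed versions (the exact numbers will vary, but node should report a 6.x release after the steps above):

node -v
npm -v
which node-red   # should print the path of the global node-red executable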
You can troubleshoot here.
To upgrade your already installed version, see: Running on Raspberry Pi
Execute the following commands from a terminal to install the collection of Node-RED nodes for IBM Watson Services:
cd ~/.node-red
npm install node-red-node-watson
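As a quick sanity check, you can confirm that npm registered the package (run from the same ~/.node-red directory):

npm list node-red-node-watson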
You will then need to restart Node-RED. To start Node-RED, run the command node-red-start. To stop Node-RED, run the command node-red-stop.
If you installed Node-RED as a global npm package (as above), you can instead start it from a terminal with:
node-red
After Node-RED has started, you can open the flow editor at http://localhost:1880 in a browser on your Pi.
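If you are connected to the Pi over SSH and have no browser at hand, a quick way to confirm the editor is up is a HEAD request with curl (assuming the default Node-RED configuration listening on port 1880):

curl -I http://localhost:1880   # an HTTP 200 response means the editor is running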
Clone or download the repository to get the sample flow:
git clone git@github.com:samuelvogelmann/visualtj.git
cd visualtj
Copy the content of the flow.json file to the clipboard. Go to http://localhost:1880 and import the flow using the import function: click the menu button in the upper right corner, then select Import > Clipboard.
Paste the sample flow into the Paste nodes here field and click Import.
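If you are working in a terminal on the Pi's desktop, one way to get flow.json onto the clipboard is the xclip utility (an assumption here: xclip is not installed by default; sudo apt-get install xclip adds it):

xclip -selection clipboard < flow.json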
In this step, you get API access to the Watson services used in this recipe:
- Watson Visual Recognition Service
- Watson Text to Speech Service
(If you don't have a Bluemix account, follow the instructions to create a free trial account.)
First you have to create a Visual Recognition instance on Bluemix: https://console.ng.bluemix.net/catalog/services/visualrecognition.
You can leave the default values and select Create. Now go to Service Credentials in the left menu and copy your api_key to the clipboard.
Then you need to update the Visual Recognition node within your flow with your Watson Visual Recognition credentials.
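Before pasting the key into the node, you can sanity-check it directly against the Visual Recognition v3 REST API (a sketch only: the endpoint and version date below reflect the service at the time of writing; test.jpg is any local image and YOUR_API_KEY is the key from Service Credentials):

# Classify a local image; a JSON list of classes means the key works
curl -X POST -F "images_file=@test.jpg" "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classify?api_key=YOUR_API_KEY&version=2016-05-20"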
The last step is the Watson Text to Speech service. You need to do exactly the same thing you did for the Visual Recognition service: you may leave all the default values and select Create. Copy your credentials and add them to the Text to Speech node.
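As with Visual Recognition, the credentials can be verified from the command line before deploying. A sketch against the Text to Speech v1 API as it existed at the time of writing (USERNAME and PASSWORD come from the service's credentials; aplay plays the result through the 3.5mm speaker):

# Synthesize a short phrase to a WAV file and play it
curl -u "USERNAME:PASSWORD" -H "Accept: audio/wav" -o hello.wav "https://stream.watsonplatform.net/text-to-speech/api/v1/synthesize?text=Hello%20world"
aplay hello.wav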
Finally, click the red Deploy button in the upper right corner to deploy your flow. Now you can access the application at http://localhost:1880/visualtj and start taking pictures and analyzing them with the IBM Watson Visual Recognition service.
There are a few things you can do and ways to take your robot forward:
- Create a custom classifier and train the Visual Recognition service to improve its classification capabilities. This tutorial and the documentation will help you create your own custom classifier; the sketch below shows the underlying API call.
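As a rough illustration, training a custom classifier comes down to a single REST call with zipped example images (the class name dogs and the zip filenames are made up for this sketch; the endpoint and version date reflect the v3 API at the time of writing):

# Train a classifier named "dogs" from positive and negative example images
curl -X POST \
  -F "dogs_positive_examples=@dog-images.zip" \
  -F "negative_examples=@cat-images.zip" \
  -F "name=dogs" \
  "https://gateway-a.watsonplatform.net/visual-recognition/api/v3/classifiers?api_key=YOUR_API_KEY&version=2016-05-20"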
To contribute, just fork the repository and send in a pull request. If you find any issues, feel free to open a GitHub issue.
- Watson Developer Cloud: Watson Visual Recognition and Watson Text to Speech
- Node-RED: Flow-based programming for the Internet of Things
- Node-RED nodes for Watson services: A collection of Node-RED nodes for IBM Watson services
MIT License