Make your TJBot see and recognize the World


A Node-RED-based application to make your TJBot see and recognize the world.

How it works

  • Takes a picture.
  • Sends the picture to the Watson Visual Recognition service.
  • The Visual Recognition service analyzes the picture and sends back a set of possible classes.
  • Displays the result and verbalizes it using the Watson Text to Speech service.
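The steps above can be sketched end to end in plain JavaScript. All function names here (takePicture, classifyImage, speak) are illustrative stubs standing in for the camera, the Visual Recognition call, and the Text to Speech call; they are not actual TJBot or Watson SDK APIs:

```javascript
// Stub: capture a photo and return it as a Buffer.
function takePicture() {
  return Buffer.from("fake-image-bytes");
}

// Stub: send the image to a visual recognition service and
// receive back a list of candidate classes with confidence scores.
function classifyImage(imageBuffer) {
  return [
    { class: "cat", score: 0.92 },
    { class: "animal", score: 0.85 },
  ];
}

// Stub: verbalize a sentence via a text-to-speech service.
function speak(text) {
  console.log("TJBot says:", text);
  return text;
}

// Wire the steps together, mirroring the flow described above.
const photo = takePicture();
const classes = classifyImage(photo);
const best = classes[0];
const spoken = speak(
  `I think I see a ${best.class} (${Math.round(best.score * 100)}% confident).`
);
```

In the real application these steps are Node-RED nodes wired together in a flow rather than function calls, but the data handed from step to step (image bytes in, ranked classes out, a spoken sentence at the end) is the same.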

Hardware Requirements and Setup

Follow the step-by-step instructions on Instructables to assemble and prepare your Raspberry Pi/TJBot to run the code.

Build the Application

Install Node-RED

First, check whether Node-RED is already installed on your Pi. Since the November 2015 release of Raspbian Jessie, Node-RED comes preinstalled on the OS image. If not, open a terminal on the Pi and execute the following commands to install the latest versions of Node-RED and npm (Node Package Manager):

sudo apt-get update
sudo apt-get dist-upgrade
curl -sL <NodeSource setup script URL> | sudo -E bash -
sudo apt-get install -y nodejs
sudo npm install -g node-red

If the installation fails, consult the Node-RED troubleshooting documentation.

To upgrade an already-installed version, see: Running on Raspberry Pi

Install IBM Watson Services Nodes

Execute the following commands from a terminal to install the collection of Node-RED nodes for IBM Watson Services:

cd ~/.node-red
npm install node-red-node-watson

You will then need to restart Node-RED. To start Node-RED, run the command node-red-start. To stop Node-RED, run the command node-red-stop.

Start and access Node-RED

You have installed Node-RED as a global npm package, so you can start it from a terminal with:

node-red-start
After Node-RED has started, you can access the browser-based flow editor at http://localhost:1880 with a browser on your Pi.

Download and Import the Sample Flow

Clone or download the repository to get the sample flow:

git clone <repository URL>
cd visualtj

Copy the content of the flow.json file to the clipboard. Go to http://localhost:1880 and import the flow: open the menu in the upper-right corner, then choose Import > Clipboard:

Paste the sample flow into the Paste nodes here field and click Import.
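For orientation, a Node-RED flow file is simply a JSON array of node objects wired together by id. The minimal example below is illustrative only (an inject node feeding a debug node); it is not the contents of this repository's flow.json:

```json
[
  { "id": "n1", "type": "inject", "name": "trigger", "wires": [["n2"]] },
  { "id": "n2", "type": "debug", "name": "show result", "wires": [] }
]
```

The "wires" arrays define which node(s) each output feeds, which is what the editor draws as connecting lines after the import.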

Update your Bluemix Credentials

In this step, you get API access to the Watson services used in this recipe:

  • Watson Visual Recognition Service
  • Watson Text to Speech Service

(If you don't have a Bluemix account, follow the instructions to create a free trial account.)

First, create a Visual Recognition service instance on Bluemix:

You can leave the default values and select Create. Then go to Service Credentials in the left menu and copy your api_key to the clipboard.

Then you need to update the Visual Recognition node within your flow with your Watson Visual Recognition credentials:

The last step is the Watson Text to Speech service. Repeat what you did for the Visual Recognition service: leave all the default values, select Create, then copy your credentials and add them to the Text to Speech node.

Run the Application

Finally, click the red Deploy button in the upper-right corner to deploy your flow. You can now access the application at http://localhost:1880/visualtj, take a picture, and analyze it with the IBM Watson Visual Recognition service.

What's Next?

There are a few things you can do and ways to take your robot forward:

  • Create a custom classifier and train the Visual Recognition service to improve its classification capabilities. This tutorial and the documentation will help you create your own custom classifier.
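Conceptually, training a custom classifier means supplying labeled positive example images per class, plus negative examples the classifier should reject. The sketch below illustrates that shape of training data; the function and field names are hypothetical, not the Watson SDK:

```javascript
// Hypothetical training-data layout: positive examples grouped by
// class name, plus shared negative examples. File names are fake.
const trainingData = {
  name: "pets",
  positiveExamples: {
    cat: ["cat1.jpg", "cat2.jpg"],
    dog: ["dog1.jpg", "dog2.jpg"],
  },
  negativeExamples: ["tree.jpg", "car.jpg"],
};

// Stub trainer: a real service would upload the images and train a
// model; here we just return a descriptor listing the learned classes.
function trainClassifier(data) {
  return {
    classifierId: `${data.name}_123456`,
    classes: Object.keys(data.positiveExamples),
  };
}

const classifier = trainClassifier(trainingData);
```

Once trained, the classifier id would be referenced at classification time so the service scores images against your custom classes instead of (or alongside) the built-in ones.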

Contributing and Issues

To contribute, fork the repository and send in a pull request. If you find any issues, feel free to open a GitHub issue.

Dependencies List

  • node-red
  • node-red-node-watson
MIT License
