In this lab, you will use the see and speak nodes to train TJBot to recognize objects and say what it sees. You will need a Raspberry Pi camera and a speaker connected to the TJBot for this lab.
- In the Node-RED editor running on the Raspberry Pi, drag an inject node onto the canvas.
- Double-click the node and configure it, as shown below:
- Add a see node, and edit it.
  a. The see node has several modes: recognize text, recognize objects, and take a photo. Select See (identify objects) from the Mode drop-down list.
  b. The see node uses the Watson Visual Recognition service, which requires service credentials from IBM Cloud. Click on the pencil icon to the right of the Bot drop-down list.
- Click on the link icon next to the "Visual Recognition" heading to open the IBM Cloud console and create a Watson Visual Recognition service instance.
- Leave the service name as is, and click Create.
- Click Service Credentials in the menu on the left. If there are no credentials in the list, click New credential > Add to create a set of credentials. Click View Credentials to display the service credentials.
- Copy the API key into the Visual Recognition section of the Node-RED editor.
- Select the Camera checkbox to enable the camera.
- The see node produces a message with the names of the objects and colors in the analyzed photo; the response is passed in the msg.payload property. Add a function node to loop through the results and concatenate them into a new message.
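The function node's body might look like the following sketch, written here as a plain JavaScript function so it can be run outside Node-RED. The payload shape — an array of objects with class and score fields — is an assumption based on typical Watson Visual Recognition results; wire a debug node to the see node to confirm what yours actually returns.

```javascript
// Sketch of the function node logic: collect the recognized class
// names from the see node's result and build one sentence for the
// speak node. The sample input below is hypothetical.
function onMessage(msg) {
  // Pull out just the class names, ignoring the confidence scores.
  var names = msg.payload.map(function (c) { return c.class; });
  // Replace the payload with a single sentence for the speak node.
  msg.payload = "TJBot sees " + names.join(", ");
  return msg;
}

var out = onMessage({
  payload: [
    { class: "earphone", score: 0.92 },
    { class: "person", score: 0.87 },
    { class: "maroon color", score: 0.81 }
  ]
});
console.log(out.payload); // → "TJBot sees earphone, person, maroon color"
```

In the Node-RED editor, paste only the body of onMessage into the function node and keep the final return msg; statement.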
- Add a speak node, and edit it. The speak node uses the Watson Text to Speech service, which requires service credentials from IBM Cloud. Click on the pencil icon to the right of the Bot drop-down list.
- Click on the link icon next to the "Text to Speech" heading to open the IBM Cloud console and create a Watson Text to Speech service instance.
- Leave the service name as is, and click Create.
- Click Service Credentials in the menu on the left. If there are no credentials in the list, click New credential > Add to create a set of credentials. Click View Credentials to display the service credentials.
- Copy the username and password into the Text to Speech section of the Node-RED editor.
- Determine the Speaker Device ID by running the command aplay -l on the Raspberry Pi. In the example output shown below, the attached USB speaker is accessible on card 2, device 0.
- In the TJBot configuration, enter the applicable speaker device ID in the format plughw:<card>,<device>. For a speaker on card 2, device 0, this is plughw:2,0.
- At the top of the configuration window:
  a. Select English (US dialect) from the Speak drop-down list.
  b. Select the Speaker checkbox to enable the speaker.
- Connect the nodes together, as shown below:
- Click the Deploy button in the upper right corner of the Node-RED editor to save and deploy the changes.
- Click the button on the left side of the inject node to take a picture with TJBot's camera.
When the photo has been analyzed by the Watson Visual Recognition service, a message is constructed from the recognized objects and colors and spoken through the speaker.
An example is:
TJBot sees earphone, person, face, people, maroon color