Node.js library that encapsulates TJBot's basic capabilities: seeing, listening, speaking, shining, etc.
This library can be used to create your own recipes for TJBot.
Some of TJBot's capabilities require specific IBM Watson services. For example, "seeing" is powered by the Watson Visual Recognition service. Similarly, speaking and listening are powered by the Watson Text to Speech and Watson Speech to Text services.
To use these services, you will need to specify credentials for each of the Watson services you are interested in using.
Install the library as follows.
$ npm install --save tjbot
Note: The TJBot library was developed for use on Raspberry Pi. It may be possible to develop and test portions of this library on other Linux-based systems (e.g. Ubuntu), but this usage is not officially supported.
Instantiate the TJBot object.
const TJBot = require('tjbot');
var hardware = ['led', 'servo', 'microphone', 'speaker'];
var configuration = {
robot: {
gender: 'female'
},
listen: {
language: 'ja-JP'
},
speak: {
language: 'en-US'
}
};
var credentials = {
speech_to_text: {
username: 'xxx',
password: 'xxx'
},
text_to_speech: {
username: 'xxx',
password: 'xxx'
}
};
var tj = new TJBot(hardware, configuration, credentials);
This will configure your TJBot as a female robot having an LED, servo, microphone, and speaker, and with the Watson speech_to_text and text_to_speech services. In addition, this robot is configured to listen in Japanese and speak in English (using a female voice).
The default configuration of TJBot uses English as the main language with a male voice.
TJBot has a number of capabilities that you can use to bring him to life. Capabilities are combinations of hardware and Watson services that enable TJBot's functionality. For example, "listening" is a combination of having a microphone and the speech_to_text service. Internally, the _assertCapability() method checks that your TJBot is configured with the right hardware and services before it performs an action that depends on a capability. Thus, the method used to make TJBot listen, tj.listen(), first checks that your TJBot has been configured with a microphone and the speech_to_text service.
TJBot's capabilities are:
- Analyzing Tone, which requires the Watson Tone Analyzer service
- Conversing, which requires the Watson Conversation service
- Listening, which requires a microphone and the Watson Speech to Text service
- Seeing, which requires a camera and the Watson Visual Recognition service
- Shining, which requires an LED
- Speaking, which requires a speaker and the Watson Text to Speech service
- Translating, which requires the Watson Language Translator service
- Waving, which requires a servo motor
The full lists of capabilities, hardware components, and Watson services can be accessed programmatically via TJBot.prototype.capabilities, TJBot.prototype.hardware, and TJBot.prototype.services, respectively.
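For example, you can print these lists to see what your version of the library supports (a short sketch; the exact contents depend on the library version installed):

```javascript
const TJBot = require('tjbot');

// Enumerate everything the library knows about
console.log(TJBot.prototype.capabilities); // e.g. capability names such as 'listen', 'see', 'speak'
console.log(TJBot.prototype.hardware);     // e.g. 'camera', 'led', 'microphone', 'servo', 'speaker'
console.log(TJBot.prototype.services);     // e.g. 'speech_to_text', 'text_to_speech', ...
```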
The TJBot constructor takes three arguments: the list of hardware present in the robot, the configuration of the robot, and the set of Watson credentials.
function TJBot(hardware, configuration, credentials)
Valid options for hardware are defined in TJBot.prototype.hardware: camera, led, microphone, servo, and speaker.
The credentials object expects credentials to be defined for each Watson service needed by your application. Valid Watson services are defined in TJBot.prototype.services: conversation, language_translator, speech_to_text, text_to_speech, tone_analyzer, and visual_recognition.
Please see TJBot.prototype._createServiceAPI() to understand what kind of credentials are required for each specific service. Most services expect a username and password, although some (e.g. visual_recognition) expect an API key.
Example credentials object:
var credentials = {
conversation: {
username: 'xxx',
password: 'yyy'
},
language_translator: {
username: 'xxx',
password: 'yyy'
},
speech_to_text: {
username: 'xxx',
password: 'yyy'
},
text_to_speech: {
username: 'xxx',
password: 'yyy'
},
tone_analyzer: {
username: 'xxx',
password: 'yyy'
},
visual_recognition: {
key: 'xxx'
}
};
TJBot has a number of configuration options for its hardware and behaviors. Defaults are given in TJBot.prototype.defaultConfiguration, and these are overridden by any options specified in the TJBot constructor.
The most common configuration options are:
- robot.name: The name of your TJBot! You can use this in your recipes to know when someone is speaking to your TJBot. The default name is 'TJ'.
- robot.gender: Specifies which voice is used in text_to_speech. Can be either "male" or "female".
- listen.language: Specifies the language in which speech_to_text listens. See TJBot.prototype.languages.listen for all available options.
- speak.language: Specifies the language in which text_to_speech speaks. See TJBot.prototype.languages.speak for all available options.
- verboseLogging: Setting this to true causes debug messages to be printed to the console.
Additional configuration options allow you to specify the PIN to which the servo is connected (wave.servoPin), the resolution of images captured from the camera (see.camera.*), thresholds on the confidence of object recognition for visual_recognition (see.confidenceThreshold.*), and the device ID used to access the microphone (listen.microphoneDeviceId).
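As a sketch, such an override object might look like the following. The nested key paths follow the option names above; the specific values (pin number, resolution, threshold, device ID) are illustrative assumptions, not documented defaults:

```javascript
// Hypothetical configuration overrides; pass this as the second argument
// to the TJBot constructor. All values shown are illustrative assumptions.
var configuration = {
    wave: {
        servoPin: 7                       // GPIO pin the servo is connected to
    },
    see: {
        camera: {
            width: 960,                   // captured image width, in pixels
            height: 720                   // captured image height, in pixels
        },
        confidenceThreshold: {
            object: 0.5                   // minimum confidence for a recognized object
        }
    },
    listen: {
        microphoneDeviceId: 'plughw:1,0'  // ALSA device ID for the microphone
    }
};
```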
A description of the public TJBot API is given below. There are a number of internal library methods that are prefixed with an underscore (_); these methods are not intended for use outside the scope of the library.
If you do need low-level access to the Watson APIs beyond the level provided by TJBot, you can access them as follows:
var tj = new TJBot(hardware, configuration, credentials);
tj._conversation; // the ConversationV1 service object
tj._languageTranslator; // the LanguageTranslatorV2 service object
tj._stt; // the SpeechToTextV1 service object
tj._tts; // the TextToSpeechV1 service object
tj._toneAnalyzer; // the ToneAnalyzerV3 service object
tj._visualRecognition; // the VisualRecognitionV3 service object
Please see the documentation for the Watson Node SDK for more details on these objects.
Sleeps for the given number of milliseconds.
msec is the number of milliseconds to sleep for.
Sleeping blocks the Node.js event loop.
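For example, assuming the method is exposed as tj.sleep() (following the naming pattern of the rest of the public API), sleeping can be used to hold a hardware state for a fixed time:

```javascript
// Shine red, hold for one second, then change color.
// Assumes tj is a TJBot instance configured with an LED.
tj.shine('red');
tj.sleep(1000); // block for one second (also blocks the event loop)
tj.shine('blue');
```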
Analyzes the given text for the presence of emotions.
text is the text to be analyzed
Sample usage:
tj.analyzeTone("hello world").then(function(response) {
...
});
Sample response:
response = {
"sentences_tone": [
{
"sentence_id": 0,
"text": "hello world",
"tone_categories": [
{
"tones": [
{
"score": 0.058017,
"tone_id": "anger",
"tone_name": "Anger"
},
{
"score": 0.09147,
"tone_id": "disgust",
"tone_name": "Disgust"
},
{
"score": 0.045435,
"tone_id": "fear",
"tone_name": "Fear"
},
{
"score": 0.45124,
"tone_id": "joy",
"tone_name": "Joy"
},
{
"score": 0.203841,
"tone_id": "sadness",
"tone_name": "Sadness"
}
],
"category_id": "emotion_tone",
"category_name": "Emotion Tone"
},
{
"tones": [
{
"score": 0,
"tone_id": "analytical",
"tone_name": "Analytical"
},
{
"score": 0,
"tone_id": "confident",
"tone_name": "Confident"
},
{
"score": 0,
"tone_id": "tentative",
"tone_name": "Tentative"
}
],
"category_id": "language_tone",
"category_name": "Language Tone"
},
{
"tones": [
{
"score": 0.260072,
"tone_id": "openness_big5",
"tone_name": "Openness"
},
{
"score": 0.274462,
"tone_id": "conscientiousness_big5",
"tone_name": "Conscientiousness"
},
{
"score": 0.540392,
"tone_id": "extraversion_big5",
"tone_name": "Extraversion"
},
{
"score": 0.599104,
"tone_id": "agreeableness_big5",
"tone_name": "Agreeableness"
},
{
"score": 0.278807,
"tone_id": "emotional_range_big5",
"tone_name": "Emotional Range"
}
],
"category_id": "social_tone",
"category_name": "Social Tone"
}
]
}
]
}
Takes a conversational turn in the Conversation service.
- workspaceId specifies the workspace ID of the conversation in the Watson Conversation service
- message is the text of the conversational turn
- callback is called with the conversational response
Sample usage:
tj.converse(workspaceId, "hello world", function(response) {
...
});
Sample response:
response = {
"object": {conversation response object},
"description": "hello, how are you"
}
Opens the microphone and streams data to the speech_to_text service.
callback is called with speech utterances as they are produced
Sample usage:
tj.listen(function(text) {
...
});
Sample response:
text = "hello tjbot my name is bobby"
Pauses listening.
Resumes listening.
Stops listening.
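Assuming these methods are exposed as tj.pauseListening(), tj.resumeListening(), and tj.stopListening() (names inferred from the descriptions above), a common pattern is to pause listening while TJBot speaks so the microphone does not pick up its own voice:

```javascript
// Echo back whatever is heard, pausing the microphone during playback.
// Assumes tj is configured with a microphone, speaker, and the
// speech_to_text and text_to_speech services.
tj.listen(function(text) {
    if (text.indexOf("goodbye") >= 0) {
        tj.stopListening();          // close the microphone entirely
    } else {
        tj.pauseListening();         // avoid transcribing our own speech
        tj.speak(text).then(function() {
            tj.resumeListening();    // start transcribing again
        });
    }
});
```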
Returns a list of objects seen and their confidences.
Sample usage:
tj.see().then(function(objects) {
...
});
Sample response:
objects =
[
{
"class": "apple",
"score": 0.645656
},
{
"class": "fruit",
"score": 0.598688
},
{
"class": "food",
"score": 0.598688
},
{
"class": "orange",
"score": 0.5
},
{
"class": "vegetable",
"score": 0.28905
},
{
"class": "tree",
"score": 0.28905
}
]
Returns a list of text strings read by TJBot.
Sample usage:
tj.read().then(function(texts) {
...
});
Sample response:
TBD
Shines the LED the specified color.
color may be specified as a name, e.g. 'red' or 'blue', or as a hex string, e.g. '#FF0000' or '#0000FF'.
A full list of colors that TJBot understands can be accessed via tj.shineColors().
Sample usage:
tj.shine('orange');
tj.shine('pink');
tj.shine('#0A2C9F');
Pulses the LED the given color (e.g. fades in and out to the given color).
- color specifies the color of the pulse
- duration specifies how long the pulse should last
- delay specifies how long to wait in between pulses
This method returns instantly, but TJBot will continue to pulse the LED until tj.stopPulsing() is called.
Returns true if TJBot is currently pulsing the LED and false otherwise.
Stops pulsing the LED.
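A short sketch of the pulsing lifecycle (the units of duration and delay are assumed to be seconds; check TJBot.prototype.defaultConfiguration for the actual expectations):

```javascript
// Pulse the LED blue; tj.pulse() returns immediately while the
// pulsing continues in the background.
tj.pulse('blue', 1.0, 0.5);

tj.sleep(5000); // let it pulse for five seconds

if (tj.isPulsing()) {
    tj.stopPulsing(); // stop the background pulsing
}
```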
Returns an array of all of the colors that TJBot understands.
Selects a random color from the array returned by tj.shineColors().
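For example, assuming the method is exposed as tj.randomColor(), these two can be combined to shine an arbitrary color:

```javascript
// List every color name TJBot understands, then shine a random one.
var colors = tj.shineColors(); // array of color names, e.g. ['red', 'green', ...]
console.log(colors.length + " colors available");
tj.shine(tj.randomColor());
```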
Speaks the given message using text_to_speech.
message is the message to speak
Sample usage:
tj.speak("hello world").then(function() {
return tj.speak("my name is tjbot");
}).then(function () {
return tj.speak("it's very nice to meet you!");
});
In this example, TJBot will first speak "hello world". After audio playback has finished, it will then speak "my name is tjbot". After audio playback has finished, it will then speak "it's very nice to meet you!". The Promise pattern is used here to ensure that statements can be spoken consecutively without interference.
Plays the given sound file.
soundFile is the path to the sound file to play
Sample usage:
tj.play('/usr/share/doc/Greenfoot/scenarios/lunarlander/sounds/Explosion.wav');
Causes TJBot to move its arm backward (like a wind-up for a pitch).
Note: if this method doesn't produce the expected result, the servo motor stop points may need to be overridden. Override the value of TJBot.prototype._SERVO_ARM_BACK to find a stop point that satisfies the "back" position. Note that valid servo values are in the range [500, 2300].
Causes TJBot to raise its arm to the upward position.
Note: if this method doesn't produce the expected result, the servo motor stop points may need to be overridden. Override the value of TJBot.prototype._SERVO_ARM_UP to find a stop point that satisfies the "up" position. Note that valid servo values are in the range [500, 2300].
Causes TJBot to lower its arm to the downward position.
Note: if this method doesn't produce the expected result, the servo motor stop points may need to be overridden. Override the value of TJBot.prototype._SERVO_ARM_DOWN to find a stop point that satisfies the "down" position. Note that valid servo values are in the range [500, 2300].
Causes TJBot to wave the arm once (up-down-up).
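Assuming the arm methods are exposed as tj.armBack(), tj.raiseArm(), tj.lowerArm(), and tj.wave() (names inferred from the descriptions above), a simple movement sequence looks like:

```javascript
// Run through each arm position, pausing briefly between movements.
// Assumes tj is a TJBot instance configured with a servo.
tj.armBack();   // wind up, like a pitch
tj.sleep(500);
tj.raiseArm();  // arm to the upward position
tj.sleep(500);
tj.lowerArm();  // arm to the downward position
tj.sleep(500);
tj.wave();      // one up-down-up wave
```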
We encourage you to make enhancements to this library and contribute them back to us via a pull request.
This project uses the Apache License Version 2.0 software license.