[chaplus_ros] support mebo #374

Merged 4 commits on Aug 19, 2022
Changes from all commits
80 changes: 71 additions & 9 deletions chaplus_ros/README.md
@@ -1,18 +1,67 @@
chaplus_ros
===========

ROS package for https://www.chaplus.jp/
ROS package for mebo: https://mebo.work/

## Tutorials

1) To obtain an API key for the chaplus service, go to https://www.chaplus.jp/api

You can also create an account via https://chaplus.work and request the [beta program](https://forms.gle/DQWXdXzUH4MnE5wv6)

2) Put the API key in a json file at `` `rospack find chaplus_ros`/apikey.json ``
```
{"apikey": "0123456789"}
```
1. (Optional) If you want to create a new agent, please refer to https://qiita.com/maKunugi/items/14f1b82a2c0b6fa5c202

2. For JSK users, please download [apikey.json](https://drive.google.com/file/d/1tAT_WQqCMqvtbM0-CSTomWjwMP4jcOi9/view?usp=sharing)

3. Please start the sample launch with the following command.

```
$ roslaunch chaplus_ros google_example.launch chaplus_apikey_file:=${HOME}/Downloads/apikey.json
```
This sample subscribes to `/speech_to_text [speech_recognition_msgs/SpeechRecognitionCandidates]` and publishes `/robotsound_jp [sound_play/SoundRequest]` (see the Python sketch after the conversation example below).

4. You can try several conversations using the `rostopic pub` command. Here is an example of sending "おはよう" ("Good morning").

```
$ rostopic pub -1 /speech_to_text speech_recognition_msgs/SpeechRecognitionCandidates "transcript:
- 'おはよう'
confidence:
- 0"
```
The reply from the bot can be checked using the `rostopic echo` command.
```
$ rostopic echo --filter "print(m.arg)" /robotsound_jp
or
$ rostopic echo /robotsound_jp | ascii2uni -a U -q
```

Here is an example conversation.
```
# terminal 1
$ rostopic pub -1 /speech_to_text speech_recognition_msgs/SpeechRecognitionCandidates "transcript:
- 'おはよう'
confidence:
- 0"

$ rostopic pub -1 /speech_to_text speech_recognition_msgs/SpeechRecognitionCandidates "transcript:
- 'お話しよう'
confidence:
- 0"

$ rostopic pub -1 /speech_to_text speech_recognition_msgs/SpeechRecognitionCandidates "transcript:
- '何色が好き?'
confidence:
- 0"

$ rostopic pub -1 /speech_to_text speech_recognition_msgs/SpeechRecognitionCandidates "transcript:
- '良いですね。私は白が好きです。'
confidence:
- 0"
```
```
# terminal 2
$ rostopic echo --filter "print(m.arg)" /robotsound_jp
おはようございます!
あー、久しぶりにおしゃべりしたいですね。楽しみにしています。
私の好きな色は緑です。気持ちが落ち着きますよね。
私も好きです。気持ちが落ち着く気がしますから。
```
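
The same exchange can also be driven from a small Python node instead of `rostopic pub`. The sketch below is illustrative only: the topic names and message types are taken from this README, while the node name and structure are placeholders.
```
#!/usr/bin/env python
# Minimal client sketch: send one utterance and print the bot's replies.
import rospy
from speech_recognition_msgs.msg import SpeechRecognitionCandidates
from sound_play.msg import SoundRequest


def reply_cb(msg):
    # The bot's reply text is carried in the SoundRequest 'arg' field.
    rospy.loginfo("bot: %s", msg.arg)


if __name__ == "__main__":
    rospy.init_node("chat_client_example")
    rospy.Subscriber("/robotsound_jp", SoundRequest, reply_cb)
    pub = rospy.Publisher("/speech_to_text", SpeechRecognitionCandidates, queue_size=1)
    rospy.sleep(1.0)  # give the connections time to establish
    pub.publish(SpeechRecognitionCandidates(transcript=["おはよう"], confidence=[0]))
    rospy.spin()
```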

## Interface

@@ -78,3 +127,16 @@ Please access https://a3rt.recruit.co.jp/product/talkAPI/registered/ and enter your
```
roslaunch chaplus_ros google_example.launch chatbot_engine:=A3RT
```

### Chaplus

Reference: https://www.chaplus.jp/. Note that support ends on August 31, 2022.

1) To obtain an API key for the chaplus service, go to https://www.chaplus.jp/api

You can also create an account via https://chaplus.work and request the [beta program](https://forms.gle/DQWXdXzUH4MnE5wv6)

2) Put the API key in a json file at `` `rospack find chaplus_ros`/apikey.json ``
```
{"apikey": "0123456789"}
```
5 changes: 4 additions & 1 deletion chaplus_ros/apikey.json
@@ -1,2 +1,5 @@
{"apikey": "0123456789",
"apikey_a3rt": "abcdefgh"}
"apikey_a3rt": "abcdefgh",
"apikey_mebo": "apikey",
"agentid_mebo": "agentid",
"uid_mebo": "user"}
80 changes: 46 additions & 34 deletions chaplus_ros/sample/google_example.launch
@@ -1,39 +1,50 @@
<launch>

<!-- you can choose chatbot_engine "Chaplus" or "A3RT" currently-->
<arg name="chatbot_engine" default="Chaplus" />

<arg name="use_sample" default="True" />

<!-- start talker
subscribe /robotsound_jp sound_play/SoundRequest -->
<include file="$(find aques_talk)/launch/aques_talk.launch" />

<!-- start listener
publishes /Tablet/voice speech_recognition_msgs/SpeechRecognitionCandidates -->
<include file="$(find ros_speech_recognition)/launch/speech_recognition.launch" >
<arg name="launch_sound_play" value="false" />
<arg name="continuous" value="true" />
<arg name="engine" default="GoogleCloud" />
<arg name="language" value="ja" />
<arg name="n_channel" value="2" />
<arg name="depth" value="16" />
<arg name="sample_rate" value="44100" />
<arg name="device" value="hw:0,0" />
</include>
<param name="/speech_recognition/google_cloud_credentials_json"
value="/home/a-fujii/Downloads/eternal-byte-236613-4bc6962824d1.json" />
<!--
<param name="/speech_recognition/diarizationConfig"
type="yaml"
value="{'enableSpeakerDiarization': True, 'maxSpeakerCount': 3}" />
-->

<!-- node to convert /Tablet/voice (speech_recognition_msgs/SpeechRecognitionCandidates) to /request (std_msgs/String)
c.f.: https://github.com/ros/ros_comm/pull/639#issuecomment-618750038 -->
<node pkg="topic_tools" type="relay_field" name="sound_request_to_request"
args="--wait-for-start /Tablet/voice /request std_msgs/String
'data: m.transcript[0]'" />
<!-- you can choose chatbot_engine "Chaplus" or "A3RT" or "Mebo" currently-->
<arg name="chatbot_engine" default="Mebo" />

<arg name="use_sample" default="false" />
<arg name="use_respeaker" default="true" doc="set false if you do not use respeaker"/>
<arg name="chaplus_apikey_file" default="$(find chaplus_ros)/apikey.json" />

<group if="$(arg use_respeaker)">
<!-- node to convert /speech_to_text (speech_recognition_msgs/SpeechRecognitionCandidates) to /request (std_msgs/String) -->
<node pkg="topic_tools" type="relay_field" name="sound_request_to_request"
args="--wait-for-start /speech_to_text /request std_msgs/String
'data: m.transcript[0]'" />
</group>

<group unless="$(arg use_respeaker)">
<!-- start talker
subscribe /robotsound_jp sound_play/SoundRequest -->
<include file="$(find aques_talk)/launch/aques_talk.launch" />

<!-- start listener
publishes /Tablet/voice speech_recognition_msgs/SpeechRecognitionCandidates -->
<include file="$(find ros_speech_recognition)/launch/speech_recognition.launch" >
<arg name="launch_sound_play" value="false" />
<arg name="continuous" value="true" />
<arg name="engine" default="GoogleCloud" />
<arg name="language" value="ja" />
<arg name="n_channel" value="2" />
<arg name="depth" value="16" />
<arg name="sample_rate" value="44100" />
<arg name="device" value="hw:0,0" />
</include>
<param name="/speech_recognition/google_cloud_credentials_json"
value="/home/a-fujii/Downloads/eternal-byte-236613-4bc6962824d1.json" />
<!--
<param name="/speech_recognition/diarizationConfig"
type="yaml"
value="{'enableSpeakerDiarization': True, 'maxSpeakerCount': 3}" />
-->

<!-- node to convert /Tablet/voice (speech_recognition_msgs/SpeechRecognitionCandidates) to /request (std_msgs/String)
c.f.: https://github.com/ros/ros_comm/pull/639#issuecomment-618750038 -->
<node pkg="topic_tools" type="relay_field" name="sound_request_to_request"
args="--wait-for-start /Tablet/voice /request std_msgs/String
'data: m.transcript[0]'" />
</group>

<!-- node to convert /response (std_msgs/String) to /robotsound_jp (sound_play/SoundRequest) -->
<node pkg="topic_tools" type="relay_field" name="string_to_sound_request"
@@ -47,6 +58,7 @@
<rosparam subst_value="true">
chatbot_engine: $(arg chatbot_engine)
use_sample: $(arg use_sample)
chaplus_apikey_file: $(arg chaplus_apikey_file)
</rosparam>
</node>

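
For readers unfamiliar with `topic_tools relay_field`, each relay node above is roughly equivalent to the small Python relay below (a sketch; topic and message names are taken from the launch file, the actual conversion is done by topic_tools):
```
#!/usr/bin/env python
# Rough Python equivalent of the relay_field node above: republish the first
# transcript of /Tablet/voice (or /speech_to_text) as a std_msgs/String on /request.
import rospy
from std_msgs.msg import String
from speech_recognition_msgs.msg import SpeechRecognitionCandidates

if __name__ == "__main__":
    rospy.init_node("sound_request_to_request")
    pub = rospy.Publisher("/request", String, queue_size=1)
    rospy.Subscriber("/Tablet/voice", SpeechRecognitionCandidates,
                     lambda m: pub.publish(String(data=m.transcript[0])))
    rospy.spin()
```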
33 changes: 31 additions & 2 deletions chaplus_ros/scripts/chaplus_ros.py
@@ -52,7 +52,7 @@ class ChaplusROS(object):

def __init__(self):

self.chatbot_engine = rospy.get_param("~chatbot_engine", "Chaplus")
self.chatbot_engine = rospy.get_param("~chatbot_engine", "Mebo")
self.use_sample = rospy.get_param("~use_sample", True)
# please write your apikey to chaplus_ros/apikey.json
r = rospkg.RosPack()
@@ -87,6 +87,13 @@ def __init__(self):
self.apikey = apikey_json['apikey_a3rt']
self.endpoint = "https://api.a3rt.recruit.co.jp/talk/v1/smalltalk"

elif self.chatbot_engine=="Mebo":
self.headers = {'content-type': 'application/json'}
self.url = "https://api-mebo.dev/api"
self.apikey = apikey_json['apikey_mebo']
self.agentid = apikey_json['agentid_mebo']
self.uid = apikey_json['uid_mebo']

else:
rospy.logerr("please use chatbot_engine Chaplus or A3RT")
sys.exit(1)
@@ -141,8 +148,30 @@ def topic_cb(self, msg):
best_response = "ごめんなさい、よくわからないです"
rospy.loginfo("a3rt: returns best response {}".format(best_response))

#use Mebo
elif self.chatbot_engine == "Mebo":
try:
rospy.loginfo("received {}".format(msg.data))
self.data = json.dumps(
{'api_key': self.apikey,
'agent_id': self.agentid,
'utterance': msg.data,
'uid': self.uid
})
response = requests.post(self.url, headers=self.headers, data=self.data, timeout=(3.0, 7.5))
response_json = response.json()
if 'bestResponse' not in response_json:
best_response = "ごめんなさい、よくわからないです"
else:
best_response = response_json['bestResponse']['utterance']
except Exception as e:
rospy.logerr("Failed to reqeust url={}, headers={}, data={}".format(
self.url, self.headers, self.data))
rospy.logerr(e)
best_response = "ごめんなさい、よくわからないです"
rospy.loginfo("mebo: returns best response {}".format(best_response))
else:
rospy.logerr("please use chatbot_engine Chaplus or A3RT")
rospy.logerr("please use chatbot_engine Chaplus or A3RT or Mebo")

if response_json is not None:
# convert to string for print out
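
For quick testing outside ROS, the Mebo branch above boils down to the standalone sketch below (endpoint, payload fields, and the fallback reply are taken from the diff; the credential values are placeholders to be replaced with those in apikey.json):
```
import json
import requests

# Placeholder credentials; use the values from apikey.json.
payload = {
    "api_key": "apikey",
    "agent_id": "agentid",
    "uid": "user",
    "utterance": "おはよう",
}
response = requests.post("https://api-mebo.dev/api",
                         headers={"content-type": "application/json"},
                         data=json.dumps(payload), timeout=(3.0, 7.5))
response_json = response.json()
# Fall back to an apology when the response carries no bestResponse, as the node does.
if "bestResponse" not in response_json:
    best_response = "ごめんなさい、よくわからないです"
else:
    best_response = response_json["bestResponse"]["utterance"]
print(best_response)
```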