
WakeupListener and TtsListener clarification #24

Closed
socialRoboto opened this issue Nov 16, 2019 · 4 comments

Comments

@socialRoboto

I was wondering if there could be more clarification on the WakeupListener and TtsListener objects. I'm trying to get temi to perform an action after hearing a phrase (e.g., I say "Tell me more" and temi responds "I would love to say more!"). I assume the wakeup word is similar to saying "Hey temi", and the TtsListener is what gets activated after saying "Hey temi" in the out-of-the-box software, but I'm confused about how to actually set these words and how to format the listeners properly so that temi can perform actions accordingly (I've mainly been following the Java documentation on oracle.com).


@ramisaban1
Contributor

Hello,

Thank you for reaching out. Yes, we have a solution for programming temi to react to specific speech: Local NLP, which is exactly what you are looking for.

Here is a short tutorial; please follow these steps:

First, add the Local NLP intents used in the code to temi's settings. Turn Local NLP on under Settings -> temi Developer Tools -> Local NLP, then add a phrase and an intent name for each command (picture attached below):

(Screenshot: Local NLP settings page listing phrases and their intent names)

Next, declare your intent names inside the application section of AndroidManifest.xml. Take a look below:

<meta-data
    android:name="com.robotemi.sdk.metadata.ACTIONS"
    android:value="
        home.welcome,
        home.dance,
        home.sleep
        " />

MainActivity.java:

@Override
public void onNlpCompleted(NlpResult nlpResult) {
    // Do something with the NLP result, based on the actions declared in AndroidManifest.xml
    Toast.makeText(MainActivity.this, nlpResult.action, Toast.LENGTH_SHORT).show();

    switch (nlpResult.action) {
        case "home.welcome":
            robot.speak(TtsRequest.create("Welcome Home", true));  // NLP Q&A example
            robot.tiltAngle(23, 5.3F);
            break;

        case "home.dance":
            // skidJoy only drives the robot for a short moment per call,
            // so keep calling it for about 5 seconds
            long t = System.currentTimeMillis();
            long end = t + 5000;
            while (System.currentTimeMillis() < end) {
                robot.skidJoy(0F, 1F);
            }
            break;

        case "home.sleep":
            robot.goTo("home base");
            break;
    }
}
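
For completeness: onNlpCompleted is only delivered if the activity implements Robot.NlpListener and registers itself with the robot instance. Here is a minimal sketch along the lines of the public temi SDK sample app; the field name and lifecycle placement are assumptions, not requirements:

import android.os.Bundle;
import androidx.appcompat.app.AppCompatActivity;

import com.robotemi.sdk.NlpResult;
import com.robotemi.sdk.Robot;

public class MainActivity extends AppCompatActivity implements Robot.NlpListener {

    private Robot robot;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_main);
        robot = Robot.getInstance();  // singleton handle to the robot
    }

    @Override
    protected void onStart() {
        super.onStart();
        robot.addNlpListener(this);  // start receiving onNlpCompleted callbacks
    }

    @Override
    protected void onStop() {
        super.onStop();
        robot.removeNlpListener(this);  // stop receiving callbacks when not visible
    }

    // ... onNlpCompleted(NlpResult nlpResult) from the snippet above goes here ...
}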

If you still have questions about how to use it, please don't hesitate to reach back out.

@bruno963852

Will it someday be possible to do this using the Alexa assistant?

@ramisaban1
Contributor

Yes, we have received full certification from Alexa and plan to release this functionality fairly soon, but we don't have a timeline for the release just yet.

@socialRoboto
Author

Thanks for the help! Following your example got it working perfectly! My only other question before going at it again: is there a way to have temi listen for NLP without having to say "Hey temi" first (I'm guessing by setting something up in the manifest again)? And how do you suggest handling exceptions in an app so that temi doesn't exit when an unknown phrase is heard?
