tamashin edited this page May 8, 2017 · 18 revisions

About the project

This project provides a step-by-step walkthrough to help you build a hands-free Alexa Voice Service (AVS) prototype in 60 minutes, using wake word engines from Sensory or KITT.AI. In addition to pushing a button to "start listening", you can now also just say the wake word "Alexa", much like the Amazon Echo. Step-by-step instructions are also available for setting up the hands-free prototype on a Raspberry Pi 2 with the Conexant AudioSmart 4-Mic Development Kit for Amazon AVS.

NEW! - Click here for instructions to build the AVS Prototype using a Raspberry Pi and the Conexant 4-Mic Development Kit for Amazon AVS

Click here for instructions to build the AVS Prototype using a Raspberry Pi and the Conexant 2-Mic Development Kit for Amazon AVS


What is AVS?

Alexa Voice Service (AVS) is Amazon’s intelligent voice recognition and natural language understanding service that allows you as a developer to voice-enable any connected device that has a microphone and speaker.


What's new?

May 4th, 2017:

Updates

  • The Conexant 4-Mic Development Kit is now available. Click here
  • The companion service persists refresh tokens between restarts. This means you won't have to authenticate each time you bring up the sample app. Read about the update on the Alexa Blog ».
  • The Listen button has been replaced with a microphone icon.
  • The sample app uses new Alexa wake word models from KITT.ai.
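
The refresh-token persistence described above can be sketched as a small shell routine. This is only an illustration of the idea, not the companion service's actual code; the file path and token value are placeholders.

```shell
# Hypothetical token store. The real companion service manages its own storage.
TOKEN_FILE="$HOME/.avs_refresh_token"

save_refresh_token() {
    # Restrict the file to the current user before writing the secret.
    umask 077
    printf '%s' "$1" > "$TOKEN_FILE"
}

load_refresh_token() {
    # If a token was saved on a previous run, reuse it instead of
    # forcing the user to authenticate again.
    [ -f "$TOKEN_FILE" ] && cat "$TOKEN_FILE"
}

# Example: persist a (placeholder) refresh token, then read it back.
save_refresh_token "Atzr|example-refresh-token"
load_refresh_token
```

Because the token survives restarts, the sample app can exchange it for a fresh access token on startup rather than prompting for login each time.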

Maintenance

  • The ALPN version has been updated in pom.xml.
  • Automated install no longer requires user intervention to update certificates.

Bug Fixes

  • The sample app ensures that the downchannel stream is established before sending the initial SynchronizeState event. This adheres to the guidance provided in Managing an HTTP/2 Connection with AVS.
  • Locale strings in the sample app user interface have been updated to match the values in config.json.
  • Fixed a bug that caused no audio output volume on Linux.
  • WiringPi is now installed as part of automated_install.sh.
  • Fixed a bug that caused 100% CPU utilization.
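
The downchannel fix above reflects a general ordering requirement of the AVS protocol: open the long-lived downchannel stream before sending the initial SynchronizeState event. The sketch below only builds the two requests as strings to show that order; the endpoint and token are placeholder assumptions, and the sample app itself implements this in Java.

```shell
# Assumed North America endpoint and a placeholder access token.
AVS_ENDPOINT="https://avs-alexa-na.amazon.com/v20160207"
ACCESS_TOKEN="example-access-token"

# Step 1: the downchannel is a long-lived GET on /directives,
# which must be established first.
DOWNCHANNEL_CMD="curl --http2 -H 'Authorization: Bearer $ACCESS_TOKEN' $AVS_ENDPOINT/directives"

# Step 2: only then is the initial SynchronizeState event posted to /events.
SYNC_EVENT='{"context":[],"event":{"header":{"namespace":"System","name":"SynchronizeState","messageId":"msg-1"},"payload":{}}}'
SYNC_CMD="curl --http2 -H 'Authorization: Bearer $ACCESS_TOKEN' -F \"metadata=$SYNC_EVENT\" $AVS_ENDPOINT/events"

# A real client would run the downchannel request in the background, wait for
# the stream to be established, and only then issue the events request.
echo "$DOWNCHANNEL_CMD"
echo "$SYNC_CMD"
```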

Known Issues


Important considerations

  • Review the AVS Terms & Agreements.

  • The earcons associated with the sample project are for prototyping purposes only. For implementation and design guidance for commercial products, please see Designing for AVS and AVS UX Guidelines.

  • Usage of Sensory & KITT.AI wake word engines: The wake word engines included with this project (Sensory and KITT.AI) are intended for prototyping purposes only. If you are building a commercial product with either solution, please use the contact information below to inquire about commercial licensing.

  • IMPORTANT: The Sensory wake word engine included with this project is time-limited: code linked against it will stop working when the library expires. The library included in this repository will, at all times, have an expiration date that is at least 120 days in the future. See Sensory's GitHub page for more information on how to renew the license for non-commercial use.


Get started

You can set up this project on the following platforms. Please choose the platform you'd like to set it up on:


Contribute

  • Want to report a bug or request an update to the documentation? See CONTRIBUTING.md.
  • Have questions or need help building the sample app? Open a new issue.