Natural Language Understanding benchmark

This repository contains the results of three benchmarks comparing natural language understanding (NLU) services:

  1. built-in intents (Apple's SiriKit, Amazon's Alexa, Microsoft's Luis, Google's API.ai, and Snips.ai) on a selection of various intents. This benchmark was performed in December 2016, and its results are described at length in the following post.
  2. custom intent engines (Google's API.ai, Facebook's Wit, Microsoft's Luis, Amazon's Alexa, and Snips' NLU) on seven chosen intents. This benchmark was performed in June 2017, and its results are described in a paper and a blog post.
  3. an extension of Braun et al., 2017 (Google's API.AI, Microsoft's Luis, IBM's Watson, and Rasa). This experiment replicates the analysis made by Braun et al., 2017, published as "Evaluating Natural Language Understanding Services for Conversational Question Answering Systems" in the SIGDIAL 2017 proceedings. Snips and Rasa are added. Details are available in a paper and a blog post.

The data is provided for each benchmark, and more details about the methods are available in the README file of each folder.
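As an illustration, here is a minimal sketch of how the annotated utterances could be parsed, assuming the JSON schema used by the custom-intent-engines datasets: each intent maps to a list of utterances, and each utterance is a list of text chunks, where slot-filling chunks additionally carry `entity` and `slot_name` keys. The sample record below is an assumption for illustration, not taken verbatim from the repository.

```python
# A hand-built record in the assumed schema of the
# 2017-06-custom-intent-engines datasets: plain chunks carry only
# "text", annotated chunks also carry "entity" and "slot_name".
sample = {
    "AddToPlaylist": [
        {
            "data": [
                {"text": "Add "},
                {"text": "Kids in the Street",
                 "entity": "entity_name", "slot_name": "entity_name"},
                {"text": " to my "},
                {"text": "road trip",
                 "entity": "playlist", "slot_name": "playlist"},
                {"text": " playlist"},
            ]
        }
    ]
}

def utterance_text(utterance):
    """Join the text chunks back into the full query string."""
    return "".join(chunk["text"] for chunk in utterance["data"])

def slots(utterance):
    """Extract (slot_name, value) pairs from the annotated chunks."""
    return [(c["slot_name"], c["text"])
            for c in utterance["data"] if "slot_name" in c]

for intent, utterances in sample.items():
    for u in utterances:
        print(intent, "->", utterance_text(u), slots(u))
```

Reconstructing the raw query and its slot annotations this way is typically the first step before feeding the data to an NLU engine for training or evaluation.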

Any publication based on these datasets must include a full citation of the following paper, in which the results were published by Snips:

"Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces"