Snips NLU


Snips NLU (Natural Language Understanding) is a Python library that allows you to parse sentences written in natural language and extract structured information.


What is Snips NLU about?

Behind every chatbot and voice assistant lies a common piece of technology: Natural Language Understanding (NLU). Anytime a user interacts with an AI using natural language, their words need to be translated into a machine-readable description of what they meant.

The NLU engine first detects what the intention of the user is (a.k.a. intent), then extracts the parameters (called slots) of the query. The developer can then use this to determine the appropriate action or response.

Let’s take an example to illustrate this, and consider the following sentence:

"What will be the weather in paris at 9pm?"

Properly trained, the Snips NLU engine will be able to extract structured data such as:

{
   "intent": {
      "intentName": "searchWeatherForecast",
      "probability": 0.95
   },
   "slots": [
      {
         "value": "paris",
         "entity": "locality",
         "slotName": "forecast_locality"
      },
      {
         "value": {
            "kind": "InstantTime",
            "value": "2018-02-08 20:00:00 +00:00"
         },
         "entity": "snips/datetime",
         "slotName": "forecast_start_datetime"
      }
   ]
}

In this case, the identified intent is searchWeatherForecast, and two slots were extracted: a locality and a datetime. As you can see, Snips NLU performs an extra step on top of extracting entities: it resolves them. The extracted datetime value has been converted into a handy ISO format.
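To show how a developer might act on this output, here is a minimal sketch that routes the parse result above (represented as a plain dict) to an action handler. The handler and its return string are illustrative, not part of the Snips NLU API.

```python
# The parse result shown above, as a plain Python dict.
parsing = {
    "intent": {"intentName": "searchWeatherForecast", "probability": 0.95},
    "slots": [
        {"value": "paris", "entity": "locality",
         "slotName": "forecast_locality"},
        {"value": {"kind": "InstantTime",
                   "value": "2018-02-08 20:00:00 +00:00"},
         "entity": "snips/datetime",
         "slotName": "forecast_start_datetime"},
    ],
}

def handle_weather_forecast(slots):
    """Illustrative handler: pick out the locality and start time slots."""
    by_name = {s["slotName"]: s for s in slots}
    locality = by_name["forecast_locality"]["value"]
    start = by_name["forecast_start_datetime"]["value"]["value"]
    return "Forecast for {} at {}".format(locality, start)

# Dispatch table keyed by intent name.
handlers = {"searchWeatherForecast": handle_weather_forecast}

intent_name = parsing["intent"]["intentName"]
print(handlers[intent_name](parsing["slots"]))
# -> Forecast for paris at 2018-02-08 20:00:00 +00:00
```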

Check out our blog post to get more details about why we built Snips NLU and how it works under the hood. We also published a paper on arxiv, presenting the machine learning architecture of the Snips Voice Platform.

Getting Started

System requirements

  • Python 2.7 or Python >= 3.5
  • RAM: Snips NLU will typically use between 100MB and 200MB of RAM, depending on the language and the size of the dataset.

Installation

pip install snips-nlu

We currently have pre-built binaries (wheels) for snips-nlu and its dependencies for macOS (10.11 and later), Linux x86_64, and Windows.

For any other architecture/OS, snips-nlu can be installed from the source distribution. To do so, Rust and setuptools_rust must be installed before running the pip install snips-nlu command.

Language resources

Snips NLU relies on external language resources that must be downloaded before the library can be used. You can fetch resources for a specific language by running the following command:

python -m snips_nlu download en

Or simply:

snips-nlu download en

The list of supported languages is available at this address.

API Usage

Command Line Interface

The easiest way to test the abilities of this library is through the command line interface.

First, start by training the NLU with one of the sample datasets:

snips-nlu train path/to/dataset.json path/to/output_trained_engine

Where path/to/dataset.json is the path to the dataset which will be used during training, and path/to/output_trained_engine is the location where the trained engine should be persisted once the training is done.

After that, you can start parsing sentences interactively by running:

snips-nlu parse path/to/trained_engine

Where path/to/trained_engine corresponds to the location where you have stored the trained engine during the previous step.

Sample code

Here is some sample code that you can run on your machine after installing snips-nlu, fetching the English resources, and downloading one of the sample datasets:

>>> from __future__ import unicode_literals, print_function
>>> import io
>>> import json
>>> from snips_nlu import SnipsNLUEngine
>>> from snips_nlu.default_configs import CONFIG_EN
>>> with io.open("sample_datasets/lights_dataset.json") as f:
...     sample_dataset = json.load(f)
>>> nlu_engine = SnipsNLUEngine(config=CONFIG_EN)
>>> nlu_engine = nlu_engine.fit(sample_dataset)
>>> text = "Please turn the light on in the kitchen"
>>> parsing = nlu_engine.parse(text)
>>> parsing["intent"]["intentName"]
'turnLightOn'

This code trains an NLU engine on a sample smart lights dataset, then uses it to parse a smart lights query.

Sample datasets

Here is a list of some datasets that can be used to train a Snips NLU engine:

  • Lights dataset: "Turn on the lights in the kitchen", "Set the light to red in the bedroom"
  • Beverage dataset: "Prepare two cups of cappuccino", "Make me a cup of tea"
  • Flights dataset: "Book me a flight to go to boston this weekend", "book me some tickets from istanbul to moscow in three days"
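For reference, a training dataset is a JSON document describing intents, their example utterances, and entities. The sketch below is a hypothetical minimal lights dataset; the field names follow the snips-nlu dataset format, but consult the sample datasets above for the authoritative layout. In annotated utterances, chunks carrying "entity" and "slot_name" keys become slots, while plain chunks are literal text.

```python
import json

# Hypothetical minimal dataset: one intent with one annotated utterance,
# and one custom entity ("room") with a single value and a synonym.
dataset = {
    "language": "en",
    "intents": {
        "turnLightOn": {
            "utterances": [
                {"data": [
                    {"text": "Turn on the lights in the "},
                    {"text": "kitchen", "entity": "room",
                     "slot_name": "room"},
                ]}
            ]
        }
    },
    "entities": {
        "room": {
            "data": [{"value": "kitchen", "synonyms": ["cooking room"]}],
            "use_synonyms": True,
            "automatically_extensible": True,
        }
    },
}

# The full utterance text is the concatenation of its chunks.
utterance = dataset["intents"]["turnLightOn"]["utterances"][0]
full_text = "".join(chunk["text"] for chunk in utterance["data"])
print(full_text)
# -> Turn on the lights in the kitchen

# This dict serializes directly to a dataset.json file for training.
print(json.dumps(dataset)[:20])
```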

Benchmarks

In January 2018, we reproduced an academic benchmark that was published during the summer of 2017. In this benchmark, the authors assessed the performance of API.ai (now Dialogflow, Google), Luis.ai (Microsoft), IBM Watson, and Rasa NLU. For fairness, we used an updated version of Rasa NLU and compared it to the latest version of Snips NLU (both in dark blue).

[Figure (.img/benchmarks.png): benchmark results per NLU provider]

In the figure above, F1 scores of both intent classification and slot filling were computed for several NLU providers, and averaged across the three datasets used in the academic benchmark mentioned above. All the underlying results can be found here.

Documentation

To find out how to use Snips NLU, please refer to the package documentation; it provides a step-by-step guide on how to set up and use this library.

Citing Snips NLU

Please cite the following paper when using Snips NLU:

@article{coucke2018snips,
  title   = {Snips Voice Platform: an embedded Spoken Language Understanding system for private-by-design voice interfaces},
  author  = {Coucke, Alice and Saade, Alaa and Ball, Adrien and Bluche, Th{\'e}odore and Caulier, Alexandre and Leroy, David and Doumouro, Cl{\'e}ment and Gisselbrecht, Thibault and Caltagirone, Francesco and Lavril, Thibaut and others},
  journal = {arXiv preprint arXiv:1805.10190},
  pages   = {12--16},
  year    = {2018}
}

FAQ & Community

Please join the forum to ask your questions and get feedback from the community.


How do I contribute?

Please see the Contribution Guidelines.

Licence

This library is provided by Snips as Open Source software. See LICENSE for more information.

Geonames Licence

The snips/city, snips/country and snips/region builtin entities rely on software from Geonames, which is made available under a Creative Commons Attribution 4.0 International license. For the license and warranties for Geonames, please refer to: https://creativecommons.org/licenses/by/4.0/legalcode.
