Basic chatbot example using the open-source Rasa Stack (Rasa NLU and Rasa Core). Rasa is written in Python and allows developers to expand chatbots and voice assistants beyond answering simple questions: by enabling state-of-the-art machine learning models, your bots can hold contextual conversations with users.
This repository is based on the official Rasa quickstart guide, and also introduces the use of Ngrok to locally test your chatbot's interaction with Facebook Messenger.
- Python 3.6
- pip
- conda
- Ngrok (Website)
- Rasa Core and Rasa NLU. Follow the official installation instructions. Alternatively, you can just run the following in a terminal:
~ $ conda create --name rasa --file environment.yml
~ $ source activate rasa
(rasa) $ pip install -r requirements.txt
Follow the instructions below to train and test the chatbot. The configuration files required to set up your bot are:
Rasa Core
- stories.md: Rasa Core works by learning from example conversations. A story starts with `##` followed by an optional name. Lines that start with `*` are messages sent by the user: you don't write the actual message, but rather the intent (and the entities) that represent what the user means. Lines that start with `-` are actions taken by your bot. In general an action can do anything, including calling an API and interacting with the outside world. Find more info about the format here.
- domain.yml: The domain defines the universe your bot lives in (see Domain format):
Key | Description |
---|---|
actions | things your bot can do and say |
templates | template strings for the things your bot can say |
intents | things you expect users to say |
entities | pieces of info you want to extract from messages |
slots | information to keep track of during a conversation |
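As an illustration, a minimal stories.md entry and the matching domain.yml pieces could look like the following (the greet intent, the utter_greet action, and the template text are hypothetical, not part of this repository):

```md
## greet path
* greet
  - utter_greet
```

```yaml
intents:
  - greet

actions:
  - utter_greet

templates:
  utter_greet:
    - text: "Hello! How can I help you?"
```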
Rasa NLU
- nlu.md: defines the user messages the bot should be able to handle. It is suggested to define at least five example utterances per intent.
- nlu_config.yml: configuration of the Rasa NLU pipeline and its parameters
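For example, a hypothetical greet intent could be defined in nlu.md with a handful of example utterances:

```md
## intent:greet
- hi
- hello
- hey there
- good morning
- good evening
```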
(rasa) $ python -m rasa_core.train -d domain.yml -s stories.md -o models/dialogue --endpoints endpoints.yml --epochs 100 --history 3
Here, you can also specify the `--nlu-threshold` parameter: the fallback action (or `utter_default`) will be executed when the intent recognition confidence is below the given threshold (a value between 0 and 1).
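The same fallback behaviour can alternatively be configured through a FallbackPolicy in the policy configuration, assuming the FallbackPolicy available in recent rasa_core versions (the threshold values below are illustrative):

```yaml
policies:
  - name: FallbackPolicy
    nlu_threshold: 0.3
    core_threshold: 0.3
    fallback_action_name: "utter_default"
```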
(rasa) $ python -m rasa_nlu.train -c nlu_config.yml --data nlu.md -o models --fixed_model_name nlu --project current --verbose
The story graph shows an overview of the conversational paths defined in stories.md. To re-generate the graph, see the documentation.
(rasa) $ python -m rasa_core_sdk.endpoint --actions actions
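The custom action server started above listens on port 5055 by default; the endpoints.yml referenced during training can point Rasa Core at it with an entry along these lines (host and port are the defaults, adjust if yours differ):

```yaml
action_endpoint:
  url: "http://localhost:5055/webhook"
```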
(rasa) $ python -m rasa_core.run -d models/dialogue -u models/current/nlu
- First of all, you need to set up a Facebook app and a page. To create the app go to: https://developers.facebook.com/ and click on “Add a new app”.
- Go to the Basic Settings of the app and copy the ${APP_SECRET} value.
- Go onto the dashboard for the app and under Products, click Add Product and add Messenger.
- Under the settings for Messenger, scroll down to Token Generation and click on the link to create a new page for your app.
- Copy the generated `${ACCESS_TOKEN}`.
- On a separate terminal, start an ngrok tunnel with this command:
~ $ ngrok http 5055
- Copy the public `${URL}` from the ngrok dashboard.
- Go onto the dashboard for the app and under Products, click Add Product and add Webhooks.
- Under Webhooks settings, select `Page` and then click on `Subscribe to this object`.
- Set the Callback URL & `${VERIFY_TOKEN}` properties to `${URL}/webhooks/facebook/webhook` and `my-rasa-bot`, respectively.
- Configure the fb_credentials.yml file with the `${VERIFY_TOKEN}`, `${APP_SECRET}` and `${ACCESS_TOKEN}` values you collected.
- Deploy your chatbot locally by running the following command with the `--credentials` parameter (don't forget to deploy your custom actions):
(rasa) $ python -m rasa_core.run -d models/dialogue -u models/current/nlu --port 5002 --connector facebook --credentials credentials.yml
- Now you can test your chatbot by sending messages to your Page through FB Messenger.
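For reference, the Facebook credentials file would then contain entries of the following shape (the values shown are placeholders for the tokens you collected above):

```yaml
facebook:
  verify: "my-rasa-bot"                  # must match the ${VERIFY_TOKEN} set in the webhook settings
  secret: "${APP_SECRET}"                # from the app's Basic Settings
  page-access-token: "${ACCESS_TOKEN}"   # from Messenger's Token Generation
```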
Interactive Learning is a powerful way to explore what your bot can do, and the easiest way to fix any mistakes it makes, while covering different possible scenarios that were not taken into account when defining your chatbot domain & stories.
Interactive learning can be started using the following command:
(rasa) $ python -m rasa_core.train -d domain.yml -s stories.md -o models/dialogue --endpoints endpoints.yml --online
It will enable an interactive prompt where you can train different intents and scenarios. However, the current rasa_core version (0.11.1) only allows you to export the last conversation context per interactive session. An enhancement issue has been opened here, so in the near future you will be able to export the multiple conversation contexts you used during interactive learning.
Another option is to train your bot in interactive mode during runtime. To do so, you need to start your bot using the `--enable_api` parameter as follows:
(rasa) $ python -m rasa_core.run -d models/dialogue -u models/current/nlu --endpoints endpoints.yml --debug --enable_api
It will enable the following resources:
Action | Method | Resource |
---|---|---|
hello | GET+OPTIONS+HEAD | / |
list_trackers | GET+OPTIONS+HEAD | /conversations |
execute_action | POST+OPTIONS | /conversations/[sender_id]/execute |
log_message | POST+OPTIONS | /conversations/[sender_id]/messages |
predict | POST+OPTIONS | /conversations/[sender_id]/predict |
respond | GET+OPTIONS+POST+HEAD | /conversations/[sender_id]/respond |
retrieve_tracker | GET+OPTIONS+HEAD | /conversations/[sender_id]/tracker |
replace_events | PUT+OPTIONS | /conversations/[sender_id]/tracker/events |
get_domain | GET+OPTIONS+HEAD | /domain |
continue_training | POST+OPTIONS | /finetune |
load_model | POST+OPTIONS | /model |
tracker_predict | POST+OPTIONS | /predict |
static | GET+OPTIONS+HEAD | /static/[filename] |
status | GET+OPTIONS+HEAD | /status |
version | GET+OPTIONS+HEAD | /version |
custom_webhook.health | GET+OPTIONS+HEAD | /webhooks/rest/ |
custom_webhook.receive | POST+OPTIONS | /webhooks/rest/webhook |
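For instance, with the server API enabled you could exercise the REST input channel with a plain curl call (the port is rasa_core's default 5005, and the sender id is illustrative):

```shell
# send a user message to the REST input channel;
# the response is a JSON list of the bot's reply messages
curl -X POST http://localhost:5005/webhooks/rest/webhook \
  -H "Content-Type: application/json" \
  -d '{"sender": "test_user", "message": "hello"}'
```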
Licensed under the Apache License, Version 2.0. See LICENSE