Command line interface for the open source chatbot framework Rasa. Learn how to train, test, and run your machine learning-based conversational AI assistants.
The command line interface (CLI) gives you easy-to-remember commands for common tasks.
| Command | Effect |
|---|---|
| `rasa init` | Creates a new project with example training data, actions, and config files. |
| `rasa train` | Trains a model using your NLU data and stories; saves the trained model in `./models`. |
| `rasa interactive` | Starts an interactive learning session to create new training data by chatting. |
| `rasa shell` | Loads your trained model and lets you talk to your assistant on the command line. |
| `rasa run` | Starts a Rasa server with your trained model. See the configuring-http-api docs for details. |
| `rasa run actions` | Starts an action server using the Rasa SDK. |
| `rasa visualize` | Visualizes stories. |
| `rasa test` | Tests a trained Rasa model using your test NLU data and stories. |
| `rasa data split nlu` | Performs a split of your NLU data according to the specified percentages. |
| `rasa data convert nlu` | Converts NLU training data between different formats. |
| `rasa export` | Exports conversations from a tracker store to an event broker. |
| `rasa x` | Launches Rasa X locally. |
| `rasa -h` | Shows all available commands. |
A single command sets up a complete project for you with some example training data.
rasa init
This creates the following files:
.
├── __init__.py
├── actions.py
├── config.yml
├── credentials.yml
├── data
│ ├── nlu.md
│ └── stories.md
├── domain.yml
├── endpoints.yml
└── models
└── <timestamp>.tar.gz
The `rasa init` command will ask you if you want to train an initial model using this data. If you answer no, the `models` directory will be empty.
With this project setup, common commands are very easy to remember. To train a model, type `rasa train`; to talk to your model on the command line, type `rasa shell`; to test your model, type `rasa test`.
The main command is:
rasa train
This command trains a Rasa model that combines a Rasa NLU and a Rasa Core model. If you only want to train an NLU or a Core model, you can run `rasa train nlu` or `rasa train core`. However, Rasa will automatically skip training Core or NLU if the training data and config haven't changed.
`rasa train` will store the trained model in the directory defined by `--out`. By default, the model is named `<timestamp>.tar.gz`. If you want to name your model differently, you can specify the name using `--fixed-model-name`.
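As an illustration, the two flags can be combined (the model name `my-assistant` is a placeholder, not part of the docs):

```shell
# Train a combined NLU + Core model, write it to ./models, and give the
# archive a fixed name instead of the default <timestamp>.tar.gz.
rasa train --out models --fixed-model-name my-assistant
# The trained model ends up at models/my-assistant.tar.gz
```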
The following arguments can be used to configure the training process:
rasa train --help
Note
Make sure training data for Core and NLU are present when training a model using `rasa train`. If training data for only one model type is present, the command automatically falls back to `rasa train nlu` or `rasa train core` depending on the provided training files.
To start an interactive learning session with your assistant, run
rasa interactive
If you provide a trained model using the `--model` argument, the interactive learning process is started with the provided model. If no model is specified, `rasa interactive` will train a new Rasa model with the data located in `data/`, unless another directory was passed to the `--data` flag. After training the initial model, the interactive learning session starts. Training will be skipped if the training data and config haven't changed.
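For example, either of the following starts a session (the model archive name is a placeholder):

```shell
# Resume interactive learning with an already trained model:
rasa interactive --model models/my-assistant.tar.gz

# Or train first, reading training data from a specific directory:
rasa interactive --data data/
```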
The full list of arguments that can be set for `rasa interactive` is:
rasa interactive --help
To start a chat session with your assistant on the command line, run:
rasa shell
The model that should be used to interact with your bot can be specified with `--model`. If you start the shell with an NLU-only model, `rasa shell` allows you to obtain the intent and entities of any text you type on the command line. If your model includes a trained Core model, you can chat with your bot and see what the bot predicts as the next action. If you have trained a combined Rasa model but nevertheless want to see what your model extracts as intents and entities from text, you can use the command `rasa shell nlu`.
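Putting that together (the archive name is a placeholder):

```shell
# Chat with a specific trained model:
rasa shell --model models/my-assistant.tar.gz

# Only inspect the NLU predictions (intent and entities) of a combined model:
rasa shell nlu
```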
To increase the logging level for debugging, run:
rasa shell --debug
The full list of options for `rasa shell` is:
rasa shell --help
To start a server running your Rasa model, run:
rasa run
The following arguments can be used to configure your Rasa server:
rasa run --help
For more information on the additional parameters, see configuring-http-api. See the Rasa http-api docs for detailed documentation of all the endpoints.
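A typical invocation might look like this (a sketch assuming the default project layout created by `rasa init`):

```shell
# Serve the latest model from ./models on the default port 5005,
# exposing the HTTP API and the channels defined in credentials.yml:
rasa run --model models/ --enable-api --credentials credentials.yml
```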
To run your action server, run:
rasa run actions
The following arguments can be used to adapt the server settings:
rasa run actions --help
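For example (assuming the `actions` module created by `rasa init`):

```shell
# Start the action server on its default port, loading custom actions
# from the actions module in the project directory:
rasa run actions --actions actions --port 5055
```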
To open a browser tab with a graph showing your stories:
rasa visualize
By default, the training stories in the `data` directory are visualized. If your stories are located somewhere else, you can specify their location with `--stories`.
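For example (file paths are illustrative):

```shell
# Visualize stories from a specific file and write the resulting
# graph to an HTML file:
rasa visualize --stories data/stories.md --out graph.html
```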
Additional arguments are:
rasa visualize --help
To evaluate your model on test data, run:
rasa test
Specify the model to test using `--model`. Check out more details in nlu-evaluation and core-evaluation.
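For example (the archive name and test-data paths are placeholders):

```shell
# Evaluate a specific model against your test stories and NLU data:
rasa test --model models/my-assistant.tar.gz --stories data/stories.md --nlu data/nlu.md
```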
The following arguments are available for `rasa test`:
rasa test --help
To create a split of your NLU data, run:
rasa data split nlu
You can specify the training data, the fraction, and the output directory using the following arguments:
rasa data split nlu --help
This command will attempt to keep the proportions of intents the same in train and test. If you have NLG data for retrieval actions, it will be saved to separate files:
ls train_test_split
nlg_test_data.md test_data.json
nlg_training_data.md training_data.json
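For example, the split above could be produced with (the fraction and paths are illustrative):

```shell
# Split data/nlu.md into 80% training and 20% test data,
# writing both sets into the train_test_split/ directory:
rasa data split nlu --nlu data/nlu.md --training-fraction 0.8 --out train_test_split
```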
To convert NLU data from LUIS data format, WIT data format, Dialogflow data format, JSON, or Markdown to JSON or Markdown, run:
rasa data convert nlu
You can specify the input file, output file, and the output format with the following arguments:
rasa data convert nlu --help
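For example (input and output paths are illustrative):

```shell
# Convert Markdown training data to Rasa's JSON format:
rasa data convert nlu --data data/nlu.md --out data/nlu.json -f json
```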
To export events from a tracker store using an event broker, run:
rasa export
You can specify the location of the endpoints file, the minimum and maximum timestamps of events that should be published, as well as the conversation IDs that should be published.
rasa export --help
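For example (the timestamps and conversation IDs below are placeholder values):

```shell
# Publish only events between two Unix timestamps for two specific
# conversations, using the broker configured in endpoints.yml:
rasa export --endpoints endpoints.yml --minimum-timestamp 1500000000 --maximum-timestamp 1600000000 --conversation-ids user1,user2
```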
You can start Rasa X locally by executing
rasa x
Note
By default, Rasa X runs on port 5002. The `--rasa-x-port` argument allows you to change it to any other port.
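For example (8080 is just an illustrative port number):

```shell
# Run Rasa X on a non-default port:
rasa x --rasa-x-port 8080
```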
The following arguments are available for `rasa x`:
rasa x --help