Merge pull request #5416 from RasaHQ/rasa-init-tests
`rasa init` should include tests by default
alwx committed Mar 19, 2020
2 parents f3a3fb7 + e3cfad6 commit 82e5298
Showing 9 changed files with 339 additions and 276 deletions.
1 change: 1 addition & 0 deletions changelog/5416.improvement.rst
@@ -0,0 +1 @@
Change ``rasa init`` to include a ``tests/conversation_tests.md`` file by default.
71 changes: 34 additions & 37 deletions docs/user-guide/evaluating-models.rst
@@ -16,6 +16,40 @@ Evaluating Models
If you are looking to tune the hyperparameters of your NLU model,
check out this `tutorial <https://blog.rasa.com/rasa-nlu-in-depth-part-3-hyperparameters/>`_.

.. _end_to_end_evaluation:

End-to-End Evaluation
---------------------

Rasa Open Source lets you evaluate dialogues end-to-end, running through
test conversations and making sure that both NLU and Core make correct predictions.

To do this, you need some stories in the end-to-end format,
which includes both the NLU output and the original text.
Here is an example:

.. code-block:: story

    ## end-to-end story 1
    * greet: hello
        - utter_ask_howcanhelp
    * inform: show me [chinese](cuisine) restaurants
        - utter_ask_location
    * inform: in [Paris](location)
        - utter_ask_price

By default, Rasa saves tests to ``tests/conversation_tests.md``. You can evaluate your model
against them by running:

.. code-block:: bash

    $ rasa test

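If your test conversations live in a different file, ``rasa test`` can also be pointed at
them explicitly with the ``--stories`` and ``--e2e`` flags; a brief sketch, where the file
path is only illustrative:

.. code-block:: bash

    # run end-to-end evaluation against a custom test stories file (example path)
    $ rasa test --stories tests/my_conversation_tests.md --e2e
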
.. note::

    Make sure your model file in ``models`` is a combined ``core``
    and ``nlu`` model. If it does not contain an NLU model, Core will use
    the default ``RegexInterpreter``.
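
For reference, a combined model is what the standard ``rasa train`` command produces
(as opposed to ``rasa train core`` or ``rasa train nlu`` on their own); a minimal sketch:

.. code-block:: bash

    # trains Core and NLU together and writes a single combined model to the models/ directory
    $ rasa train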

.. _nlu-evaluation:

@@ -227,40 +261,3 @@ you.
.. note::

    This training process can take a long time, so we'd suggest letting it run
    somewhere in the background where it can't be interrupted.


.. _end_to_end_evaluation:

End-to-End Evaluation
---------------------

Rasa lets you evaluate dialogues end-to-end, running through
test conversations and making sure that both NLU and Core make correct predictions.

To do this, you need some stories in the end-to-end format,
which includes both the NLU output and the original text.
Here is an example:

.. code-block:: story

    ## end-to-end story 1
    * greet: hello
        - utter_ask_howcanhelp
    * inform: show me [chinese](cuisine) restaurants
        - utter_ask_location
    * inform: in [Paris](location)
        - utter_ask_price

If you've saved end-to-end stories as a file called ``e2e_stories.md``,
you can evaluate your model against them by running:

.. code-block:: bash

    $ rasa test --stories e2e_stories.md --e2e

.. note::

    Make sure your model file in ``models`` is a combined ``core``
    and ``nlu`` model. If it does not contain an NLU model, Core will use
    the default ``RegexInterpreter``.
