
This repository contains the Contingent Plan Executor for a declarative specification of a dialogue agent.


hovor-execution-monitor

This repository contains the logic of the dialogue planner. It is deployed as a Flask Python application with an SQLAlchemy database that stores the solutions generated by the planner.

1. Running the app

Local Run

To avoid potential TensorFlow configuration errors (and to allow for GPU support), install TensorFlow using their tutorial. If you do so, run the following steps within the created conda environment. Otherwise, we still recommend using a virtual environment such as conda or venv.

Install the dependencies listed in the requirements.txt file to be able to run the app locally.

pip install -r requirements.txt
python -m spacy download en_core_web_md

a. CLI (no API endpoints; only use for quick testing)

Run the following in the terminal. You can replace local_data/updated_gold_standard_bot with the path to any directory populated with valid plan4dial generated files.

python contingent_plan_executor/local_main.py local_data/updated_gold_standard_bot

b. With API endpoints

Run the following in the terminal. You can replace local_data/updated_gold_standard_bot with the path to any directory populated with valid plan4dial generated files.

python contingent_plan_executor/app.py local_data/updated_gold_standard_bot

Docker Run (Recommended)

With Docker installed, run this command in the terminal to build the image:

docker build -t hovor:latest .

Once the image is built, you have two options for running the container:

a. In-Memory/Unmounted Run:

Use this version if you want conversation data to be deleted once the container is removed. (This is ideal for testing.)

docker run -it --rm -p 5000:5000 -d hovor:latest local_data/updated_gold_standard_bot

b. Persistent/Mounted Run:

Use this version to persist conversation data.

docker run -it --rm -p 5000:5000 -d -v convo_data:/data hovor:latest local_data/updated_gold_standard_bot

2. Rundown of API Endpoints

Below are the various sub-endpoints, the services they provide, and the required input/output for each. Note that for any endpoint, an unsuccessful call will have an "error" status; check "msg" to see what went wrong.
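This error convention can be handled with a small client-side helper. A minimal sketch; note the field name "status" is an assumption, since the docs only say an unsuccessful call carries an "error" status:

```python
def raise_on_error(response):
    """Raise if an API response reports failure.

    Assumes the status is carried in a "status" field; adjust to
    the actual response shape of your deployment.
    """
    if response.get("status") == "error":
        raise RuntimeError(f'API call failed: {response.get("msg")}')
    return response
```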

/new-conversation

GET:

Begins a new conversation. Returns the agent's message(s) under "msg" and the user's id under "user_id". Be sure to store the "user_id" so you can load your conversation later!

POST:

parameters:
  • "user_id": the user_id that identifies your save slot

Begins a new conversation, but overwrites the existing user save slot if it exists. Returns the same as the GET request, but keeps the same "user_id".

/new-message

POST:

parameters:
  • "user_id": the user_id that identifies your save slot
  • "msg": the message you want to send to the agent

Sends a message to the agent speaking to the given "user_id". Returns the agent's message(s) under "msg". Other diagnostic information like "confidence" is also returned.

/load-conversation

POST:

parameters:
  • "user_id": the user_id that identifies your save slot

Loads the conversation of the given "user_id" in its most recent save state. Returns the agent's message(s) under "msg". Call the new-message endpoint to continue the conversation.

View your app at http://localhost:5000

Send new messages with curl, e.g.:

curl -d '{"user_id":"haz", "msg":"I want to go to Toronto"}' -H "Content-Type: application/json" -X POST http://localhost:5000/new-message
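The same call can be made from Python using only the standard library. A sketch, assuming the app is serving on localhost:5000 as above; the live network call is left commented out so the snippet stands alone:

```python
import json
import urllib.request

BASE = "http://localhost:5000"  # assumes a local run as described above

def build_message_payload(user_id, msg):
    """Build the JSON body expected by /new-message."""
    return {"user_id": user_id, "msg": msg}

def post_json(endpoint, payload):
    """POST a JSON payload and return the parsed JSON response."""
    req = urllib.request.Request(
        BASE + endpoint,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# With the server running:
# reply = post_json("/new-message", build_message_payload("haz", "I want to go to Toronto"))
# print(reply["msg"])
```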

Deploying to our Embeddable Web Interface, WIDGET

Once you have a server running (local or otherwise) see here.

Simulation and Evaluation

This allows chatbot designers to simulate conversations and analyze problematic ones.

Setup for Simulation and Evaluation

To run simulation and evaluation, some libraries will be needed in addition to the normal libraries for hovor. In my experience, the best way to do this is with conda to manage your environments, but the same package list should work for pip approaches.

To create a working env with conda, follow these steps:

  1. conda create --name hovor-sim python=3.8.15
  2. conda activate hovor-sim
  3. conda install pip
  4. pip install -r requirements_sim.txt
  5. python -m spacy download en_core_web_md

You can see the packages that will be installed in the file requirements_sim.txt.
If you have difficulties with package versions, you can view the text file conda_list.txt which contains the output of conda list for my working conda environment on ubuntu. You can likely specify these package versions in the requirements file to fix any conflicts.

Run Simulation and Evaluation

The best way to apply simulation and evaluation is to use our user interface. With your simulation environment activated, run the command: streamlit run local_simulate_evaluate_streamlit.py. Once the paths are correct, it will run these processes.

Development

If you want to make your own outcome determiner, start by looking at the DefaultSystemOutcomeDeterminer. You will need to return a list of tuples that each hold an outcome group and a confidence, as well as update the context with updated variable values (if any). Finally, you will need to specify the conditions for your action to run with this function and this function.
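As a rough sketch of the shape such a determiner takes — the class and method names below are illustrative, not the actual hovor interface; consult DefaultSystemOutcomeDeterminer in the source for the real signatures:

```python
class KeywordOutcomeDeterminer:
    """Illustrative outcome determiner: ranks outcome groups by keyword match.

    Hypothetical interface; the real base class and hook names live in
    the hovor source alongside DefaultSystemOutcomeDeterminer.
    """

    def __init__(self, keyword_to_outcome):
        # Map of keyword -> outcome group name, e.g. {"toronto": "dest_set"}
        self.keyword_to_outcome = keyword_to_outcome

    def rank_groups(self, outcome_groups, user_message, context):
        """Return a list of (outcome_group, confidence) tuples and update
        the context with any extracted variable values."""
        ranked = []
        for group in outcome_groups:
            matched = [kw for kw, name in self.keyword_to_outcome.items()
                       if name == group and kw in user_message.lower()]
            confidence = 1.0 if matched else 0.0
            if matched:
                context[group] = matched[0]  # record the extracted value
            ranked.append((group, confidence))
        # Highest-confidence outcome first
        return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```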
