This project has not been maintained since 2020. You'll find all the necessary microservices to build this project here:
- https://github.com/plattenschieber/GP_RASA_action-server
- https://github.com/plattenschieber/GP_RASA_core-model-server
- https://github.com/plattenschieber/GP_RASA_nlu
- https://github.com/plattenschieber/GP_RASA_example-chatbot
- https://github.com/plattenschieber/GP_RASA_kfz-chatbot
- https://github.com/plattenschieber/GP_RASA_rasa-ui
- https://github.com/plattenschieber/GP_RASA_webchat
This is the core project, which represents the rasai.ai core. It can be used as a base for domain-specific chatbots of different purposes; for this, the project offers a variety of start configurations.
- bamboo-specs describes the build pipeline for a Bamboo server
- config contains the endpoint configuration in the form of YAML files
- docker contains the compose files used to start the service
- src contains the different scripts which will be executed on container start-up (the starting point should always be "start_core.py")
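As a sketch, the endpoint configuration in config might follow the standard Rasa endpoints file format; the service names and URLs below are placeholders, not taken from this repository:

```yaml
# Hypothetical endpoints YAML sketch in the Rasa-style format;
# hostnames and ports are assumptions, not from this repo.
action_endpoint:
  url: "http://action-server:5055/webhook"   # custom action server
models:
  url: "http://core-model-server/models/dialogue"  # remote model server
  wait_time_between_pulls: 10                # seconds between model polls
```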
The server can be configured with a variety of environment variables:
- SOCKET_PORT Set the port on which the server will listen (Default: 5005)
- DIALOGUE_MODEL_DIR Set the path to the local model (Default: models/dialogue)
- ENABLE_DEBUG Set logging level to debug (Default: Info)
- ENDPOINTS_CONFIG_FILE Set path to endpoint configuration (Default: config/config/endpoints.prod.yaml)
- MODE Set the mode in which the server will start. Possible values are "local_server", "prod_server", "offline_trainer" and "online_trainer" (Default: prod_server)
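The variables above suggest how the start script might read its configuration. The following is only a hypothetical sketch of what start_core.py could do; variable names and defaults are taken from the list above, the code itself is an assumption:

```python
import os

# Hypothetical configuration loading; names and defaults mirror the
# environment variables documented above.
SOCKET_PORT = int(os.environ.get("SOCKET_PORT", "5005"))
DIALOGUE_MODEL_DIR = os.environ.get("DIALOGUE_MODEL_DIR", "models/dialogue")
DEBUG = os.environ.get("ENABLE_DEBUG", "").lower() in ("1", "true", "yes")
ENDPOINTS_CONFIG_FILE = os.environ.get(
    "ENDPOINTS_CONFIG_FILE", "config/config/endpoints.prod.yaml")
MODE = os.environ.get("MODE", "prod_server")

# Reject unknown modes early instead of failing later at runtime.
VALID_MODES = {"local_server", "prod_server", "offline_trainer", "online_trainer"}
if MODE not in VALID_MODES:
    raise ValueError(f"Unknown MODE {MODE!r}, expected one of {sorted(VALID_MODES)}")
```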
To use the system locally, the requirements must be installed and the different environment variables should be set for local use.

```shell
pip install -r requirements.txt
```
Then start the start_core script:

```shell
python ./src/start_core.py
```
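For example, a local development session might set the relevant variables before launching. This is only a sketch; the variable names come from the list above, the chosen values are assumptions:

```shell
# Hypothetical local setup (variable names from the list above):
export MODE=local_server          # run against a local model
export ENABLE_DEBUG=true          # verbose logging while developing
export DIALOGUE_MODEL_DIR=models/dialogue

# then launch the core as before:
#   python ./src/start_core.py
```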
You can run the build with the following command. We tag the image with our Docker registry URL and the project's name:

```shell
docker build -t docker.nexus.gpchatbot.archi-lab.io/chatbot/core .
```
In order to run the core project you will need to create a Docker network called 'chatbot'. You can do this by running the following command:

```shell
docker network create chatbot
```
The project can be started in different modes for different purposes; this should mostly happen in the domain-specific projects. Therefore, only starting the production server is explained here:
- prod - in production mode the bot fetches a model from an external server and exposes an API to interact with. You can start the bot with the following command:

  ```shell
  docker-compose -f docker/docker-compose.yaml -f docker/docker-compose.prod.yaml up -d
  ```
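As a sketch, the production override file might wire the container into the chatbot network and select the production mode. The service name and all contents below are assumptions, not taken from this repository; only the image tag and network name come from the commands above:

```yaml
# Hypothetical docker-compose.prod.yaml sketch; the image tag and
# network name mirror the commands above, everything else is assumed.
version: "3"
services:
  core:
    image: docker.nexus.gpchatbot.archi-lab.io/chatbot/core
    environment:
      MODE: prod_server
      ENDPOINTS_CONFIG_FILE: config/config/endpoints.prod.yaml
    ports:
      - "5005:5005"
    networks:
      - chatbot
networks:
  chatbot:
    external: true   # created beforehand via 'docker network create chatbot'
```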