Lauzcom Assistant

A genius customisable GPT-powered chatbot that can extract Swisscom's juiciest secrets faster than you can say "chocolate fondue" at a Swiss ski lodge.

Lauzcom Assistant is an interactive and user-friendly solution designed to provide seamless access to critical Swisscom data. By integrating powerful GPT models, customers can easily ask questions about public Swisscom data and receive accurate answers swiftly.

Say goodbye to time-consuming manual searches, and let Lauzcom Assistant revolutionise your customer interactions.

Authors

The Lauzcom Assistant project was created by:


Demo

[Demo video]

Architecture

[Architecture diagram]

Quick start

Note

Make sure you have Docker installed.

On macOS or Linux, run:

./setup.sh

The script installs all dependencies and lets you either download a model locally or use OpenAI. Lauzcom Assistant then runs at http://localhost:5173.

Otherwise, follow these steps:

  1. Clone this repository with git clone git@github.com:cern-lauzhack-2023/Lauzcom-Assistant.git.

  2. Create a .env file in the repository root. Set API_KEY to your OpenAI API key and VITE_API_STREAMING to true or false, depending on whether you want streamed answers:

    API_KEY=<YourOpenAIKey>
    VITE_API_STREAMING=true

    See optional environment variables in the /.env-template and /application/.env_sample files.

  3. Run ./run-with-docker-compose.sh.

  4. Lauzcom Assistant now runs at http://localhost:5173.

To stop, press Ctrl + C.
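If run-with-docker-compose.sh wraps a plain docker compose up (an assumption; check the script), the containers can be removed afterwards from the repository root with:

    docker compose down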

Development environment

Run Mongo and Redis

For development, only two of the containers from docker-compose.yaml are needed: Redis and Mongo. The file docker-compose-dev.yaml keeps just those two services.

Run:

docker compose -f docker-compose-dev.yaml build
docker compose -f docker-compose-dev.yaml up -d
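For reference, a minimal docker-compose-dev.yaml would look roughly like this; the image tags, ports, and volume name below are assumptions, so defer to the file in the repository:

    version: "3.9"
    services:
      redis:
        image: redis:7-alpine
        ports:
          - "6379:6379"    # default Redis port
      mongo:
        image: mongo:6
        ports:
          - "27017:27017"  # default MongoDB port
        volumes:
          - mongodb_data:/data/db
    volumes:
      mongodb_data: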

Run the backend

Note

Make sure you have Python 3.10 or 3.11 installed.

  1. Export the required environment variables or prepare a .env file in the /application folder:
    • Copy .env_sample to .env and fill in your OpenAI API token for the API_KEY and EMBEDDINGS_KEY fields.
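    For example, with the same OpenAI token in both fields:

      API_KEY=<YourOpenAIKey>
      EMBEDDINGS_KEY=<YourOpenAIKey>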

    (Check out application/core/settings.py for more config options.)

  2. (Optional) Create a Python virtual environment, following the official Python documentation on virtual environments.

    a) On Linux and macOS:

    python -m venv venv
    . venv/bin/activate

    b) On Windows:

    python -m venv venv
    venv\Scripts\activate
  3. Install dependencies for the backend:

    pip install -r application/requirements.txt
  4. Run the app using:

    flask --app application/app.py run --host=0.0.0.0 --port=7091

The backend API now runs at http://localhost:7091.
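To check that the API is up, query it from another terminal. The /api/answer route below is an assumption; see application/app.py for the actual routes:

    curl -X POST http://localhost:7091/api/answer \
      -H "Content-Type: application/json" \
      -d '{"question": "What services does Swisscom offer?"}'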

  5. Start the worker with:

    celery -A application.app.celery worker -l INFO
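The worker processes background jobs that the API queues, presumably through the Redis container started earlier; without it, those jobs never run.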

Run the frontend

Note

Make sure you have Node.js version 16 or higher installed.

  1. Navigate to the /frontend folder.
  2. Install the required packages husky and vite (skip if already installed):

    npm install husky -g
    npm install vite -g

  3. Install dependencies by running:

    npm install --include=dev

  4. Run the app using:

    npm run dev

The frontend now runs at http://localhost:5173.
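If the frontend uses Vite's standard npm scripts (an assumption; check frontend/package.json), a production bundle can be built and previewed with:

    npm run build
    npm run preview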

License

The source code license is MIT, as described in the LICENSE file.

Built with 🦜🔗 LangChain
