Question answering system built with vector dbs and LLMs.
Parts of the implementation are based on michaelliao/llm-embedding-sample.
- Python 3.10
- Docker
- Clone this repo
- Install requirements with `pip install -r requirements.txt`
- Startup PostgreSQL with Docker:

  ```bash
  docker run -d \
    --rm \
    --name pgvector \
    -p 5432:5432 \
    -e POSTGRES_PASSWORD=password \
    -e POSTGRES_USER=postgres \
    -e POSTGRES_DB=postgres \
    -e PGDATA=/var/lib/postgresql/data/pgdata \
    -v /path/to/llm-embedding-qa/pg-data:/var/lib/postgresql/data \
    -v /path/to/llm-embedding-qa/pg-init-script:/docker-entrypoint-initdb.d \
    ankane/pgvector:latest
  ```

  NOTE: replace `/path/to/...` with the real path.
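The scripts mounted into `docker-entrypoint-initdb.d` run on the database's first startup. A minimal sketch of what such an init script could contain, assuming pgvector is used for storage (the table name and embedding dimension here are illustrative guesses, not the repo's actual schema — check the files in `pg-init-script`):

```sql
-- Hypothetical init script: enable the pgvector extension
-- and create a table for document chunks and their embeddings.
CREATE EXTENSION IF NOT EXISTS vector;

-- 1536 matches the output dimension of OpenAI's
-- text-embedding-ada-002 model; adjust for other models.
CREATE TABLE IF NOT EXISTS doc_embeddings (
    id SERIAL PRIMARY KEY,
    content TEXT NOT NULL,
    embedding vector(1536)
);
```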
- Run `python main.py`, then edit `config.yaml` to set your OpenAI `api_key`.
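A guess at the relevant part of `config.yaml` (only `api_key` is confirmed by this README; any other keys and the exact layout are assumptions — check the file the first run generates):

```yaml
# Hypothetical layout; the repo's actual config.yaml may differ.
api_key: "sk-your-key-here"   # your OpenAI API key
```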
- Put your markdown-format documents in the `docs` folder.
- The wiki files of QChatGPT are provided as examples in the `docs_examples` folder.
- Run `python main.py` again; it will automatically build the vector database and start the server.
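Conceptually, the build step embeds each markdown chunk and stores the vectors, and answering a question retrieves the chunks nearest to the question's embedding. The real project uses pgvector and OpenAI embeddings; the toy 3-dimensional vectors and function names below are only a pure-Python sketch of the retrieval idea:

```python
import math

def cosine_similarity(a, b):
    # Dot product divided by the product of the vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, stored, k=2):
    # stored: list of (chunk_text, embedding) pairs,
    # standing in for rows of the vector database.
    ranked = sorted(
        stored,
        key=lambda item: cosine_similarity(query_vec, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]

docs = [
    ("install guide", [1.0, 0.0, 0.1]),
    ("plugin list",   [0.0, 1.0, 0.0]),
    ("faq",           [0.9, 0.1, 0.0]),
]
print(top_k([1.0, 0.0, 0.0], docs, k=2))  # → ['install guide', 'faq']
```

In the actual server, pgvector performs this nearest-neighbor ranking inside PostgreSQL instead of in Python.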
GET /ask

- `content`: the content of the question
- `strict`: (optional) if `strict=true`, skip the LLM request when no related answer is found in the vector db
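The endpoint can be called like this from Python; the host and port here are assumptions, so use whatever address the server prints on startup:

```python
from urllib.parse import urlencode

# Build the request URL for GET /ask.
# localhost:8080 is an assumed address, not confirmed by the README.
params = {"content": "How do I configure QChatGPT?", "strict": "true"}
url = "http://localhost:8080/ask?" + urlencode(params)
print(url)
# To actually send it: urllib.request.urlopen(url).read()
```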