IDinsight/aaq-core

Developer Docs | Features | Usage | Architecture | Funders and Partners

Ask A Question is a free and open-source tool created to help non-profit organizations, governments in developing nations, and social sector organizations use Large Language Models for responding to citizen inquiries in their native languages.

🤸‍♀️ Features

❓ LLM-powered search

Match your questions to content in the database using embeddings from LLMs.

🤖 LLM responses

Craft a custom response to the question using LLM chat and the content in your database.

💬 Deploy on WhatsApp

Easily deploy using the WhatsApp Business API.

📚 Manage content

Use the Admin App to add, edit, and delete content in the database.

🚧 Upcoming

🌍 Support for low-resourced languages

Ask questions in local languages. Languages currently on the roadmap:

  • Xhosa
  • Zulu
  • Hindi
  • Igbo

💬 Conversation capability

Refine or clarify your question through conversation

📹 Multimedia content

Respond with not just text but voice, images, and videos as well.

🚨 Message Triaging

Identify urgent or important messages and handle them differently.

🧑‍💻 Engineering dashboard

Monitor uptime, response rates, throughput, HTTP response codes, and more.

🧑‍💼 Content manager dashboard

See which content is most sought after, which kinds of questions receive poor feedback, where content is missing, and more.

Note

Looking for other features? Please raise an issue with [FEATURE REQUEST] prefixed to the title.

Usage

There are two major endpoints for Question-Answering:

  • Embeddings search: Finds the most similar content in the database using cosine similarity between embeddings.
  • LLM response: Crafts a custom response with LLM chat, grounded in the most similar content.

See docs or SwaggerUI at https://<DOMAIN>/api/docs or https://<DOMAIN>/docs for more details and other API endpoints.

❓ Embeddings search

curl -X 'POST' \
  'https://[DOMAIN]/api/embeddings-search' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <BEARER TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
  "query_text": "how are you?",
  "query_metadata": {}
}'

🤖 LLM response

curl -X 'POST' \
  'https://[DOMAIN]/api/llm-response' \
  -H 'accept: application/json' \
  -H 'Authorization: Bearer <BEARER TOKEN>' \
  -H 'Content-Type: application/json' \
  -d '{
  "query_text": "this is my question",
  "query_metadata": {}
}'
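Both endpoints take the same request shape, so the curl calls above can also be made from a small script. The sketch below is illustrative, not part of the project: it only builds the HTTP request with Python's standard library, and `DOMAIN` and the bearer token are placeholders you must substitute with your deployment's values.

```python
import json
import urllib.request


def build_request(domain: str, endpoint: str, token: str, query_text: str) -> urllib.request.Request:
    """Build a POST request for /api/embeddings-search or /api/llm-response.

    `domain` and `token` are deployment-specific placeholders.
    """
    payload = json.dumps({"query_text": query_text, "query_metadata": {}}).encode()
    return urllib.request.Request(
        url=f"https://{domain}/api/{endpoint}",
        data=payload,
        headers={
            "accept": "application/json",
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


# To actually send it (requires a live deployment):
# with urllib.request.urlopen(build_request("example.org", "llm-response", "TOKEN", "this is my question")) as resp:
#     print(json.load(resp))
```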

📚 Manage content

You can access the admin console at

https://[DOMAIN]/

Architecture

We use docker-compose to orchestrate containers with a reverse proxy that manages all incoming traffic to the service. The database and LiteLLM proxy are only accessed by the core app.
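A minimal sketch of what such a compose file could look like. This is an assumption for illustration only: the service names, images, and ports below are not taken from the repository, and the actual configuration lives in the project's own docker-compose files.

```yaml
# Illustrative sketch only -- service and image names are assumptions.
services:
  proxy:               # reverse proxy; the single entry point for all traffic
    image: caddy
    ports: ["80:80", "443:443"]
  core_app:            # core app; the only service that talks to db and litellm
    build: .
    depends_on: [db, litellm_proxy]
  litellm_proxy:       # LiteLLM proxy, reachable only on the internal network
    image: ghcr.io/berriai/litellm:main-latest
  db:                  # database, reachable only on the internal network
    image: pgvector/pgvector:pg16
```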

Flow

Documentation

See here for full documentation.

Funders and Partners

Google.org