RAG assistant for Canadian refugee law using:
- FastAPI backend
- LangChain orchestration
- Snowflake Cortex Search retrieval
- Gemini generation + embeddings
- Next.js + Tailwind chat frontend
Project structure:
- `backend/main.py`: FastAPI app and `/api/chat` endpoint
- `backend/services/rag.py`: startup-initialized LangChain + Snowflake retrieval service
- `backend/schemas/chat.py`: request/response models
- `frontend/`: Next.js chat UI with citation cards
Copy .env.example to .env and fill in:
- `GEMINI_API_KEY`
- `SNOWFLAKE_ACCOUNT`
- `SNOWFLAKE_USER`
- `SNOWFLAKE_PASSWORD`
- `SNOWFLAKE_CORTEX_SEARCH_SERVICE`
It's also recommended to set:
- `SNOWFLAKE_WAREHOUSE`, `SNOWFLAKE_DATABASE`, `SNOWFLAKE_SCHEMA`, `SNOWFLAKE_ROLE`
- `SNOWFLAKE_CORTEX_CONTENT_FIELD`, `SNOWFLAKE_CORTEX_CASE_NAME_FIELD`, `SNOWFLAKE_CORTEX_SOURCE_URL_FIELD`
- `RAG_TOP_K`
- `NEXT_PUBLIC_API_BASE_URL`
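For orientation, a sketch of what a filled-in `.env` might look like. All values below are illustrative placeholders, not real credentials, and the specific warehouse, database, field, and top-k values are assumptions, not project defaults:

```shell
# Required (placeholder values -- substitute your own)
GEMINI_API_KEY=your-gemini-api-key
SNOWFLAKE_ACCOUNT=your-account-identifier
SNOWFLAKE_USER=your-user
SNOWFLAKE_PASSWORD=your-password
SNOWFLAKE_CORTEX_SEARCH_SERVICE=your-cortex-search-service

# Recommended (example values -- adjust to your Snowflake setup)
SNOWFLAKE_WAREHOUSE=COMPUTE_WH
SNOWFLAKE_DATABASE=MY_DATABASE
SNOWFLAKE_SCHEMA=PUBLIC
SNOWFLAKE_ROLE=PUBLIC
SNOWFLAKE_CORTEX_CONTENT_FIELD=content
SNOWFLAKE_CORTEX_CASE_NAME_FIELD=case_name
SNOWFLAKE_CORTEX_SOURCE_URL_FIELD=source_url
RAG_TOP_K=5
NEXT_PUBLIC_API_BASE_URL=http://localhost:8000
```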
Backend:

```shell
python -m venv .venv
# Activate the venv, then:
pip install -r requirements.txt
uvicorn backend.main:app --reload --host 127.0.0.1 --port 8000
```

Frontend:

```shell
cd frontend
npm install
npm run dev
```

Open http://localhost:3000.
`POST /api/chat`

Request:

```json
{
  "query": "What is the legal test for X?"
}
```

Response:

```json
{
  "answer": "string",
  "citations": [
    {
      "case_name": "string",
      "url": "string",
      "relevance_score": 0.92
    }
  ]
}
```
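The request/response shapes above can be sketched as plain Python types. This is a stdlib-only illustration of the documented fields; the actual models in `backend/schemas/chat.py` are presumably Pydantic models, and the class names here are hypothetical:

```python
from dataclasses import dataclass, field


@dataclass
class Citation:
    # One retrieved source attached to an answer.
    case_name: str
    url: str
    relevance_score: float


@dataclass
class ChatRequest:
    # Body of POST /api/chat.
    query: str


@dataclass
class ChatResponse:
    # Answer text plus the citations backing it.
    answer: str
    citations: list[Citation] = field(default_factory=list)


# Build the documented example response.
resp = ChatResponse(
    answer="string",
    citations=[Citation(case_name="string", url="string", relevance_score=0.92)],
)
```

In the real service, FastAPI would validate incoming JSON against the request model and serialize the response model back to the JSON shown above.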