This is a Retrieval-Augmented Generation (RAG) application built with .NET 9 and Next.js.
The easiest way to run the entire application is using Docker Compose:
```shell
docker compose -f 'docker-compose.yml' up -d --build
```

This will:
- Start the Ollama service and automatically download the required models (`nomic-embed-text` and `llama3.1:8b`)
- Start the Qdrant vector database
- Build and run the API with the embedded Next.js frontend
The application will be available at: http://localhost:8080
Note: The first startup will take longer as Ollama downloads the required models (~4-5GB).
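To confirm the stack came up and to watch the model download progress, you can inspect the Compose services. This is a sketch; the service name `ollama` is an assumption and should be checked against the service names in `docker-compose.yml`:

```shell
# List the services and their status (service names may differ in your compose file)
docker compose ps

# Follow the Ollama service logs to watch the model downloads complete
docker compose logs -f ollama

# Once the models are pulled, the app should respond on port 8080
curl -I http://localhost:8080
```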
If you are using macOS, it is recommended to run Ollama locally and point the RAG application at that local instance instead of running Ollama in Docker. Docker on macOS cannot access the GPU, so Ollama runs significantly slower inside a container.
To connect the RAG application running in Docker to your local Ollama instance:
- Install and run Ollama locally (see Manual Setup section below)
- Update the Ollama URL in the API configuration to `http://host.docker.internal:11434`
- Stop the Ollama container if one is running in Docker, so it does not conflict on the same port
This setup provides significantly better performance for embeddings and model inference on macOS.
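To sanity-check the connection before starting the stack, you can hit the local Ollama API from the host and from inside a container. A sketch, assuming the default Ollama port; `/api/tags` is Ollama's standard model-list endpoint:

```shell
# From the macOS host: local Ollama should answer on its default port
curl http://localhost:11434/api/tags

# From inside a container: host.docker.internal must resolve to the host
docker run --rm curlimages/curl -s http://host.docker.internal:11434/api/tags
```

Both commands should return the same JSON list of installed models; if the second one fails, the containerized API will not reach your local Ollama either.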
## Manual Setup

If you prefer to run the services manually:
Download Ollama
For macOS, visit: https://ollama.com/download/mac
Or install via Homebrew:
```shell
brew install ollama
ollama pull nomic-embed-text
ollama pull llama3.1:8b
```

Port 6333 serves the Qdrant dashboard in the browser; port 6334 is used for the gRPC client.

```shell
docker run -p 6333:6333 -p 6334:6334 qdrant/qdrant
```

Run the application with `dotnet run` in the `/api` project. Open http://localhost:5067 to see the dashboard. Open Data sources and add some documents.
Open 'Create Assistant' page, add data sources and create the assistant.
Open the chat for the assistant and start searching across the data sources.
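Before creating an assistant, it can help to verify that both backing services are reachable. A sketch using the default local ports from the steps above:

```shell
# Models pulled into the local Ollama instance
ollama list

# Qdrant REST API: list collections (empty on a fresh instance)
curl http://localhost:6333/collections
```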
- Set up an account at https://cloud.qdrant.io/
- Create a cluster, then copy the Qdrant URL and API key
- Set up an OpenAI account
- Create an OpenAI service account at https://platform.openai.com/api-keys
- Deploy the API application to the cloud (Azure) with Docker
- Configure environment variables to point to the OpenAI API and Qdrant, and set `SERVICE_PROVIDER` to `openai`
- Set `DATA_STORAGE_CONNECTION_STRING` to `/home/rag.db` (`/home` is a folder that is preserved and not removed after a service restart)
- Set `WEBSITES_ENABLE_APP_SERVICE_STORAGE` to `true` to keep the `/home` folder even after a service restart
- Run the application
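On Azure App Service, the environment variables above can be set with the Azure CLI. A sketch: the resource group `my-rg` and app name `my-rag-app` are hypothetical placeholders, and the exact OpenAI/Qdrant variable names should be checked against the API's configuration:

```shell
# my-rg and my-rag-app are placeholder names; substitute your own resources
az webapp config appsettings set \
  --resource-group my-rg \
  --name my-rag-app \
  --settings \
    SERVICE_PROVIDER=openai \
    DATA_STORAGE_CONNECTION_STRING=/home/rag.db \
    WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
```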
This project is licensed under the Apache License 2.0 - see the LICENSE file for details.