This repository contains code for running local Retrieval-Augmented Generation (RAG) applications. It uses Ollama for LLM operations, LangChain for orchestration, and Milvus for vector storage, with Llama 3 as the LLM.
Before running this project, ensure you have the following installed:
- Python 3.11 or later
- Docker
- Docker Compose
Additionally, you will need:
- An API key from Jina AI, which you can obtain from the Jina AI website.
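The scripts will need access to that Jina AI key at runtime. A common pattern is to export it as an environment variable and fail fast if it is missing; a minimal sketch (the variable name `JINA_API_KEY` is an assumption here, check the repository's code for the exact name it reads):

```python
import os

def require_jina_key(env=os.environ) -> str:
    """Return the Jina AI key, or raise a clear error if it is missing.

    JINA_API_KEY is a hypothetical variable name, not confirmed by this repo.
    """
    key = env.get("JINA_API_KEY", "").strip()
    if not key:
        raise RuntimeError("JINA_API_KEY is not set; export it before running the apps.")
    return key
```

Failing with an explicit message at startup is friendlier than a cryptic HTTP 401 from the embedding API mid-run.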
- Clone this repository to your local machine:
git clone git@github.com:stephen37/ollama_local_rag.git
cd ollama_local_rag
- Install dependencies with Poetry:
poetry install
- Start Milvus with Docker Compose:
docker-compose up -d
To run the different applications, execute the following command in your terminal:
python <file_name.py>
You will be prompted to enter queries, and the system will retrieve relevant answers based on the data processed.
For example, to interact with the data from the French parliament, run python rag_french_parliament.py.
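At a high level, each of these scripts follows the same retrieve-then-generate loop: embed the query, fetch relevant chunks from Milvus, and pass them to the LLM as context. A simplified sketch with stand-in callables (the real scripts use LangChain, Ollama, and Milvus; the names below are illustrative, not the repo's actual API):

```python
from typing import Callable, List

def answer_query(
    query: str,
    retrieve: Callable[[str], List[str]],  # stand-in for a Milvus similarity search
    generate: Callable[[str], str],        # stand-in for an Ollama-served Llama 3 call
) -> str:
    """Retrieve context chunks for the query, then ask the LLM to answer from them."""
    chunks = retrieve(query)
    prompt = (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + "\n".join(chunks) + f"\n\nQuestion: {query}"
    )
    return generate(prompt)
```

In a real run, `retrieve` would be backed by a Milvus vector search over the ingested documents and `generate` by a call to the local Llama 3 model via Ollama.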
Feel free to check out Milvus and share your experiences with the community by joining our Discord.