RAGLight is a lightweight and modular Python library for implementing Retrieval-Augmented Generation (RAG). It enhances the capabilities of Large Language Models (LLMs) by combining document retrieval with natural language inference.
Designed for simplicity and flexibility, RAGLight provides modular components to easily integrate various LLMs, embeddings, and vector stores, making it an ideal tool for building context-aware AI solutions.
Currently, RAGLight supports:
- Ollama
- LMStudio
- Mistral API
You need to have Ollama or LMStudio running on your machine, or a Mistral API key.
If you use LMStudio, the model you want to use must already be loaded in LMStudio.
- Embeddings Model Integration: Plug in your preferred embedding models (e.g., HuggingFace all-MiniLM-L6-v2) for compact and efficient vector embeddings.
- LLM Agnostic: Seamlessly integrates with LLMs from different providers (Ollama, LMStudio, and the Mistral API are supported).
- RAG Pipeline: Combines document retrieval and language generation in a unified workflow.
- RAT Pipeline: Extends the RAG pipeline with reflection loops driven by a reasoning model such as Deepseek-R1 or o1.
- Agentic RAG Pipeline: Uses an agent that can query your vector store to improve RAG performance.
- Flexible Document Support: Ingest and index various document types (e.g., PDF, TXT, DOCX, Python, JavaScript, ...).
- Extensible Architecture: Easily swap vector stores, embedding models, or LLMs to suit your needs.
To install the library, run:

```bash
pip install raglight
```
You can set several environment variables to change RAGLight settings:

- `MISTRAL_API_KEY`: if you want to use the Mistral API
- `OLLAMA_CLIENT_URL`: if you have a custom Ollama URL
- `LMSTUDIO_CLIENT`: if you have a custom LMStudio URL
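For example, a minimal sketch (the URL is hypothetical) that points RAGLight at a custom Ollama server before building a pipeline:

```python
import os

# Hypothetical non-default Ollama URL; set it before creating your pipeline
os.environ["OLLAMA_CLIENT_URL"] = "http://my-ollama-host:11434"
```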
Knowledge Base
A knowledge base defines the data you want to ingest into your vector store when your RAG is initialized.
This data is ingested when you call the `build` function:
```python
from raglight import RAGPipeline, FolderSource, GitHubSource
from raglight.config.settings import Settings

pipeline = RAGPipeline(
    knowledge_base=[
        FolderSource(path="<path to your folder with pdf>/knowledge_base"),
        GitHubSource(url="https://github.com/Bessouat40/RAGLight")
    ],
    model_name="llama3",
    provider=Settings.OLLAMA,
    k=5
)
pipeline.build()
```
You can define two different kinds of knowledge base sources:
- Folder Knowledge Base
All files and folders inside this directory will be ingested into the vector store:

```python
from raglight import FolderSource

FolderSource(path="<path to your folder with pdf>/knowledge_base")
```
- GitHub Knowledge Base
You can declare the GitHub repositories you want to ingest into your vector store:

```python
from raglight import GitHubSource

GitHubSource(url="https://github.com/Bessouat40/RAGLight")
```
RAG
You can easily set up your RAG with RAGLight:
```python
from raglight.rag.simple_rag_api import RAGPipeline
from raglight.models.data_source_model import FolderSource, GitHubSource
from raglight.config.settings import Settings
from raglight.config.rag_config import RAGConfig

Settings.setup_logging()

knowledge_base = [
    FolderSource(path="<path to your folder with pdf>/knowledge_base"),
    GitHubSource(url="https://github.com/Bessouat40/RAGLight")
]

config = RAGConfig(
    embedding_model = Settings.DEFAULT_EMBEDDINGS_MODEL,
    llm = Settings.DEFAULT_LLM,
    persist_directory = './defaultDb',
    provider = Settings.OLLAMA,
    collection_name = Settings.DEFAULT_COLLECTION_NAME,
    file_extension = Settings.DEFAULT_EXTENSIONS,
    # k = Settings.DEFAULT_K,
    # cross_encoder_model = Settings.DEFAULT_CROSS_ENCODER_MODEL,
    # system_prompt = Settings.DEFAULT_SYSTEM_PROMPT,
    # knowledge_base = knowledge_base
)

pipeline = RAGPipeline(config)
pipeline.build()  # Ingests the knowledge base; not mandatory if no knowledge_base is set
response = pipeline.generate("How can I create an easy RAGPipeline using the raglight framework? Give me a python implementation")
print(response)
```
You just have to fill in the model you want to use.
⚠️ By default, the LLM provider is Ollama.
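If your vector store has already been populated by a previous `build()`, you can skip ingestion and query directly. A minimal sketch, assuming the same default config values as above and an existing `./defaultDb` database:

```python
from raglight.rag.simple_rag_api import RAGPipeline
from raglight.config.settings import Settings
from raglight.config.rag_config import RAGConfig

# Reuse the existing vector store; no knowledge_base, so no build() is needed
config = RAGConfig(
    embedding_model = Settings.DEFAULT_EMBEDDINGS_MODEL,
    llm = Settings.DEFAULT_LLM,
    persist_directory = './defaultDb',
    provider = Settings.OLLAMA,
    collection_name = Settings.DEFAULT_COLLECTION_NAME,
)

pipeline = RAGPipeline(config)
response = pipeline.generate("Summarize the ingested documents")
print(response)
```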
Agentic RAG
This pipeline extends the Retrieval-Augmented Generation (RAG) concept by incorporating an additional agent that can retrieve data from your vector store.
You can modify several parameters in your config:

- `provider`: your LLM provider (Ollama, LMStudio, Mistral)
- `model`: the model you want to use
- `k`: the number of documents to retrieve
- `max_steps`: the maximum number of reflection steps used by your agent
- `api_key`: your Mistral API key
- `api_base`: your API URL (Ollama URL, LM Studio URL, ...)
- `num_ctx`: your maximum context length
- `verbosity_level`: the verbosity level of your logs
```python
from raglight.config.settings import Settings
from raglight.rag.agentic_rag import AgenticRAG
from raglight.config.agentic_rag_config import AgenticRAGConfig
from raglight.config.vector_store_config import VectorStoreConfig
from dotenv import load_dotenv
import os

load_dotenv()
Settings.setup_logging()

persist_directory = './defaultDb'
model_embeddings = Settings.DEFAULT_EMBEDDINGS_MODEL
collection_name = Settings.DEFAULT_COLLECTION_NAME

vector_store_config = VectorStoreConfig(
    embedding_model = model_embeddings,
    persist_directory = persist_directory,
    provider = Settings.HUGGINGFACE,
    collection_name = collection_name
)

config = AgenticRAGConfig(
    provider = Settings.MISTRAL,
    model = "mistral-large-2411",
    k = 10,
    system_prompt = Settings.DEFAULT_AGENT_PROMPT,
    max_steps = 4,
    api_key = Settings.MISTRAL_API_KEY  # or os.environ.get('MISTRAL_API_KEY')
    # api_base = ...  # If you have a custom client URL
    # num_ctx = ...  # Max context length
    # verbosity_level = ...  # Default = 2
)

agenticRag = AgenticRAG(config, vector_store_config)

response = agenticRag.generate("Please implement for me AgenticRAGPipeline inspired by RAGPipeline and AgenticRAG and RAG")
print('response : ', response)
```
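The same agentic pipeline can also run fully locally. A hedged sketch of the config, assuming the model is already pulled in Ollama (the vector store config stays unchanged):

```python
# Sketch: local Ollama provider instead of the Mistral API
config = AgenticRAGConfig(
    provider = Settings.OLLAMA,
    model = "llama3",  # assumed to be pulled in Ollama already
    k = 10,
    system_prompt = Settings.DEFAULT_AGENT_PROMPT,
    max_steps = 4,
    # api_base = "http://localhost:11434",  # only for a non-default Ollama URL
)

agenticRag = AgenticRAG(config, vector_store_config)
```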
RAT
This pipeline extends the Retrieval-Augmented Generation (RAG) concept by incorporating an additional reasoning step using a specialized reasoning language model (LLM).
```python
from raglight.rat.simple_rat_api import RATPipeline
from raglight.models.data_source_model import FolderSource, GitHubSource
from raglight.config.settings import Settings
from raglight.config.rat_config import RATConfig

Settings.setup_logging()

knowledge_base = [
    FolderSource(path="<path to the folder you want to ingest into your knowledge base>"),
    GitHubSource(url="https://github.com/Bessouat40/RAGLight")
]

config = RATConfig(
    embedding_model = Settings.DEFAULT_EMBEDDINGS_MODEL,
    cross_encoder_model = Settings.DEFAULT_CROSS_ENCODER_MODEL,
    llm = "llama3.2:3b",
    k = Settings.DEFAULT_K,
    persist_directory = './defaultDb',
    provider = Settings.OLLAMA,
    file_extension = Settings.DEFAULT_EXTENSIONS,
    system_prompt = Settings.DEFAULT_SYSTEM_PROMPT,
    collection_name = Settings.DEFAULT_COLLECTION_NAME,
    reasoning_llm = Settings.DEFAULT_REASONING_LLM,
    reflection = 3,
    # knowledge_base = knowledge_base,
)

pipeline = RATPipeline(config)
pipeline.build()  # Ingests data from the knowledge base; not mandatory if you have already ingested the data.
response = pipeline.generate("How can I create an easy RAGPipeline using the raglight framework? Give me the easiest python implementation")
print(response)
```
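You can also point `reasoning_llm` at a specific reasoning model. A sketch, assuming a Deepseek-R1 tag is available in your Ollama instance (other config values as in the example above):

```python
# Hypothetical override: a local Deepseek-R1 model drives the reflection steps
config = RATConfig(
    embedding_model = Settings.DEFAULT_EMBEDDINGS_MODEL,
    cross_encoder_model = Settings.DEFAULT_CROSS_ENCODER_MODEL,
    llm = "llama3.2:3b",
    reasoning_llm = "deepseek-r1:7b",  # assumed Ollama model tag
    reflection = 2,  # fewer reflection loops for faster answers
    persist_directory = './defaultDb',
    provider = Settings.OLLAMA,
    collection_name = Settings.DEFAULT_COLLECTION_NAME,
)
```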
Use Custom Pipeline
1. Configure Your Pipeline
You can also set up your own pipeline:
```python
from raglight.rag.builder import Builder
from raglight.config.settings import Settings

model_embeddings = Settings.DEFAULT_EMBEDDINGS_MODEL
persist_directory = './defaultDb'
collection_name = Settings.DEFAULT_COLLECTION_NAME
model_name = "llama3"  # any model available in your provider
system_prompt_directory = "<path to your system prompt file>"

rag = Builder() \
    .with_embeddings(Settings.HUGGINGFACE, model_name=model_embeddings) \
    .with_vector_store(Settings.CHROMA, persist_directory=persist_directory, collection_name=collection_name) \
    .with_llm(Settings.OLLAMA, model_name=model_name, system_prompt_file=system_prompt_directory, provider=Settings.LMStudio) \
    .build_rag(k=5)
```
2. Ingest Documents Inside Your Vector Store
Then you can ingest data into your vector store.
- You can use the default pipeline, which ingests non-code data:

```python
rag.vector_store.ingest(file_extension='**/*.pdf', data_path='./data')
```
- Or you can use the code pipeline:

```python
rag.vector_store.ingest(repos_path=['./repository1', './repository2'])
```
This pipeline ingests code embeddings into your collection (`collection_name`), and it also extracts all signatures from your code base and ingests them into a second collection (`collection_name_classes`).
The `VectorStore` class gives you access to two different functions, `similarity_search` and `similarity_search_class`, to search these different collections.
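For example, a minimal sketch, assuming both methods accept a query string and an optional `k`:

```python
# Search document embeddings stored in collection_name
docs = rag.vector_store.similarity_search("How does ingestion work?", k=5)

# Search the extracted code signatures stored in collection_name_classes
signatures = rag.vector_store.similarity_search_class("RAGPipeline", k=5)
```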
3. Query the Pipeline
Retrieve and generate answers using the RAG pipeline:
```python
response = rag.generate("How can I optimize my marathon training?")
print(response)
```
You can find more examples in the examples directory.
You can easily use RAGLight inside a Docker container. You can find an example Dockerfile at examples/Dockerfile.example.
Just go to the examples directory and run:

```bash
docker build -t docker-raglight -f Dockerfile.example .
```
So that your container can communicate with Ollama or LMStudio, you need to add a custom host-to-IP mapping:

```bash
docker run --add-host=host.docker.internal:host-gateway docker-raglight
```

We use the `--add-host` flag to allow the container to call Ollama.
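Inside the container, you can then point RAGLight at the host's Ollama instance through that mapping, using the environment variable described above (11434 is Ollama's default port):

```python
import os

# host.docker.internal resolves to the Docker host thanks to --add-host
os.environ["OLLAMA_CLIENT_URL"] = "http://host.docker.internal:11434"
```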