forked from SciPhi-AI/R2R

A framework for rapid development and deployment of production-ready RAG systems



sp6370/R2R

 
 


License: MIT

SciPhi Framework

Build, deploy, and optimize your RAG system.

About

R2R, short for RAG to Riches, provides a fast and efficient way to deliver high-quality RAG to end users. The framework is built around customizable pipelines and a feature-rich FastAPI implementation.

Why?

R2R was conceived to bridge the gap between local LLM experimentation and scalable production solutions. R2R is to LangChain/LlamaIndex what Next.js is to React. A JavaScript client for R2R deployments can be found here.

Key Features

  • 🚀 Deploy: Instantly launch production-ready RAG pipelines with streaming capabilities.
  • 🧩 Customize: Tailor your pipeline with intuitive configuration files.
  • 🔌 Extend: Enhance your pipeline with custom code integrations.
  • ⚖️ Autoscale: Scale your pipeline effortlessly in the cloud using SciPhi.
  • 🤖 OSS: Benefit from a framework developed by the open-source community, designed to simplify RAG deployment.

Demo(s)

Using the cloud application to deploy the pre-built basic pipeline:

https://www.loom.com/share/e3b934b554484787b005702ced650ac9

Note: the example above pairs SciPhi Cloud with the R2R framework for deployment and observability. SciPhi is working to launch a self-hosted version of its cloud platform as R2R matures.

Links

Join the Discord server

R2R Docs Quickstart

SciPhi Cloud

Quick Install:

# or use 'r2r[all]' to install every optional dependency
pip install 'r2r[parsing,eval]'

# set up the environment
export OPENAI_API_KEY=sk-...
# set `LOCAL_DB_PATH` for local testing
export LOCAL_DB_PATH=local.sqlite

# Alternatively, add your secrets to a copy of the example env file:
#   vim .env.example && cp .env.example .env
# and modify config.json if using cloud providers (e.g. pgvector, Qdrant, ...)
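For orientation only, a provider section in config.json might look like the fragment below. The key names and values here are illustrative assumptions, not the framework's actual schema; consult the R2R docs for the real fields.

```json
{
  "embedding": { "provider": "openai", "model": "text-embedding-ada-002" },
  "vector_database": { "provider": "qdrant" }
}
```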

Docker:

docker pull emrgntcmplxty/r2r:latest

# Place your secrets in `.env`
docker run -d --name r2r_container -p 8000:8000 --env-file .env emrgntcmplxty/r2r:latest

Basic Example

basic_pipeline.py: Run this script to start the default backend server. It stands up a basic RAG pipeline covering ingestion, embedding, and RAG, all exposed via FastAPI.

# launch the server
python -m r2r.examples.servers.basic_pipeline

run_basic_client.py: Run this client script after the server above has started. It uploads text entries and PDFs to the server through the Python client and demonstrates managing document- and user-level vectors with its built-in features.

# run the client
python -m r2r.examples.clients.run_basic_client
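Instead of the bundled client, you can also talk to the running server directly over HTTP. The sketch below uses only the standard library; the endpoint path `/rag_completion/` and the payload field names are assumptions for illustration, since the real routes are defined by the FastAPI app in basic_pipeline.py.

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"

def build_rag_payload(query: str, limit: int = 5) -> dict:
    """Assemble a JSON body for a RAG request.
    Field names here are illustrative, not the server's actual schema."""
    return {"query": query, "limit": limit}

def rag_completion(query: str) -> dict:
    # POST the payload to the (assumed) completion endpoint.
    body = json.dumps(build_rag_payload(query)).encode("utf-8")
    req = urllib.request.Request(
        f"{BASE_URL}/rag_completion/",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

if __name__ == "__main__":
    print(rag_completion("What does R2R do?"))
```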

Running Basic Local RAG

Refer here for a tutorial on how to modify the commands above to use local providers.

Synthetic Queries Example

synthetic_query_pipeline.py: Run this script to start a backend server with an advanced pipeline that generates synthetic queries, enhancing the RAG system's retrieval performance.

# launch the server
python -m r2r.examples.servers.synthetic_query_pipeline

run_synthetic_query_client.py: Run this client script once the synthetic query pipeline is up; it demonstrates the features that pipeline adds to the RAG system.

# run the client
python -m r2r.examples.clients.run_synthetic_query_client
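The idea behind synthetic queries can be sketched as follows: for each ingested chunk, ask an LLM to invent questions the chunk answers, then index those questions alongside the chunk so retrieval matches user phrasing better. The prompt wording and the stubbed generate function below are assumptions, not the pipeline's actual implementation.

```python
from typing import Callable, List

PROMPT_TEMPLATE = (
    "Generate {n} short questions that the following passage answers:\n\n{chunk}"
)

def synthetic_queries(
    chunk: str,
    generate: Callable[[str], List[str]],
    n: int = 3,
) -> List[str]:
    """Ask an LLM (passed in as `generate`) for questions the chunk answers."""
    prompt = PROMPT_TEMPLATE.format(n=n, chunk=chunk)
    return generate(prompt)

# A deterministic stub standing in for a real LLM call:
def fake_llm(prompt: str) -> List[str]:
    return [f"question {i}" for i in range(3)]

queries = synthetic_queries("R2R builds RAG pipelines.", fake_llm)
```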

Extra Examples

reducto_pipeline.py: Launch this script to activate a backend server that integrates a Reducto adapter for enhanced PDF ingestion.

# launch the server
python -m r2r.examples.servers.reducto_pipeline

web_search_pipeline.py: This script sets up a backend server that includes a WebSearchRAGPipeline, adding web search functionality to your RAG setup.

# launch the server
python -m r2r.examples.servers.web_search_pipeline
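Conceptually, a web-search RAG step just folds fetched snippets into the prompt before requesting a completion. The sketch below shows that composition with a stubbed search function; it is an illustration of the idea, not the WebSearchRAGPipeline's actual code.

```python
from typing import Callable, List

def web_search_context(
    query: str,
    search: Callable[[str], List[str]],
    max_snippets: int = 3,
) -> str:
    """Fetch snippets for `query` and fold them into a completion prompt."""
    snippets = search(query)[:max_snippets]
    context = "\n".join(f"- {s}" for s in snippets)
    return f"Context from web search:\n{context}\n\nQuestion: {query}"

# Stub search provider for illustration:
def fake_search(query: str) -> List[str]:
    return ["snippet one", "snippet two", "snippet three", "snippet four"]

prompt = web_search_context("what is RAG?", fake_search)
```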

Core Abstractions

The framework primarily revolves around four core abstractions:

  • The Ingestion Pipeline: Facilitates the preparation of embeddable 'Documents' from various data formats (json, txt, pdf, html, etc.). The abstraction can be found in ingestion.py and relevant documentation is available here.

  • The Embedding Pipeline: Manages the transformation of text into stored vector embeddings, interacting with embedding and vector database providers through a series of steps (e.g., extract_text, transform_text, chunk_text, embed_chunks, etc.). The abstraction can be found in embedding.py and relevant documentation is available here.

  • The RAG Pipeline: Works similarly to the embedding pipeline but incorporates an LLM provider to produce text completions. The abstraction can be found in rag.py and relevant documentation is available here.

  • The Eval Pipeline: Samples a subset of rag_completion calls for evaluation. DeepEval and Parea are currently supported. The abstraction can be found in eval.py and relevant documentation is available here.

Each pipeline incorporates a logging database for operation tracking and observability.
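To make the step structure concrete, here is a toy version of the embedding pipeline's step chain (extract_text → chunk_text → embed_chunks). The fixed-width chunking and the hash-based embedder are stand-ins for the real providers, and the class is a sketch rather than the abstraction defined in embedding.py.

```python
from typing import List

class ToyEmbeddingPipeline:
    """Illustrative step chain; the real steps live in embedding.py."""

    def __init__(self, chunk_size: int = 32, dim: int = 4):
        self.chunk_size = chunk_size
        self.dim = dim

    def extract_text(self, document: str) -> str:
        # A real pipeline would parse json/txt/pdf/html here.
        return document.strip()

    def chunk_text(self, text: str) -> List[str]:
        # Fixed-width chunks; real chunkers respect sentence boundaries.
        return [
            text[i : i + self.chunk_size]
            for i in range(0, len(text), self.chunk_size)
        ]

    def embed_chunks(self, chunks: List[str]) -> List[List[float]]:
        # Deterministic stand-in for a real embedding provider.
        return [
            [float((hash(c) >> (8 * i)) % 100) / 100 for i in range(self.dim)]
            for c in chunks
        ]

    def run(self, document: str) -> List[List[float]]:
        text = self.extract_text(document)
        return self.embed_chunks(self.chunk_text(text))

vectors = ToyEmbeddingPipeline().run("R2R turns documents into searchable vectors.")
```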
