A Retrieval-Augmented Generation library with a CLI interface. Build RAG applications with just a few commands and a configuration file.
| Databases | LLMs | Embeddings | Document types |
|---|---|---|---|
| Chroma (local) | OpenAI | OpenAI | |
| Pinecone (remote) | AzureOpenAI | AzureOpenAI | |
For more details see the documentation.
To install, run:

```bash
pip install ragcore
```
or clone and build from source:

```bash
git clone https://github.com/daved01/ragcore.git
cd ragcore
pip install .
```
If everything worked, running

```bash
ragcore -h
```

should show usage information for ragcore.
To build an application with OpenAI or AzureOpenAI LLMs and embeddings, and a local database, first set your OpenAI API key as described here:

```bash
export OPENAI_API_KEY=[your token]
```
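A missing key typically only surfaces as an authentication error at query time, so it can help to fail early. The helper below is an illustrative sketch (it is not part of ragcore's API) that checks the environment variable before you start:

```python
import os


def require_api_key(name: str = "OPENAI_API_KEY") -> str:
    """Return the API key from the environment, or fail with a clear message."""
    key = os.environ.get(name)
    if not key:
        raise RuntimeError(f"{name} is not set; export it before running ragcore.")
    return key
```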
Then, create a config file `config.yaml` like this in the root of your project:

```yaml
database:
  provider: "chroma"
  number_search_results: 5
  base_dir: "data/database"
splitter:
  chunk_overlap: 256
  chunk_size: 1024
embedding:
  provider: "openai"
  model: "text-embedding-model"
llm:
  provider: "openai"
  model: "gpt-model"
```
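To build intuition for the `splitter` settings: `chunk_size` is the length of each chunk and `chunk_overlap` is how much neighboring chunks share, so consecutive chunks start `chunk_size - chunk_overlap` characters apart. A minimal sketch of this kind of fixed-size chunking (not ragcore's actual implementation) with the values from the config above:

```python
def chunk_text(text: str, chunk_size: int = 1024, chunk_overlap: int = 256) -> list[str]:
    # Consecutive chunks start (chunk_size - chunk_overlap) characters apart,
    # so each chunk repeats the last chunk_overlap characters of the previous one.
    stride = chunk_size - chunk_overlap
    return [text[i : i + chunk_size] for i in range(0, len(text), stride)]


# A 3000-character document with these settings yields 4 overlapping chunks.
chunks = chunk_text("x" * 3000)
print(len(chunks))  # 4
```

Larger overlap reduces the chance that a relevant passage is cut in half at a chunk boundary, at the cost of storing and embedding more redundant text.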
And finally, create your application using this config file:

```python
from ragcore import RAGCore

app = RAGCore()  # pass config=<path-to-config.yaml> if not in root

# Upload a document "My_Book.pdf"
app.add(path="My_Book.pdf")

# Now you can ask questions
answer = app.query(query="What did the elk say?")
print(answer.content)

# List the title and content of the documents on which the response is based
for doc in answer.documents:
    print(doc.title, " | ", doc.content)

# List all documents in the database
print(app.get_titles())

# You can delete by title
app.delete(title="My_Book")
```
And that's it! For more information, as well as an overview of supported integrations, check out the documentation.