Summary:
Extended CLI provider checks and updated various files to support additional LLM providers, improve error handling, and rename certain classes and functions for consistency.
Key points:
- Updated `compose.yaml` to include environment variables for multiple LLM providers (OpenAI, Anthropic, Azure, Google Vertex AI, AWS Bedrock, Groq, Cohere, Anyscale); the provider-to-variable mapping is sketched in the first example after this list.
- Updated `r2r.json` and `r2r/base/abstractions/llm.py`, changing the model name from `gpt-4o` to `openai/gpt-4o` (see the second example below).
- Renamed `rag_assistant` to `rag_agent` in `r2r/main/abstractions.py`, `r2r/main/assembly/builder.py`, `r2r/main/assembly/factory.py`, and `r2r/main/services/retrieval_service.py`.
- Extended the `check_llm_reqs` function in `r2r/cli/utils/docker_utils.py` to validate environment variables for LLM providers (see the first example below).
- Updated `r2r/providers/embeddings/litellm.py` and `r2r/providers/embeddings/ollama.py` to improve error handling by raising `R2RException` with specific error messages (see the third example below).
- Updated `r2r/providers/prompts/defaults.jsonl` to rename the `rag_assistant` prompt to `rag_agent`.
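The `compose.yaml` changes pass provider credentials through as environment variables, and the extended `check_llm_reqs` in `r2r/cli/utils/docker_utils.py` verifies that the variables for the selected provider are set before bringing the stack up. This summary does not show the actual variable names or function signature, so the sketch below assumes a simple mapping of provider names to required variables and a warn-and-confirm flow:

```python
import os
import sys

# Assumed mapping of provider -> required environment variables.
# The real compose.yaml / check_llm_reqs may use different names.
PROVIDER_ENV_VARS = {
    "openai": ["OPENAI_API_KEY"],
    "anthropic": ["ANTHROPIC_API_KEY"],
    "azure": ["AZURE_API_KEY"],
    "vertex": ["GOOGLE_APPLICATION_CREDENTIALS"],
    "bedrock": ["AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"],
    "groq": ["GROQ_API_KEY"],
    "cohere": ["COHERE_API_KEY"],
    "anyscale": ["ANYSCALE_API_KEY"],
}


def check_llm_reqs(llm_provider: str) -> None:
    """Warn (and optionally abort) if the chosen provider's credentials are missing."""
    missing = [
        var
        for var in PROVIDER_ENV_VARS.get(llm_provider, [])
        if not os.environ.get(var)
    ]
    if missing:
        print(f"{llm_provider} requires {', '.join(missing)} to be set.")
        if input("Continue without them? (y/n) ").strip().lower() != "y":
            sys.exit(1)
```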
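The move from `gpt-4o` to `openai/gpt-4o` adopts a provider-prefixed model string, so the provider can be inferred from the model name itself. A minimal illustration, assuming the generation config defined in `r2r/base/abstractions/llm.py` is a `GenerationConfig` class exposing a `model` field:

```python
# Hypothetical usage; the class name and fields are assumptions based on the
# file touched in this PR (r2r/base/abstractions/llm.py).
from r2r.base.abstractions.llm import GenerationConfig

# The provider-prefixed form lets the completion layer route by provider.
config = GenerationConfig(model="openai/gpt-4o")
print(config.model)  # -> "openai/gpt-4o"
```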
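The embedding providers now surface failures as `R2RException` instead of letting provider exceptions bubble up. A rough sketch of the pattern, assuming `R2RException` accepts a message and an HTTP-style status code (the exact constructor, import path, and error text are not shown in this summary):

```python
from typing import List

from r2r.base import R2RException  # assumed export location for the exception


def embed_text(client, text: str) -> List[float]:
    """Wrap a provider call so failures surface as a specific R2RException."""
    try:
        # `client.embed` stands in for the litellm / ollama embedding call.
        return client.embed(text)
    except Exception as e:
        # Raise a specific, user-facing error instead of a bare provider exception.
        raise R2RException(
            message=f"Error while fetching embeddings: {e}",
            status_code=400,
        ) from e
```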
Generated with ❤️ by ellipsis.dev