
Feature/add local llama tut #278

Merged · 4 commits from feature/add-local-llama-tut into main · Apr 5, 2024
Conversation

@emrgnt-cmplxty (Contributor) commented Apr 5, 2024

Ellipsis 🚀 This PR description was created by Ellipsis for commit b0551ad.

Summary:

This PR adds support for the local LLM providers ollama and Llama.cpp to the R2R framework, along with updated documentation, example code, and configuration files that show users how to use the new feature.

Key points:

  • Added support for local LLM providers ollama and Llama.cpp in /r2r/llms/__init__.py and /r2r/llms/llama_cpp/base.py.
  • Updated config.json examples to include configurations for local LLM providers in /r2r/examples/configs/local_ollama.json and /r2r/examples/configs/local_llama_cpp.json.
  • Updated the server setup script /r2r/examples/servers/basic_pipeline.py so the configuration can be selected based on the desired LLM provider (a sketch of this selection follows the list).
  • Updated client example /r2r/examples/clients/run_basic_client.py to demonstrate how to use the new local LLM providers.
  • Updated documentation in /docs/pages/getting-started/basic-example.mdx, /docs/pages/getting-started/configure-your-pipeline.mdx, /docs/pages/providers/evals.mdx, and /docs/pages/providers/llms.mdx to reflect the new feature.
  • Added a new tutorial /docs/pages/tutorials/local_rag.mdx on how to run a local RAG pipeline with R2R.
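To make the configuration-selection step above concrete, here is a minimal sketch of how a server script could map a provider name to the example config files added in this PR. The `--provider` flag, the `CONFIG_PATHS` mapping, and `load_config` are illustrative assumptions and do not reproduce the actual contents of basic_pipeline.py.

```python
# Hypothetical sketch only: pick an R2R example config based on the desired LLM provider.
# The flag name, mapping, and helper below are illustrative, not the real basic_pipeline.py.
import argparse
import json

CONFIG_PATHS = {
    "openai": "config.json",                                    # default cloud provider
    "ollama": "r2r/examples/configs/local_ollama.json",         # local Ollama server
    "llama_cpp": "r2r/examples/configs/local_llama_cpp.json",   # local Llama.cpp build
}

def load_config(provider: str) -> dict:
    """Load the example configuration that matches the requested LLM provider."""
    with open(CONFIG_PATHS[provider]) as f:
        return json.load(f)

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Start a basic R2R pipeline")
    parser.add_argument("--provider", choices=sorted(CONFIG_PATHS), default="openai")
    args = parser.parse_args()
    config = load_config(args.provider)
    print(f"Loaded '{args.provider}' config with sections: {list(config)}")
```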

Generated with ❤️ by ellipsis.dev

vercel bot commented Apr 5, 2024

The latest updates on your projects:

Name: r2r-docs · Status: ✅ Ready · Updated: Apr 5, 2024 9:43pm (UTC)

@emrgnt-cmplxty marked this pull request as ready for review April 5, 2024 21:42
@emrgnt-cmplxty merged commit fe1fba2 into main Apr 5, 2024
1 of 2 checks passed

@ellipsis-dev (bot) left a comment

❌ Changes requested.

  • Reviewed the entire pull request up to b0551ad
  • Looked at 1032 lines of code in 26 files
  • Took 1 minute and 30 seconds to review
More info
  • Skipped 0 files when reviewing.
  • Skipped posting 0 additional comments because they didn't meet confidence threshold of 50%.

Workflow ID: wflow_oP3SSoC2WrK7CgfW


Want Ellipsis to fix these issues? Tag @ellipsis-dev in a comment. We'll respond in a few minutes. Learn more here.

```diff
@@ -51,21 +49,22 @@ def search(self, query):
         print(body[:500])
         print("\n")

-    def rag_completion(self, query):
+    def rag_completion(self, query, model="gpt-4-turbo-preview"):
```

The model parameter is hardcoded to 'gpt-4-turbo-preview'. Consider making this a configurable parameter to allow users to specify the model they want to use.

Suggested change

```diff
-    def rag_completion(self, query, model="gpt-4-turbo-preview"):
+    def rag_completion(self, query, model):
```
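As a concrete illustration of the suggestion, one way to avoid baking the model name into the signature is to read a default from the environment (or a config file) while still letting callers override it per call. The `DEFAULT_MODEL` lookup, the `ExampleClient` class, and the payload shape below are assumptions for illustration, not the actual R2R client code.

```python
# Illustrative sketch only: make the completion model configurable instead of a
# fixed "gpt-4-turbo-preview" default. Not the real run_basic_client.py code.
import os

# Assumption: a default model can come from the environment, falling back to OpenAI.
DEFAULT_MODEL = os.environ.get("R2R_DEFAULT_MODEL", "gpt-4-turbo-preview")

class ExampleClient:
    def rag_completion(self, query: str, model: str = DEFAULT_MODEL) -> dict:
        """Build a RAG-completion request with a caller-supplied (or env-configured) model."""
        payload = {"query": query, "model": model}
        # ... send `payload` to the running R2R server here ...
        return payload

# Usage: the same client call can now target a local model, e.g. one served by Ollama.
client = ExampleClient()
print(client.rag_completion("What is a local RAG pipeline?", model="ollama/llama2"))
```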

@emrgnt-cmplxty deleted the feature/add-local-llama-tut branch April 6, 2024 04:14