updated README.md to include the ollama setup #51
Conversation
Walkthrough
README.md was updated to document Ollama (local) as a supported LLM, add the OLLAMA_URL and OLLAMA_MODEL environment variables with their defaults, and provide a step-by-step local Ollama setup with example commands; minor textual and punctuation edits were also applied.
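A quick smoke test of the documented defaults (a sketch, assuming the README's OLLAMA_URL default points at Ollama's standard local address; `/api/tags` is the Ollama API route that lists installed models):

```bash
# Assumes a local Ollama server on its default port; the README's
# OLLAMA_URL default is expected to match this address.
curl http://localhost:11434/api/tags
```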
Sequence Diagram(s)
No sequence diagram provided — changes are documentation-only and do not modify runtime control flow.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~3 minutes
Pre-merge checks: ✅ Passed (5 passed)
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
README.md (3 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
README.md
51-51: Bare URL used (MD034, no-bare-urls)
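One way to reproduce and clear this finding locally (a sketch; markdownlint-cli2's `--fix` can auto-correct MD034 by wrapping the bare URL in angle brackets, e.g. `<http://localhost:11434>`):

```bash
# Run the same linter the review used and apply auto-fixes;
# MD034 (no-bare-urls) is among the fixable rules.
npx markdownlint-cli2 --fix "README.md"
```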
```bash
export COMMIT_LLM=ollama
export OLLAMA_MODEL=llama3  # llama3 by default
```
Resolve conflicting Ollama default model docs.
Line 52 documents OLLAMA_MODEL defaulting to qwen2:0.5b, but Line 198 claims the default is llama3. Please pick the correct value and update both locations so readers aren’t misconfigured.
🤖 Prompt for AI Agents
In README.md around lines 52 and 196-199, the documented default for
OLLAMA_MODEL is inconsistent (qwen2:0.5b at line 52 vs llama3 at ~198); choose
the correct canonical default value for OLLAMA_MODEL and update both occurrences
to match exactly (including the example export lines and any inline comment),
and search the rest of the README for other mentions of OLLAMA_MODEL to make
them consistent as well.
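A simple way to carry out the prompt's last step before editing (the canonical value is still to be chosen between `qwen2:0.5b` and `llama3`):

```bash
# List every README mention of OLLAMA_MODEL, with line numbers, so all
# occurrences can be updated to the same canonical default.
grep -n "OLLAMA_MODEL" README.md
```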
DFanso left a comment
Nice 🎊
Related Issue
Fixes #47
For Hacktoberfest Participants
Thank you for your contribution! 🎉