Ollama-CLI integration #68

Conversation
Walkthrough

Updates the CLI's Ollama integration: changes the GenerateCommitMessage call signature to accept apiKey and a fixed model, removes env-based URL/model usage, and revises LLM setup to prompt for a URL when Ollama is selected and an API key otherwise. Also removes a debug print in the store.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant CLI as CLI (createMsg)
    participant Store as Store
    participant Ollama as Ollama API
    rect rgba(200,230,255,0.3)
    note over CLI: New call signature
    User->>CLI: generate commit message
    CLI->>Store: read config + apiKey
    CLI->>Ollama: GenerateCommitMessage(config, changes, apiKey, "llama3:latest")
    Ollama-->>CLI: message / error
    CLI-->>User: output
    end
```
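For orientation, here is a minimal runnable sketch of the call shape in the first diagram; the stub function, string-typed parameters, and error handling are assumptions for illustration, not the repository's actual code:

```go
package main

import (
	"fmt"
	"os"

	"github.com/pterm/pterm"
)

// generateCommitMessage stands in for the real GenerateCommitMessage;
// only the parameter order (config, changes, apiKey, model) mirrors
// the walkthrough. For Ollama, apiKey carries the server URL.
func generateCommitMessage(config, changes, apiKey, model string) (string, error) {
	return fmt.Sprintf("feat: example message from %s at %s", model, apiKey), nil
}

func main() {
	apiKey := "http://localhost:11434" // for Ollama, this field holds the URL
	msg, err := generateCommitMessage("config", "diff --stat ...", apiKey, "llama3:latest")
	if err != nil {
		pterm.Error.Printf("Failed to generate commit message: %v\n", err)
		os.Exit(1)
	}
	fmt.Println(msg)
}
```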
```mermaid
sequenceDiagram
    autonumber
    actor User
    participant CLI as CLI (llmSetup)
    participant Store as Store
    alt Model == Ollama
    User->>CLI: select Ollama
    CLI->>User: prompt for URL
    User-->>CLI: enter URL
    CLI->>Store: save URL
    note right of CLI: Success: URL Updated
    else Other models
    User->>CLI: select other model
    CLI->>User: prompt for API Key
    User-->>CLI: enter API Key
    CLI->>Store: save API Key
    note right of CLI: Success: API Key Updated
    end
```
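The setup branching in the second diagram could look roughly like this; a minimal sketch assuming promptui for input and pterm for output (both appear elsewhere in this review), with the store write stubbed out:

```go
package main

import (
	"github.com/manifoldco/promptui"
	"github.com/pterm/pterm"
)

func main() {
	commitLLM := "Ollama" // normally chosen earlier via a selection prompt

	if commitLLM == "Ollama" {
		urlPrompt := promptui.Prompt{Label: "Enter Ollama URL"}
		url, err := urlPrompt.Run()
		if err != nil {
			pterm.Error.Printf("Prompt failed: %v\n", err)
			return
		}
		// store.UpdateAPIKey(...) would persist the URL here.
		pterm.Success.Printf("URL Updated: %s\n", url)
	} else {
		apiKeyPrompt := promptui.Prompt{Label: "Enter API Key", Mask: '*'}
		apiKey, err := apiKeyPrompt.Run()
		if err != nil {
			pterm.Error.Printf("Prompt failed: %v\n", err)
			return
		}
		_ = apiKey // would be persisted via store.UpdateAPIKey(...)
		pterm.Success.Println("API Key Updated")
	}
}
```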
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ❌ Failed checks (1 warning) | ✅ Passed checks (2 passed)
Actionable comments posted: 1
🧹 Nitpick comments (3)
cmd/cli/createMsg.go (2)
124-124: Consider making the Ollama model configurable.

The model is hard-coded to "llama3:latest". While the PR description acknowledges this as temporary, consider adding the model to the configuration to allow users to select different Ollama models without code changes.
If you'd like, I can propose a configuration schema change to support dynamic model selection for Ollama.
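One hypothetical shape for such a schema change (the `OllamaModel` field, JSON tags, and fallback helper are illustrative, not existing store code):

```go
package main

import "fmt"

// Hypothetical config extension; OllamaModel is not a field in the
// current store and is shown only to illustrate the suggestion.
type Config struct {
	CommitLLM   string `json:"commit_llm"`
	OllamaModel string `json:"ollama_model,omitempty"`
}

// ollamaModel falls back to today's hard-coded default when the user
// has not configured a model.
func ollamaModel(cfg Config) string {
	if cfg.OllamaModel != "" {
		return cfg.OllamaModel
	}
	return "llama3:latest"
}

func main() {
	fmt.Println(ollamaModel(Config{}))                          // llama3:latest
	fmt.Println(ollamaModel(Config{OllamaModel: "mistral:7b"})) // mistral:7b
}
```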
124-126: Clarify semantic overload: `apiKey` contains URL for Ollama.

The variable `apiKey` (from line 31) is used to pass the Ollama URL to `GenerateCommitMessage`. This creates confusion because:

- The variable name `apiKey` implies it contains an API key
- For Ollama, it actually contains the URL (set via the setup flow in `llmSetup.go`)
- The function parameter was renamed from `url` to `apiKey`, further obscuring intent

This semantic overload makes the code harder to understand and maintain. Consider one of these approaches:

- Option 1: Rename the store field to be more generic (e.g., `Credential` or `Config`) and update variable names accordingly
- Option 2: Add URL validation for Ollama to make the usage explicit

Example for Option 2: add validation before the switch statement:
```go
// Validate Ollama URL format if using Ollama
if commitLLM == "Ollama" {
	if !strings.HasPrefix(apiKey, "http://") && !strings.HasPrefix(apiKey, "https://") {
		pterm.Error.Printf("Invalid Ollama URL format. Expected http:// or https:// prefix.\n")
		os.Exit(1)
	}
}
```

Don't forget to add the import:
```go
import (
	"strings"
	// ... other imports
)
```

cmd/cli/llmSetup.go (1)
28-31: Consider extracting prompt creation to reduce duplication.

The `apiKeyPrompt` is defined in both `SetupLLM` and `UpdateLLM` with identical configuration. While this duplication is minor, extracting it to a helper function would improve maintainability.

Example helper function:
```go
func createPrompt(label string, mask rune) promptui.Prompt {
	prompt := promptui.Prompt{
		Label: label,
	}
	if mask != 0 {
		prompt.Mask = mask
	}
	return prompt
}
```

Usage:
```go
// In SetupLLM
apiKeyPrompt := createPrompt("Enter API Key", '*')

// In UpdateLLM
apiKeyPrompt := createPrompt("Enter API Key", '*')
```

Also applies to: 104-106
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- cmd/cli/createMsg.go (1 hunks)
- cmd/cli/llmSetup.go (4 hunks)
- cmd/cli/store/store.go (0 hunks)
💤 Files with no reviewable changes (1)
- cmd/cli/store/store.go
🧰 Additional context used
🧬 Code graph analysis (1)
cmd/cli/llmSetup.go (1)
cmd/cli/store/store.go (1)
UpdateAPIKey(317-357)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Build Go Binary (windows-latest)
🔇 Additional comments (2)
cmd/cli/llmSetup.go (2)
99-119: LGTM! Clear option selection logic.

The code correctly adapts the prompt options and labels based on the selected model. The logic clearly distinguishes between Ollama (which needs URL configuration) and other models (which need API key configuration).
144-146: Good: Dynamic success message based on model type.

The success message correctly reflects whether an API key or URL was updated, improving user experience.
DFanso left a comment:
LGTM 🎉
Description
The Ollama model can be set up using the CLI instead of setting an environment variable.
Type of Change
Changed the code to use the Ollama URL from config.json.
Related Issue
Fixes #66
Additional Notes
Currently Ollama is the only local model, so it is hard-coded; when other models are added, model selection can be made dynamic (see the sketch below).
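When that happens, the selection could be a simple prompt over available models; a hypothetical sketch (the model list is illustrative, and a real implementation could instead query Ollama's /api/tags endpoint for locally installed models):

```go
package main

import (
	"github.com/manifoldco/promptui"
	"github.com/pterm/pterm"
)

func main() {
	modelPrompt := promptui.Select{
		Label: "Select Ollama Model",
		Items: []string{"llama3:latest", "mistral:latest", "codellama:latest"},
	}
	_, model, err := modelPrompt.Run()
	if err != nil {
		pterm.Error.Printf("Prompt failed: %v\n", err)
		return
	}
	// The chosen model would then be saved to config.json and passed
	// to GenerateCommitMessage instead of the hard-coded default.
	pterm.Success.Printf("Model set to %s\n", model)
}
```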
For Hacktoberfest Participants
Thank you for your contribution! 🎉
Summary by CodeRabbit

- New Features
- Refactor
- Chores