# Adding openai configuration and new responses api #84
## Pull Request Review: Adding OpenAI Configuration and Responses API

### Overview

This PR adds support for OpenAI's Responses API alongside the existing Chat Completions API, introducing a prioritized provider selection system (OpenAI → Azure → OpenRouter). The implementation is well-structured with good test coverage.

### ✅ Strengths

**Code Quality**

**Architecture**

### 🐛 Potential Issues

**1. API Type Selection Logic - Potential Bug**
Test comment
## Code Review for PR #84: Adding OpenAI Configuration and New Responses API

### Summary

This PR adds support for OpenAI's Responses API alongside the existing Chat Completions API, providing users with multiple AI provider options. The implementation is well-structured with good test coverage.

### ✅ Strengths
### 🐛 Potential Issues

**1. Incorrect Content Type in Response Parsing (ai_client.go:392-396)**

The code checks for `content.Type == "output_text"`, but based on the test mock at line 93, the actual content type should be `"text"`, not `"output_text"`. This mismatch could cause the fallback parsing to fail.

Suggested fix: change the condition to `content.Type == "text"`.

**2. Store Field Hardcoded to True (ai_client.go:311)**

The `Store: true` field is always enabled for statefulness, but users may not want their conversations stored for privacy/cost reasons. No configuration option exists to control this behavior.

Recommendation: add a configuration option, or set it to false by default.

**3. Default Model gpt-5 May Not Exist (config_helpers.go:106)**

The code defaults to `gpt-5` when an OpenAI API key is configured but no model is specified. If this model doesn't exist yet, API calls will fail.

Suggested fix: default to `gpt-4o`, which is known to exist.
### 🔒 Security Concerns

**Positive:**

**Areas for improvement:**

Recommendation: add basic validation to catch configuration errors early, and be more selective about what gets logged from error responses.

### ⚡ Performance Considerations
### 📋 Code Quality & Test Coverage

**Code Quality - Strong:**

**Test Coverage - Good but could be better:**

Strong points:

Missing tests:
### 📝 Minor Suggestions
### 🎯 Recommendations

**Before merging (critical):**

**Future improvements:**
### Verdict

This is a solid addition that follows the codebase patterns well. The main concern is the content type mismatch, which could cause runtime failures. Once that's addressed, this PR will be ready to merge.

Recommendation: request changes for the content type fix, then approve.

🤖 This review was generated with assistance from Claude Code
Force-pushed: 5b6bbd9 → b3a9dad
## Code Review: Adding OpenAI Configuration and New Responses API

### Summary

This PR adds support for OpenAI's Responses API alongside the existing Chat Completions API, introducing a provider priority system (OpenAI → Azure OpenAI → OpenRouter). Overall, the implementation is solid with good test coverage. Below are my findings across different categories.

### ✅ Strengths
### 🐛 Potential Bugs & Issues

#### Critical

**1. Session Override Inconsistency in `GetModel()`**

```go
func (m *Manager) GetModel() string {
	// BUG: Uses m.Config.OpenAI.APIKey directly instead of m.GetOpenAIAPIKey()
	if m.Config.OpenAI.APIKey != "" {
		if m.Config.OpenAI.Model != "" {
			return m.GetOpenAIModel() // Good - uses getter
		}
		return "gpt-5-codex"
	}
	// Same issue with Azure
	if m.Config.AzureOpenAI.APIKey != "" { ... }
	// OpenRouter correctly uses getter
	return m.GetOpenRouterModel()
}
```

Fix: use the session-aware getters consistently:

```go
func (m *Manager) GetModel() string {
	if m.GetOpenAIAPIKey() != "" {
		model := m.GetOpenAIModel()
		if model != "" {
			return model
		}
		return "gpt-5-codex"
	}
	// ... similar for Azure
}
```

#### Medium

**2. API Key Check Inconsistency**

The method uses …

**3. Missing Validation for Model Name (config_helpers.go:128)**

The default model …

**4. Store Parameter Hardcoded (ai_client.go:311)**

```go
Store: true, // Enable statefulness by default
```

This is hardcoded without a configuration option. Consider making this configurable, since statefulness has implications for API usage and costs.

### 🎯 Code Quality & Best Practices

#### Good Practices

✅ Proper use of …

#### Suggestions

**1. Type Safety for ResponseInput (ai_client.go:53)**

```go
type ResponseInput interface{}
```

Using a concrete slice type instead:

```go
type ResponseInput []Message
```

This makes the API clearer and catches type errors at compile time.

**2. Magic Strings in Response Parsing (ai_client.go:391-393)**

```go
if (content.Type == "output_text" || content.Type == "text") && content.Text != "" {
```

Define constants for these type strings:

```go
const (
	ResponseTypeMessage   = "message"
	ResponseTypeReasoning = "reasoning"
	ContentTypeOutputText = "output_text"
	ContentTypeText       = "text"
	StatusCompleted       = "completed"
)
```

**3. Duplicate Error Logging (ai_client.go:280, 402)**

Both methods log "No response content" errors. Consider extracting a shared helper or standardizing the format.

### ⚡ Performance Considerations

#### Minor Optimizations

**1. String Concatenation in URL Building (ai_client.go:319)**

```go
url := baseURL + "/responses"
```

For consistency with other parts of the codebase, consider using …

**2. Repeated Config Access**

The …

#### No Major Concerns
### 🔒 Security Concerns

#### Good Security Practices

✅ API keys properly masked in config display (config_helpers.go:214-220)

#### Recommendations

**1. API Key Validation**

Consider adding validation for API key format at configuration load time to catch configuration errors early:

```go
func validateAPIKey(key string, provider string) error {
	if key == "" {
		return nil // Empty is ok, just won't be used
	}
	// Add provider-specific validation
	// e.g., OpenAI keys start with "sk-"
	return nil
}
```

**2. Response Size Limits**

The code reads entire response bodies into memory:

```go
body, err := io.ReadAll(resp.Body)
```

Consider adding a size limit to prevent memory exhaustion:

```go
body, err := io.ReadAll(io.LimitReader(resp.Body, maxResponseSize))
```

### 🧪 Test Coverage

#### Excellent Coverage ✅
#### Suggested Additional Tests

### 📝 Documentation

#### Excellent Documentation ✅

The README updates are thorough and well-structured:

#### Minor Suggestion

In README.md line 117, the model name …

### 🎯 Summary

This is a well-implemented PR with good structure, testing, and documentation. The main concerns are:

The code is production-ready after addressing the session override issue. Great work on maintaining backward compatibility and adding comprehensive tests! 🚀

Recommendation: Approve with minor changes requested for the session override handling.
Force-pushed: b3a9dad → dfb624c
## Pull Request Review: OpenAI Responses API Integration

### Overview

This PR adds support for OpenAI's Responses API alongside the existing Chat Completions API, with a priority-based provider selection system (OpenAI → Azure → OpenRouter).

### ✅ Strengths

**Code Quality**

**Best Practices**
hi man what are we doing now? porting to native tool call? i gave up on this when i saw the tmux mcp which can do everything this does
Related #82