fix: remove timeout from squash summarization to support local LLMs #85
Removed the hard-coded 30-second timeout that was causing context deadline exceeded errors when using local LLM instances with large contexts. The summarization request now uses an unbounded context, allowing it to complete regardless of how long the local LLM takes. Fixes #80 Co-authored-by: Alvin <alvinunreal@users.noreply.github.com>
@claude add also in readme how to install from master latest version
Claude finished @alvinunreal's task — Installation instructions added ✓

### Summary

I've added a new section to the README titled "Install from Master (Latest Development Version)" with instructions on how to install TmuxAI from the master branch, covering both manual build and `go install` methods.

The changes have been committed and pushed to this PR branch.
Added a new section to the README explaining how to install TmuxAI from the master branch for users who want the latest development version. Includes both manual build and go install methods. Co-authored-by: Alvin <alvinunreal@users.noreply.github.com>
**Pull Request Review: Remove Timeout from Squash Summarization**

### Summary

This PR removes the hard-coded 30-second timeout from the squash summarization feature to support local LLM instances that may require more time to process large contexts. The change is minimal and directly addresses issue #80.

### ✅ Code Quality & Best Practices

**Strengths:**

**Suggestions:**
### 🐛 Potential Bugs & Issues

**Critical Concern:**

### ⚡ Performance Considerations

**Observations:**

**Suggestions:**

### 🔒 Security Concerns

**Low Risk:**

**Considerations:**

### 🧪 Test Coverage

**Current State:**

**Recommendations:**
### 📋 Overall Assessment

**Rating:** While this PR solves the immediate issue for local LLM users, it introduces a potential hang/freeze scenario with no recovery mechanism. The change trades one problem (timeout too short) for another (no timeout at all).

**Recommended Actions:**
**Alternative Approach:**

### 💡 Suggested Config Addition

```yaml
# config.example.yaml
squash:
  # Timeout for summarization requests in seconds
  # Set to 0 for no timeout (useful for local LLMs with large contexts)
  # Default: 300 (5 minutes)
  timeout: 300
```

Great work on identifying and addressing the issue! With the recommended improvements, this PR will provide a robust solution for both cloud and local LLM users. 🚀

cc @alvinunreal
**Code Review: PR #85 - Remove timeout from squash summarization**

### Summary

This PR addresses issue #80 by removing the hard-coded 30-second timeout from the squash summarization functionality to support local LLM instances that may take longer to process large contexts.

### ✅ Positive Aspects

### 🔍 Concerns & Recommendations

**1. Potential for Indefinite Hangs (High Priority)**

**Issue:** Using an unbounded context means a summarization request can hang indefinitely if the LLM never responds.
**Impact:** Users would have no way to recover except forcibly terminating the process.

**Recommendation:** Consider a configurable timeout approach:

```yaml
# In config.example.yaml, add:
squash_timeout: 300 # Timeout in seconds for squash operations, 0 for no timeout
```

```go
// In internal/squash.go:
func (m *Manager) summarizeChatHistory(messages []ChatMessage) (string, error) {
	// ... existing code ...

	// Create a context for the summarization request
	var ctx context.Context
	var cancel context.CancelFunc
	timeout := m.Config.SquashTimeout
	if timeout > 0 {
		ctx, cancel = context.WithTimeout(context.Background(), time.Duration(timeout)*time.Second)
		defer cancel()
	} else {
		ctx = context.Background()
	}

	summary, err := m.AiClient.GetResponseFromChatMessages(ctx, summarizationMessage, m.GetOpenRouterModel())

	// ... rest of code ...
}
```

This approach:
**2. User Experience During Long Operations (Medium Priority)**

**Issue:** The spinner that indicates processing gives the user no sense of how long a summarization has been running.

**Recommendation:** Consider implementing:
**3. Test Coverage (Medium Priority)**

**Observation:** While there is an existing test for this area, the new no-timeout path is not directly covered.

**Recommendation:** Add tests covering:

```go
// Test that squash completes successfully with no timeout
func TestSummarizeChatHistory_NoTimeout(t *testing.T) { ... }

// Test context cancellation handling
func TestSummarizeChatHistory_ContextCancellation(t *testing.T) { ... }
```

**4. README Documentation (Minor)**

**Observation:** The new "Install from Master" section is helpful, but it's worth noting:
**Suggestion:**

````markdown
### Install from Master (Latest Development Version)

To install the latest development version directly from the master branch:

```bash
# Clone the repository
git clone https://github.com/alvinunreal/tmuxai.git
cd tmuxai

# Build from source (requires Go 1.21 or later)
go build -o tmuxai
```
````

### 🔒 Security Assessment

No significant security concerns identified. The change doesn't introduce new attack vectors:
### ⚡ Performance Considerations

**Improved for Target Use Case:**

**Consideration:** Monitor for issues where squash operations consume significant resources over extended periods.

### 📊 Overall Assessment

**Status:** ✅ Approve with Recommendations

This PR successfully solves the immediate problem reported in issue #80. The code change is clean, well-documented, and minimally invasive. However, I strongly recommend implementing a configurable timeout (as outlined above) to provide users with more control and prevent indefinite hangs in edge cases.

**Suggested Path Forward:**

**Priority:** Medium-High (fixes a blocking issue for local LLM users)

🤖 Automated review by Claude Code
**Code Review for PR #85: Remove timeout from squash summarization**

### Summary

This PR addresses issue #80 by removing the hard-coded 30-second timeout in the `summarizeChatHistory` function, which was causing context deadline errors when using local LLM instances with large contexts.

### ✅ Positive Aspects

**1. Problem Identification & Solution**

**2. Code Quality**

**3. Documentation**