When enabled, the commit-msg hook now uses Claude to generate richer AI-* trailers by analyzing the staged diff alongside the commit message.

Changes:
- Add `enrich` (default: true) and `enrichTimeout` (default: 5000) to ICommitMsgConfig
- Add AI-Source trailer to indicate 'llm-enrichment' or 'heuristic'
- Update DI container to wire llmClient to CommitMsgHandler
- Implement attemptEnrichment() with timeout and fallback
- Add selectBestFact() to pick the highest-priority fact from LLM results
- Add comprehensive unit tests (13 tests)

Gracefully falls back to heuristic analysis when:
- No ANTHROPIC_API_KEY set (llmClient is null)
- enrich config is false
- LLM call times out (default 5s)
- LLM returns empty facts or throws an error

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
AI-Model: claude-opus-4-5-20251101
AI-Decision: add LLM enrichment to commit-msg hook (GIT-87). When enabled, the commit-msg hook now uses Claude to generate richer
AI-Confidence: medium
AI-Tags: application, handlers, domain, entities, hooks, utils, infrastructure, tests, unit
AI-Lifecycle: project
AI-Memory-Id: 345fe99b
📝 Walkthrough

Adds optional LLM-based enrichment to the commit message handler: when enabled and an LLM client is available, the handler attempts timed enrichment using staged diffs, falls back to the existing heuristic analysis on failure, introduces an AI-Source trailer key, updates DI wiring and default config, and adds tests covering enrichment and fallback paths.
Sequence Diagram

```mermaid
sequenceDiagram
    participant GitHook as Git Hook
    participant CMH as CommitMsgHandler
    participant Config as Config Loader
    participant Analyzer as CommitAnalyzer
    participant LLM as LLM Client
    GitHook->>CMH: handle(message, stagedFiles, cwd)
    CMH->>Config: load commit-msg config (enrich, enrichTimeout)
    Config-->>CMH: ICommitMsgConfig
    alt enrichment enabled && llm client present
        CMH->>CMH: truncateDiff(stagedFiles) (MAX_DIFF_LENGTH)
        CMH->>LLM: attemptEnrichment(message, diff) (bounded by enrichTimeout)
        alt LLM returns facts
            LLM-->>CMH: ILLMExtractedFact[]
            CMH->>CMH: selectBestFact & buildEnrichedTrailers(facts, config, existingMessage)
            CMH-->>GitHook: write enriched trailers (AI-Source, AI-Decision, AI-Tags, ...)
        else LLM error / timeout / null
            CMH->>Analyzer: analyze(message, config)
            Analyzer-->>CMH: analysis
            CMH->>CMH: buildTrailers(analysis, config, existingMessage) [heuristic]
            CMH-->>GitHook: write heuristic trailers (AI-Source: heuristic, ...)
        end
    else enrichment disabled || no llm
        CMH->>Analyzer: analyze(message, config)
        Analyzer-->>CMH: analysis
        CMH->>CMH: buildTrailers(analysis, config, existingMessage)
        CMH-->>GitHook: write heuristic trailers
    end
```
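The bounded-enrichment step in the diagram boils down to racing the LLM call against a timer and clearing the timer afterwards. A minimal sketch — the `attemptEnrichment` name follows the PR, but the callback shape and `EnrichedFact` type are illustrative assumptions, not the real implementation:

```typescript
// Illustrative stand-in for the PR's attemptEnrichment(): race the LLM
// call against a timeout, fall back to null (-> heuristic path) on
// timeout or error, and always clear the timer.
type EnrichedFact = { content: string; type: string; confidence: string; tags: string[] };

async function attemptEnrichment(
  enrichCommit: () => Promise<EnrichedFact[] | null>, // assumed callback shape
  timeoutMs: number,
): Promise<EnrichedFact[] | null> {
  let timeoutId: ReturnType<typeof setTimeout> | undefined;
  const timeoutPromise = new Promise<null>((resolve) => {
    timeoutId = setTimeout(() => resolve(null), timeoutMs);
  });
  try {
    // Whichever settles first wins; a timeout resolves to null.
    return await Promise.race([enrichCommit(), timeoutPromise]);
  } catch {
    return null; // LLM errors also fall back to the heuristic path
  } finally {
    clearTimeout(timeoutId); // don't leave a live timer behind
  }
}
```

A `null` result is the signal to run the heuristic `analyze()` path instead.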
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed (warning)
Actionable comments posted: 3
🤖 Fix all issues with AI agents
In `@src/application/handlers/CommitMsgHandler.ts`:
- Around line 165-174: The timeout Promise currently creates a setTimeout that
is never cleared; capture the timer id when creating timeoutPromise (e.g., const
timeoutId = setTimeout(...)) and call clearTimeout(timeoutId) after Promise.race
when enrichmentPromise wins (i.e., when result !== null) so the timer is
cancelled; ensure this cleanup happens before returning the enrichment result
and keep references to the existing symbols enrichmentPromise, timeoutPromise,
timeoutMs and the surrounding logic in CommitMsgHandler.enrichCommit (or where
enrichmentPromise is created) to locate the change.
In `@tests/unit/application/handlers/CommitMsgHandler.test.ts`:
- Around line 74-82: The createMockLogger() mock in the tests does not implement
the full ILogger interface—specifically add no-op implementations for trace and
fatal and implement isLevelEnabled (returning a boolean, e.g., false or true
based on desired test behavior) so the mock satisfies ILogger; update the
createMockLogger() function used by the CommitMsgHandler tests to include these
methods (trace, fatal, isLevelEnabled) to prevent TypeScript or runtime errors
when the handler calls them.
- Around line 146-160: Tests create temporary repos with setupTestRepo and rely
on finally blocks to call cleanupTestRepo, which can leave stale testDir between
it() blocks; change the test file to register an afterEach hook that calls
cleanupTestRepo (and ensure testDir is reset there) instead of relying on
finally in individual tests, and remove redundant cleanup calls inside tests so
cleanupTestRepo and testDir are run/cleared automatically after each spec.
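The first prompt above (completing the logger mock) might be satisfied by something like the following sketch; the `ILogger` member list is inferred from the review comment, not copied from the repo:

```typescript
// No-op logger mock covering the members named in the review: the usual
// debug/info/warn/error plus trace, fatal, and isLevelEnabled.
// The ILogger shape below is an assumption based on the comment.
interface ILogger {
  trace(message: string): void;
  debug(message: string): void;
  info(message: string): void;
  warn(message: string): void;
  error(message: string): void;
  fatal(message: string): void;
  isLevelEnabled(level: string): boolean;
}

function createMockLogger(): ILogger {
  const noop = (): void => {};
  return {
    trace: noop,
    debug: noop,
    info: noop,
    warn: noop,
    error: noop,
    fatal: noop,
    isLevelEnabled: () => false, // keep unit-test output silent
  };
}
```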
```typescript
let testDir: string;

function setupTestRepo(): string {
  testDir = mkdtempSync(join(tmpdir(), 'git-mem-test-'));
  execSync('git init', { cwd: testDir, stdio: 'pipe' });
  execSync('git config user.email "test@test.com"', { cwd: testDir, stdio: 'pipe' });
  execSync('git config user.name "Test"', { cwd: testDir, stdio: 'pipe' });
  return testDir;
}

function cleanupTestRepo(): void {
  if (testDir) {
    rmSync(testDir, { recursive: true, force: true });
  }
}
```
🧹 Nitpick | 🔵 Trivial
Test cleanup should use afterEach for reliable cleanup.
cleanupTestRepo() is called in finally blocks, but if a test file has multiple it() blocks sharing state, a failure mid-test could leave stale directories. Consider using afterEach hook for consistent cleanup.
♻️ Suggested improvement using afterEach

```diff
+import { describe, it, beforeEach, afterEach, mock } from 'node:test';
 ...
 let testDir: string;

 function setupTestRepo(): string {
   testDir = mkdtempSync(join(tmpdir(), 'git-mem-test-'));
   execSync('git init', { cwd: testDir, stdio: 'pipe' });
   execSync('git config user.email "test@test.com"', { cwd: testDir, stdio: 'pipe' });
   execSync('git config user.name "Test"', { cwd: testDir, stdio: 'pipe' });
   return testDir;
 }

-function cleanupTestRepo(): void {
-  if (testDir) {
-    rmSync(testDir, { recursive: true, force: true });
-  }
-}
+// Then inside describe blocks:
+describe('CommitMsgHandler', () => {
+  afterEach(() => {
+    if (testDir) {
+      rmSync(testDir, { recursive: true, force: true });
+      testDir = '';
+    }
+  });
+  // ... tests without try/finally cleanup
+});
```

🤖 Prompt for AI Agents
In `@tests/unit/application/handlers/CommitMsgHandler.test.ts` around lines 146 -
160, Tests create temporary repos with setupTestRepo and rely on finally blocks
to call cleanupTestRepo, which can leave stale testDir between it() blocks;
change the test file to register an afterEach hook that calls cleanupTestRepo
(and ensure testDir is reset there) instead of relying on finally in individual
tests, and remove redundant cleanup calls inside tests so cleanupTestRepo and
testDir are run/cleared automatically after each spec.
Acknowledged - keeping try/finally for now as it works reliably. Can refactor to afterEach in a follow-up if needed.
@TonyCasey, sounds good! The try/finally approach is solid for now. Feel free to revisit whenever you get a chance. 👍
🧠 Learnings used
Learnt from: CR
Repo: TonyCasey/git-mem PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-14T09:43:48.659Z
Learning: Applies to tests/integration/**/*.test.ts : Write integration tests that create real temporary git repos in os.tmpdir(), run actual git commands, and clean up in after() hooks in tests/integration/
Learnt from: TonyCasey
Repo: TonyCasey/lisa PR: 166
File: tests/unit/src/lib/infrastructure/services/PreferenceStore.test.ts:15-285
Timestamp: 2026-02-01T13:40:07.340Z
Learning: In tests/unit/**/PreferenceStore.test.ts and similar file-based infrastructure service tests: File-based services (PreferenceStore, SessionCaptureService, CronService, etc.) use real filesystem operations with unique temp directories (fs.mkdtempSync) and cleanup in afterEach. This is the established pattern in the codebase for testing services whose purpose is file I/O. Tests remain fast (~80ms), isolated, and deterministic.
Learnt from: CR
Repo: TonyCasey/lisa PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-01-30T20:19:52.042Z
Learning: Applies to tests/**/*.{ts,tsx,js} : Unit tests must be fast, isolated, and mock all dependencies. Integration tests should use real backends and test contracts.
Learnt from: TonyCasey
Repo: TonyCasey/git-mem PR: 49
File: src/hooks/post-commit.ts:25-34
Timestamp: 2026-02-14T10:03:02.254Z
Learning: In src/hooks/post-commit.ts, the post-commit hook script intentionally suppresses stderr with `2>/dev/null` to keep commits fast and non-intrusive; users can debug by running `git-mem hook post-commit` manually with `GIT_MEM_LOG_LEVEL=debug`.
Learnt from: CR
Repo: TonyCasey/lisa PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-06T11:57:30.539Z
Learning: Applies to tests/**/*.test.ts : Follow Jest testing patterns and fixtures as defined in .lisa/rules/testing.md for TypeScript tests
Learnt from: CR
Repo: TonyCasey/git-mem PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-14T09:43:48.659Z
Learning: Applies to tests/unit/**/*.test.ts : Write unit tests that mock dependencies manually without testing frameworks in tests/unit/
Learnt from: CR
Repo: TonyCasey/lisa PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-06T11:57:30.539Z
Learning: Applies to tests/**/*.test.ts : Follow testing pyramid and mocking strategies as defined in .lisa/rules/testing-principles.md
Learnt from: CR
Repo: TonyCasey/git-mem PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-02-14T09:43:48.659Z
Learning: Applies to .git-mem.json : Use .git-mem.json at project root for hook configuration, which is loaded by src/hooks/utils/config.ts and must never throw
Learnt from: TonyCasey
Repo: TonyCasey/lisa PR: 177
File: tests/unit/src/lib/infrastructure/services/NlCurationService.test.ts:137-681
Timestamp: 2026-02-01T22:49:49.668Z
Learning: In tests/unit/**/*.test.ts: The established project-wide test naming convention uses the "should ..." pattern (e.g., "should return correct value when condition met") rather than "method_givenCondition_shouldExpectedOutcome" pattern. New tests should maintain consistency with this convention.
Learnt from: CR
Repo: TonyCasey/lisa PR: 0
File: AGENTS.md:0-0
Timestamp: 2026-02-01T15:42:20.681Z
Learning: Applies to tests/**/*.{test,spec}.{ts,tsx,js} : Test files must follow the Arrange-Act-Assert pattern and have meaningful names like `method_givenCondition_shouldExpectedOutcome`.
Learnt from: TonyCasey
Repo: TonyCasey/lisa PR: 27
File: tests/unit/src/lib/infrastructure/services/MemoryService.timeout.test.ts:136-223
Timestamp: 2026-01-23T15:19:20.556Z
Learning: In tests/unit/**/MemoryService.timeout.test.ts and similar timeout tests: TonyCasey prefers using real timers with conservative thresholds (e.g., 500ms for operations expected to complete in <50ms) to test actual timeout behavior, rather than fake timers. This approach is intentional to validate production-like conditions, with a plan to adjust if CI flakiness occurs.
Learnt from: CR
Repo: TonyCasey/agenisys PR: 0
File: apps/n8n-workflows/mcp/CLAUDE.md:0-0
Timestamp: 2025-12-09T09:35:06.992Z
Learning: Applies to apps/n8n-workflows/mcp/**/mcp-test/**/*.{ts,js} : Validate AI-generated content using AI content indicators such as 'strategy', 'analysis', 'recommendations', 'optimization'
Pull request overview
This PR adds optional LLM enrichment to the commit-msg hook, enabling richer AI-* trailer generation based on staged diff analysis. The feature gracefully falls back to existing heuristic analysis when LLM enrichment is unavailable or fails.
Changes:
- Adds `enrich` (boolean) and `enrichTimeout` (number) configuration options to commit-msg hook config with defaults of `true` and `5000` ms
- Implements the LLM enrichment flow in `CommitMsgHandler` with timeout-based fallback to heuristic analysis
- Introduces a new `AI-Source` trailer to distinguish between the 'llm-enrichment' and 'heuristic' analysis paths
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 6 comments.
Show a summary per file
| File | Description |
|---|---|
| src/domain/interfaces/IHookConfig.ts | Adds enrich and enrichTimeout fields to ICommitMsgConfig interface |
| src/domain/entities/ITrailer.ts | Adds AI-Source trailer key to track analysis source |
| src/hooks/utils/config.ts | Sets default values for new config fields (enrich: true, enrichTimeout: 5000ms) |
| src/infrastructure/di/container.ts | Modifies llmClient registration to always create when API key available (removes enrich option check) |
| src/application/handlers/CommitMsgHandler.ts | Implements LLM enrichment logic with attemptEnrichment, buildEnrichedTrailers, and selectBestFact methods |
| tests/unit/application/handlers/CommitMsgHandler.test.ts | Adds comprehensive unit tests for enrichment scenarios (13 new tests) |
```typescript
    return bus;
  }).singleton(),

  llmClient: asFunction(() => {
```
This change breaks the existing container behavior and will cause test failures. The existing tests in tests/unit/infrastructure/di/container.test.ts (lines 119-133) expect that when createContainer({ enrich: false }) is called, the llmClient should be null.
Removing the options?.enrich check means the LLM client will always be created when ANTHROPIC_API_KEY is available, regardless of the enrich option. This is a breaking change to the container API.
If the intention is to always create the LLM client and control usage via the handler's config, then the IContainerOptions.enrich field should be removed and the container tests should be updated accordingly. Otherwise, restore the original conditional logic.
```diff
 llmClient: asFunction(() => {
+  // Preserve existing container behavior: when `enrich` is explicitly
+  // set to false, do not create an LLM client and expose `null` instead.
+  if (options && options.enrich === false) {
+    return null;
+  }
```
Fixed in 2d50f69 - updated to only skip LLM client creation when enrich is explicitly false. When not specified, it now attempts to create the client (allowing hooks to use enrichment without explicit opt-in). Updated test to reflect this behavior.
```typescript
// Merge tags from all facts
const allTags = new Set<string>();
for (const fact of facts) {
  for (const tag of fact.tags) {
    allTags.add(tag);
  }
}
if (allTags.size > 0) {
  trailers.push({ key: AI_TRAILER_KEYS.TAGS, value: [...allTags].join(', ') });
}
```
The buildEnrichedTrailers method doesn't respect the config.inferTags setting, unlike buildTrailers (line 382). When using LLM enrichment, tags are always included if available, even if inferTags is disabled in the config.
For consistency, this should check config.inferTags before adding tags, similar to how the heuristic path handles it. If the intent is that LLM-provided tags should always be included regardless of the config, this should be documented in a comment.
```diff
-// Merge tags from all facts
-const allTags = new Set<string>();
-for (const fact of facts) {
-  for (const tag of fact.tags) {
-    allTags.add(tag);
-  }
-}
-if (allTags.size > 0) {
-  trailers.push({ key: AI_TRAILER_KEYS.TAGS, value: [...allTags].join(', ') });
-}
+// Merge tags from all facts (respect inferTags setting)
+if (config.inferTags) {
+  const allTags = new Set<string>();
+  for (const fact of facts) {
+    for (const tag of fact.tags) {
+      allTags.add(tag);
+    }
+  }
+  if (allTags.size > 0) {
+    trailers.push({ key: AI_TRAILER_KEYS.TAGS, value: [...allTags].join(', ') });
+  }
+}
```
Fixed in 2d50f69 - buildEnrichedTrailers now respects config.inferTags. Also added test for this scenario.
```typescript
// Race enrichment against timeout
const enrichmentPromise = this.llmClient.enrichCommit({
  sha: 'staged', // No SHA yet - use placeholder
  subject,
  body,
  diff: truncatedDiff,
  filesChanged: stagedFiles,
});

const timeoutPromise = new Promise<null>((resolve) => {
  setTimeout(() => resolve(null), timeoutMs);
});

const result = await Promise.race([enrichmentPromise, timeoutPromise]);
```
When the timeout is reached, the Promise.race completes, but the LLM API call continues running in the background. This can lead to:
- Wasted API credits if the call eventually succeeds
- Potential resource leaks in the HTTP client
- Unhandled promise rejections if the LLM call fails after timeout
Consider using AbortController/AbortSignal to properly cancel the LLM request when the timeout is reached. The enrichCommit interface would need to accept an optional signal parameter for this to work properly.
```diff
-// Race enrichment against timeout
-const enrichmentPromise = this.llmClient.enrichCommit({
-  sha: 'staged', // No SHA yet - use placeholder
-  subject,
-  body,
-  diff: truncatedDiff,
-  filesChanged: stagedFiles,
-});
-const timeoutPromise = new Promise<null>((resolve) => {
-  setTimeout(() => resolve(null), timeoutMs);
-});
-const result = await Promise.race([enrichmentPromise, timeoutPromise]);
+// Race enrichment against timeout, with abort support
+const abortController = new AbortController();
+const { signal } = abortController;
+const enrichmentPromise = (this.llmClient.enrichCommit as any)({
+  sha: 'staged', // No SHA yet - use placeholder
+  subject,
+  body,
+  diff: truncatedDiff,
+  filesChanged: stagedFiles,
+  signal,
+}).catch((err: any) => {
+  // Treat abort-related errors as a timed-out enrichment
+  if (err && (err.name === 'AbortError' || err.code === 'ABORT_ERR')) {
+    return null;
+  }
+  throw err;
+});
+let timeoutId: ReturnType<typeof setTimeout>;
+const timeoutPromise = new Promise<null>((resolve) => {
+  timeoutId = setTimeout(() => {
+    // Abort the LLM request when the timeout is reached
+    abortController.abort();
+    resolve(null);
+  }, timeoutMs);
+});
+const result = await Promise.race([enrichmentPromise, timeoutPromise]);
+if (timeoutId) {
+  clearTimeout(timeoutId);
+}
```
Good point about AbortController. For now, clearing the timeout is the minimal fix. The LLM call will complete in the background but won't affect the commit. Can add abort support in a follow-up if API costs become a concern.
```typescript
verified: 4,
high: 3,
medium: 2,
low: 1,
```
The selectBestFact method doesn't handle the 'uncertain' confidence level in the confidencePriority map, but it exists in the ConfidenceLevel type ('verified' | 'high' | 'medium' | 'low' | 'uncertain'). While the current code uses a default of 0 for unknown values, it would be more explicit and maintainable to include 'uncertain' with a priority value (likely 0 or -1) in the map.
```diff
 low: 1,
+uncertain: 0,
```
Fixed in 2d50f69 - added 'uncertain: 0' to the confidencePriority map.
```typescript
const hasAgent = existingMessage.includes('AI-Agent:');
const hasModel = existingMessage.includes('AI-Model:');
```
When checking for existing trailers with existingMessage.includes('AI-Agent:'), this could produce false positives if 'AI-Agent:' appears in the commit message body itself (e.g., in a quoted text or discussion about the AI-Agent trailer).
Consider using a more robust check that specifically looks for the trailer in the git trailer block, or use the git CLI to parse existing trailers. This same issue affects the AI-Model check on line 263 and similar checks in buildTrailers (lines 352-353).
Acknowledged - the simple string check is a pragmatic trade-off. False positives are unlikely in practice since 'AI-Agent:' is an unusual string to include in commit message bodies. Can revisit if it becomes an issue.
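For anyone who does want the stricter check later, one low-cost option is to scope the search to the final paragraph of the message, which is where git places trailers. A sketch only — this helper is hypothetical, and git's real trailer-parsing rules (per `git interpret-trailers`) are stricter:

```typescript
// Only treat a key as present when it starts a line in the last
// paragraph of the message, roughly mirroring how git scopes its
// trailer search to the final block.
function hasTrailer(message: string, key: string): boolean {
  const paragraphs = message.trimEnd().split(/\n\s*\n/);
  const trailerBlock = paragraphs[paragraphs.length - 1] ?? '';
  return trailerBlock.split('\n').some((line) => line.startsWith(`${key}:`));
}
```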
```typescript
// Merge tags from all facts
const allTags = new Set<string>();
for (const fact of facts) {
  for (const tag of fact.tags) {
    allTags.add(tag);
  }
}
if (allTags.size > 0) {
  trailers.push({ key: AI_TRAILER_KEYS.TAGS, value: [...allTags].join(', ') });
}
```
There's no test coverage for the scenario where inferTags: false is set in the config when using LLM enrichment. Given that buildEnrichedTrailers doesn't respect this config flag (unlike buildTrailers), a test should be added to verify the expected behavior and document whether this is intentional.
Add a test case that sets enrich: true, inferTags: false and verifies whether tags should still be included from LLM results or not.
Fixed in 2d50f69 - added test 'should not include tags when inferTags is false' that verifies LLM enrichment respects the inferTags config.
- Clear timeout timer when LLM enrichment completes
- Add 'uncertain' to confidence priority map
- Respect inferTags config in buildEnrichedTrailers
- Fix mock logger to include all ILogger methods
- Update container to attempt LLM client creation by default
- Add test for inferTags:false with LLM enrichment

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
AI-Model: claude-opus-4-5-20251101
AI-Gotcha: address PR review feedback (GIT-87). - Clear timeout timer when LLM enrichment completes
AI-Confidence: medium
AI-Tags: application, handlers, infrastructure, tests, unit
AI-Lifecycle: project
AI-Memory-Id: 5a79a51f
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/unit/application/handlers/CommitMsgHandler.test.ts`:
- Line 8: The import list in CommitMsgHandler.test.ts includes an unused symbol
`mock` from 'node:test'; remove `mock` from the import statement (the import
currently: "import { describe, it, beforeEach, mock } from 'node:test';") so it
only imports the used symbols (describe, it, beforeEach), or alternatively use
`mock` in the test if it was intended—update the import accordingly in the test
file.
Remove unused `mock` import from node:test in CommitMsgHandler tests.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
AI-Agent: Claude-Code/2.1.42
AI-Model: claude-opus-4-5-20251101
AI-Fact: remove unused mock import (GIT-87). Remove unused `mock` import from node:test in CommitMsgHandler tests.
AI-Confidence: low
AI-Tags: tests, unit, application, handlers
AI-Lifecycle: project
AI-Memory-Id: 451c45dd
Pull request overview
Copilot reviewed 7 out of 7 changed files in this pull request and generated 5 comments.
```typescript
// Parse commit message for LLM input
const lines = message.split('\n');
const subject = lines[0] || '';
const body = lines.slice(2).join('\n').trim(); // Skip blank line after subject
```
Commit message body parsing for enrichment always does lines.slice(2) (assuming a blank line after the subject). If a commit message has no blank separator line, the first body line will be dropped. Consider using the same "first blank line" logic as CommitAnalyzer.parseConventionalCommit() (or reusing that method) to derive subject/body reliably.
```diff
-const body = lines.slice(2).join('\n').trim(); // Skip blank line after subject
+let body = '';
+if (lines.length > 1) {
+  const rest = lines.slice(1);
+  const firstBlankIdx = rest.findIndex((line) => line.trim() === '');
+  const bodyLines =
+    firstBlankIdx === -1 ? rest : rest.slice(firstBlankIdx + 1);
+  body = bodyLines.join('\n').trim();
+}
```
```typescript
const trailerKey = MEMORY_TYPE_TO_TRAILER_KEY[bestFact.type];
if (trailerKey) {
  const truncatedContent = bestFact.content.slice(0, 200);
  trailers.push({ key: trailerKey, value: truncatedContent });
}
```
LLM-provided bestFact.content is written directly into a git trailer value. Since LLM output can contain newlines or other control characters, this can make git interpret-trailers fail or produce malformed commit messages. Sanitize trailer values to a single line (e.g., collapse whitespace / strip \r/\n) before appending.
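A minimal sanitizer along the lines suggested — the function name is hypothetical, and the 200-character cap mirrors the truncation already shown in this thread:

```typescript
// Collapse newlines/tabs to single spaces so LLM output stays a valid
// single-line git trailer value, then cap the length.
function sanitizeTrailerValue(raw: string, maxLength = 200): string {
  return raw
    .replace(/[\r\n\t]+/g, ' ') // trailer values must be single-line
    .replace(/ {2,}/g, ' ')
    .trim()
    .slice(0, maxLength);
}
```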
```typescript
// Merge tags from all facts (respect inferTags setting)
if (config.inferTags) {
  const allTags = new Set<string>();
  for (const fact of facts) {
    for (const tag of fact.tags) {
      allTags.add(tag);
    }
  }
  if (allTags.size > 0) {
    trailers.push({ key: AI_TRAILER_KEYS.TAGS, value: [...allTags].join(', ') });
  }
```
Similarly, LLM-provided tags are merged and emitted without normalization/sanitization. Tags containing newlines/commas or leading/trailing whitespace can break the AI-Tags trailer format. Consider trimming each tag and filtering out empty/unsafe values before joining.
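A sketch of that normalization (hypothetical helper; commas are stripped because the `AI-Tags` value is comma-joined):

```typescript
// Trim each LLM-provided tag, strip newlines/commas that would break
// the comma-separated AI-Tags format, drop empties, and dedupe.
function normalizeTags(tags: string[]): string[] {
  const seen = new Set<string>();
  for (const tag of tags) {
    const clean = tag.replace(/[\r\n,]+/g, ' ').replace(/ {2,}/g, ' ').trim();
    if (clean.length > 0) {
      seen.add(clean);
    }
  }
  return [...seen];
}
```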
```typescript
/**
 * Select the best fact from LLM results.
 * Priority: decision > gotcha > convention > fact
 * Within same type: verified > high > medium > low
```
The doc comment says confidence priority is verified > high > medium > low, but the implementation also handles uncertain (and will treat unknowns as 0). Update the comment to match the actual ordering (including uncertain) to avoid confusion when maintaining the ranking logic.
```diff
- * Within same type: verified > high > medium > low
+ * Within same type: verified > high > medium > low > uncertain (unknown treated as lowest/0)
```
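Put together, the documented ordering corresponds to a ranking like the following. The type names and numeric priority values are assumptions pieced together from the comments in this thread, not the PR's actual code:

```typescript
// Illustrative selectBestFact(): type priority first
// (decision > gotcha > convention > fact), then confidence
// (verified > high > medium > low > uncertain); unknown types rank 0.
type Confidence = 'verified' | 'high' | 'medium' | 'low' | 'uncertain';
interface RankedFact { type: string; confidence: Confidence; content: string }

const TYPE_PRIORITY: Record<string, number> = { decision: 4, gotcha: 3, convention: 2, fact: 1 };
const CONFIDENCE_PRIORITY: Record<Confidence, number> = {
  verified: 4, high: 3, medium: 2, low: 1, uncertain: 0,
};

function selectBestFact(facts: RankedFact[]): RankedFact | null {
  let best: RankedFact | null = null;
  for (const fact of facts) {
    if (best === null) {
      best = fact;
      continue;
    }
    const t = (TYPE_PRIORITY[fact.type] ?? 0) - (TYPE_PRIORITY[best.type] ?? 0);
    if (t > 0 || (t === 0 && CONFIDENCE_PRIORITY[fact.confidence] > CONFIDENCE_PRIORITY[best.confidence])) {
      best = fact;
    }
  }
  return best;
}
```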
```typescript
 * and fallback to heuristic analysis.
 */

import { describe, it, beforeEach } from 'node:test';
```
beforeEach is imported from node:test but never used in this file, which will fail the repo's @typescript-eslint/no-unused-vars rule. Remove the unused import (or add a beforeEach if it was intended).
```diff
-import { describe, it, beforeEach } from 'node:test';
+import { describe, it } from 'node:test';
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@tests/unit/application/handlers/CommitMsgHandler.test.ts`:
- Around line 118-145: The timeout branch in createMockLLMClient.enrichCommit
uses a 10s setTimeout which leaves a long-running timer; change it to a short
delay (e.g., 20–100ms) and unref the timer so it won't keep the Node process
alive (create the timer via const t = setTimeout(...); t.unref(); or call
setTimeout(...).unref() if supported) and resolve the promise after that short
delay to simulate a slow response without leaving open handles.
```typescript
function createMockLLMClient(
  result?: Partial<ILLMEnrichmentResult> | 'error' | 'timeout',
): ILLMClient {
  return {
    enrichCommit: async () => {
      if (result === 'error') {
        throw new Error('LLM API error');
      }
      if (result === 'timeout') {
        // Simulate a very slow response (will be raced against timeout)
        await new Promise((resolve) => setTimeout(resolve, 10000));
        return { facts: [], usage: { inputTokens: 0, outputTokens: 0 } };
      }
      return {
        facts: [
          {
            content: 'Using JWT for stateless authentication',
            type: 'decision',
            confidence: 'high',
            tags: ['auth', 'jwt'],
          },
        ],
        usage: { inputTokens: 100, outputTokens: 50 },
        ...result,
      } as ILLMEnrichmentResult;
    },
  };
}
```
Timeout mock uses a 10s timer; keep tests fast and avoid open handles.
The 10s delay will keep a live timer after the timeout race resolves, slowing the suite and risking CI timeouts. Prefer a short delay and unref the timer.
💡 Suggested fix

```diff
 if (result === 'timeout') {
   // Simulate a very slow response (will be raced against timeout)
-  await new Promise((resolve) => setTimeout(resolve, 10000));
+  await new Promise((resolve) => {
+    const t = setTimeout(resolve, 200);
+    t.unref?.();
+  });
   return { facts: [], usage: { inputTokens: 0, outputTokens: 0 } };
 }
```

🤖 Prompt for AI Agents
In `@tests/unit/application/handlers/CommitMsgHandler.test.ts` around lines 118 -
145, The timeout branch in createMockLLMClient.enrichCommit uses a 10s
setTimeout which leaves a long-running timer; change it to a short delay (e.g.,
20–100ms) and unref the timer so it won't keep the Node process alive (create
the timer via const t = setTimeout(...); t.unref(); or call
setTimeout(...).unref() if supported) and resolve the promise after that short
delay to simulate a slow response without leaving open handles.
Summary
Changes
- `ICommitMsgConfig`: Add `enrich: boolean` (default: true) and `enrichTimeout: number` (default: 5000)
- `ITrailer`: Add `AI-Source` trailer key
- `CommitMsgHandler`: Implement `attemptEnrichment()`, `buildEnrichedTrailers()`, `selectBestFact()`
- Wire `llmClient` to the handler

Fallback Behavior
Falls back to heuristic analysis when:
- No `ANTHROPIC_API_KEY` set (llmClient is null)
- `enrich: false` in config

Test plan

Closes GIT-87
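For reference, a project could tune this behavior from its hook config. A sketch assuming the fields land under a `commitMsg` section of `.git-mem.json` — the nesting key is an assumption; the PR only names the `enrich` and `enrichTimeout` fields and their defaults:

```json
{
  "commitMsg": {
    "enrich": true,
    "enrichTimeout": 3000
  }
}
```

Setting `enrich` to `false` forces the heuristic path; lowering `enrichTimeout` shortens how long a commit can stall waiting on the LLM.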
🤖 Generated with Claude Code