test: comprehensive coverage improvements across 50+ packages#318

Merged
rsnodgrass merged 9 commits into main from ryan/test-coverage-v2 on Mar 27, 2026

Conversation

@rsnodgrass
Contributor

@rsnodgrass rsnodgrass commented Mar 26, 2026

Summary

  • Adds ~13,000 lines of meaningful test coverage across 70+ test files (50+ packages)
  • Coverage improved from 57.2% → 59.3% overall (60%+ when daemon tests pass in CI)
  • Tests focus on real bugs, edge cases, and failure modes — no test theater
  • Major coverage gains in key packages with real behavioral tests

Coverage Improvements

| Package | Before | After | Change |
| --- | --- | --- | --- |
| Total | 57.2% | 59.3% | +2.1% |
| cmd/ox | 34.6% | 38.9% | +4.3% |
| internal/doctor | 59.0% | 72.5% | +13.5% |
| internal/gitserver | 66.9% | 73.2% | +6.3% |
| internal/telemetry | 72.5% | 90.7% | +18.2% |
| internal/config | 73.7% | 76.5% | +2.8% |
| internal/whisper/store | 75.3% | 77.8% | +2.5% |
| internal/session | 77.3% | 78.5% | +1.2% |
| internal/ui | 0.0% | ~80% | new |

Test plan

  • go vet ./cmd/ox/ && go vet ./internal/... — all clean
  • go test ./cmd/ox/ -count=1 — passes
  • go test ./internal/... -count=1 — passes (1 pre-existing flaky agentinstance test)
  • All new test functions pass individually
  • CI pipeline validates on merge

Summary by CodeRabbit

  • Tests
    • Expanded and refreshed test coverage across CLI, agent hooks, session handling, doctor/status checks, daemon, gitserver, telemetry, config, UI rendering, and persistence to improve stability and prevent regressions. Several legacy test suites were removed and replaced with reorganized, more comprehensive test suites.

@coderabbitai
Contributor

coderabbitai Bot commented Mar 26, 2026

Caution

Review failed

The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0e0c43e3-dbc9-4f1e-bcac-33f9fefa259f

📥 Commits

Reviewing files that changed from the base of the PR and between 9489d6c and f5169fa.

📒 Files selected for processing (8)
  • cmd/ox/config_settings_coverage_test.go
  • cmd/ox/config_tui_coverage_test.go
  • cmd/ox/session_helpers_coverage_test.go
  • cmd/ox/session_show_coverage_test.go
  • internal/config/github_sync_test.go
  • internal/daemon/status_display_test.go
  • internal/session/markdown_coverage_test.go
  • internal/whisper/store/store_coverage_test.go

📝 Walkthrough

Walkthrough

Adds many new unit tests across cmd/ox and internal packages, deletes several obsolete test files, and reorganizes test coverage; no production API signatures were changed.

Changes

Cohort / File(s) Summary
Agent tests
cmd/ox/agent_capture_prior_coverage_test.go, cmd/ox/agent_doctor_coverage_test.go, cmd/ox/agent_hook_coverage_test.go, cmd/ox/agent_instances_coverage_test.go, cmd/ox/agent_prime_coverage_test.go, cmd/ox/agent_query_coverage_test.go, cmd/ox/agent_session_coverage_test.go, cmd/ox/agent_session_plan_history_coverage_test.go
Added extensive unit tests for argument parsing, phase resolution/dispatch, guidance discovery, path shortening, time formatting, session parsing, plan extraction, and related helpers.
Doctor tests
cmd/ox/doctor_auth_coverage_test.go, cmd/ox/doctor_display_test.go, cmd/ox/doctor_extra_coverage_test.go, cmd/ox/doctor_git_fix_coverage_test.go, cmd/ox/doctor_header_coverage_test.go, cmd/ox/doctor_hooks_coverage_test.go, cmd/ox/doctor_registry_coverage_test.go, cmd/ox/doctor_session_commit_coverage_test.go, cmd/ox/doctor_session_incomplete_coverage_test.go, cmd/ox/doctor_session_upload_retry_coverage_test.go
Added and expanded tests for check constructors/results, aggregation, git-fix helpers, header rendering, hook validation, registry behavior, session commit messages, incomplete-session reporting, and upload helpers; one display test assertion was relaxed.
Doctor test removals
cmd/ox/doctor_pure_logic_test.go, cmd/ox/doctor_types_test.go
Removed legacy doctor test suites (coverage consolidated into new doctor tests).
Hook integrations
cmd/ox/hooks_agent_coverage_test.go, cmd/ox/hooks_claude_coverage_test.go, cmd/ox/hooks_codepuppy_coverage_test.go, cmd/ox/hooks_gemini_coverage_test.go, cmd/ox/hooks_opencode_coverage_test.go, cmd/ox/hooks_user_marker_coverage_test.go
New tests for hook command generation, settings path helpers, install/uninstall lifecycles, template integrity, and hook listing across integrations.
Session tests
cmd/ox/session_commit_coverage_test.go, cmd/ox/session_helpers_coverage_test.go, cmd/ox/session_lint_all_coverage_test.go, cmd/ox/session_lint_coverage_test.go, cmd/ox/session_list_coverage_test.go, cmd/ox/session_regenerate_coverage_test.go, cmd/ox/session_show_coverage_test.go, cmd/ox/session_status_coverage_test.go, cmd/ox/session_plan_history_coverage_test.go
Added tests for session ID extraction, commit message building, raw JSONL linting/validation, artifact regeneration, display/formatting, metadata handling, status/liveness, plan-history extraction and helpers.
Config & status
cmd/ox/banner_coverage_test.go, cmd/ox/config_get_coverage_test.go, cmd/ox/config_settings_coverage_test.go, cmd/ox/config_tui_coverage_test.go, cmd/ox/status_coverage_test.go, cmd/ox/status_extra_coverage_test.go, cmd/ox/status_render_coverage_test.go
Added tests for banner/time helpers, config settings and TUI formatting, status rendering, visibility/format utilities, and related edge cases.
Utilities & CLI
cmd/ox/distill_coverage_test.go, cmd/ox/friction_coverage_test.go, cmd/ox/heartbeat_coverage_test.go, cmd/ox/import_coverage_test.go, cmd/ox/init_coverage_test.go, cmd/ox/init_helpers_coverage_test.go, cmd/ox/login_coverage_test.go, cmd/ox/prime_session_marker_coverage_test.go, cmd/ox/release_notes_coverage_test.go, cmd/ox/root_coverage_test.go, cmd/ox/sync_coverage_test.go, cmd/ox/teams_coverage_test.go, cmd/ox/url_fallback_coverage_test.go, cmd/ox/view_coverage_test.go
Added tests for distill logic, friction/catalog behavior, heartbeat metadata resolution, import/title/slug detection, init helpers, network error detection, session marker I/O, release-notes parsing/highlighting, path utilities, sync/git checks, teams formatting, and URL fallback/cache helpers.
Internal packages (new tests)
internal/cli/cli_coverage_test.go, internal/config/attribution_test.go, internal/config/github_sync_test.go, internal/daemon/daemon_coverage_test.go, internal/daemon/status_display_test.go, internal/doctor/doctor_coverage_test.go, internal/fileutil/atomic_coverage_test.go, internal/gitserver/gitserver_coverage_test.go, internal/gitserver/coverage_extra_test.go, internal/ledger/ledger_coverage_test.go, internal/session/errors_test.go, internal/session/markdown_coverage_test.go, internal/session/session_coverage_test.go, internal/telemetry/telemetry_coverage_test.go, internal/ui/styles_coverage_test.go, internal/whisper/store/store_coverage_test.go
Added comprehensive test suites for CLI helpers, attribution, GitHub-sync normalization, daemon heartbeat/store/error persistence, doctor checks and runners, atomic JSON writes, gitserver URL/auth utilities and credential flows, ledger GitHub persistence, session errors/markdown/helpers, telemetry queue/sending, UI renderers, and whisper store behaviors.
Internal test removals
internal/daemon/coverage_test.go, internal/gitserver/checkout_extra_test.go, internal/gitserver/remote_extra_test.go
Removed older internal test suites replaced or consolidated by the new internal test files.

Sequence Diagram(s)

(omitted — changes are test additions/removals and do not introduce new multi-component runtime control flow)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested labels

sageox

Suggested reviewers

  • galexy

Poem

🐰 I hopped through tests both wide and deep,
Added checks so bugs can't creep,
From hooks to daemons, sessions bright,
I nibbled edges, set assertions right,
Now CI dreams a steadier sleep.


@rsnodgrass rsnodgrass force-pushed the ryan/test-coverage-v2 branch from 7ad1c6f to ad7b94e on March 26, 2026 21:13
@rsnodgrass rsnodgrass force-pushed the ryan/test-coverage-v2 branch from 99a825c to 93e1d62 on March 26, 2026 22:01
@rsnodgrass rsnodgrass marked this pull request as ready for review on March 27, 2026 01:29
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 13

Note

Due to the large number of review comments, comments of Critical and Major severity were prioritized as inline comments.

🟡 Minor comments (22)
cmd/ox/agent_prime_coverage_test.go-160-162 (1)

160-162: ⚠️ Potential issue | 🟡 Minor

Check setup call errors in test fixtures.

These os.WriteFile / os.MkdirAll calls ignore errors, so setup failures can silently skew assertions. Please fail fast on setup errors.

Suggested fix
-		os.WriteFile(filepath.Join(dir, "notes.txt"), []byte("text"), 0644)
-		os.WriteFile(filepath.Join(dir, "data.json"), []byte("{}"), 0644)
+		if err := os.WriteFile(filepath.Join(dir, "notes.txt"), []byte("text"), 0644); err != nil {
+			t.Fatal(err)
+		}
+		if err := os.WriteFile(filepath.Join(dir, "data.json"), []byte("{}"), 0644); err != nil {
+			t.Fatal(err)
+		}
...
-		os.WriteFile(filepath.Join(dir, "2024-01-01.md"), []byte("jan"), 0644)
-		os.WriteFile(filepath.Join(dir, "2024-03-15.md"), []byte("mar"), 0644)
-		os.WriteFile(filepath.Join(dir, "2024-02-10.md"), []byte("feb"), 0644)
+		if err := os.WriteFile(filepath.Join(dir, "2024-01-01.md"), []byte("jan"), 0644); err != nil {
+			t.Fatal(err)
+		}
+		if err := os.WriteFile(filepath.Join(dir, "2024-03-15.md"), []byte("mar"), 0644); err != nil {
+			t.Fatal(err)
+		}
+		if err := os.WriteFile(filepath.Join(dir, "2024-02-10.md"), []byte("feb"), 0644); err != nil {
+			t.Fatal(err)
+		}
...
-		os.WriteFile(filepath.Join(dir, "note.md"), []byte("x"), 0644)
-		os.MkdirAll(filepath.Join(dir, "subdir.md"), 0755) // directory with .md name
+		if err := os.WriteFile(filepath.Join(dir, "note.md"), []byte("x"), 0644); err != nil {
+			t.Fatal(err)
+		}
+		if err := os.MkdirAll(filepath.Join(dir, "subdir.md"), 0755); err != nil { // directory with .md name
+			t.Fatal(err)
+		}

Also applies to: 171-173, 194-195

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/agent_prime_coverage_test.go` around lines 160 - 162, The test setup
currently ignores errors from os.MkdirAll and os.WriteFile which can hide
fixture failures; update each setup call (the os.MkdirAll and os.WriteFile calls
surrounding the call to discoverMemoryFiles and the other occurrences around
lines 171-173 and 194-195) to capture the returned error and fail the test
immediately (e.g., if err != nil { t.Fatalf("setup failed: %v", err) } or use
require.NoError(t, err)) so any filesystem/setup error surfaces instead of
silently skewing assertions for discoverMemoryFiles.
cmd/ox/prime_session_marker_coverage_test.go-10-54 (1)

10-54: ⚠️ Potential issue | 🟡 Minor

wantSafe field is unused—sanitization is not actually verified.

The test struct defines wantSafe (line 13) but the assertion logic never uses it. The test only checks that markerPath() returns a non-empty string ending in .json, not that path traversal sequences (../, \) are actually sanitized.

To meaningfully test sanitization, verify the resulting path doesn't contain dangerous sequences:

Proposed fix to actually verify sanitization
 import (
+	"os"
+	"strings"
 	"testing"
 	"time"
 )

 func TestMarkerPath_Sanitization(t *testing.T) {
 	t.Parallel()
 	tests := []struct {
 		name      string
 		sessionID string
-		wantSafe  bool // path should not contain traversal sequences
 	}{
 		{
 			name:      "normal session ID",
 			sessionID: "abc123",
-			wantSafe:  true,
 		},
 		// ... other cases
 	}

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
 			t.Parallel()
 			path := markerPath(tt.sessionID)
 			if path == "" {
 				t.Error("expected non-empty path")
 			}
-			// path should end with .json
-			if len(path) < 5 || path[len(path)-5:] != ".json" {
-				t.Errorf("path should end with .json, got %q", path)
+			if !strings.HasSuffix(path, ".json") {
+				t.Errorf("path should end with .json, got %q", path)
+			}
+			// Verify path traversal sequences are sanitized
+			if strings.Contains(path, "..") {
+				t.Errorf("path contains traversal sequence '..': %q", path)
+			}
+			// Verify no raw slashes from input leak into filename portion
+			base := path[strings.LastIndex(path, string(os.PathSeparator))+1:]
+			if strings.ContainsAny(base, `/\`) {
+				t.Errorf("filename contains path separators: %q", base)
 			}
 		})
 	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/prime_session_marker_coverage_test.go` around lines 10 - 54, The test
defines a wantSafe field but never asserts it; update the table-driven test for
markerPath to use tt.wantSafe by adding an assertion that when tt.wantSafe is
true the returned path is sanitized (e.g., does not contain "..", "/" or "\"
sequences or other traversal patterns) and when false allows expected unsafe
patterns; locate the test loop using markerPath and tt.wantSafe and add checks
(using strings.Contains or appropriate filepath helpers) after the existing
.json assertions so the test actually verifies sanitization.
internal/fileutil/atomic_coverage_test.go-18-19 (1)

18-19: ⚠️ Potential issue | 🟡 Minor

Use require.Error to prevent potential panic.

If AtomicWriteJSON unexpectedly returns nil, assert.Error marks the test as failed but continues execution. The subsequent err.Error() call would then panic. Use require.Error to fail fast.

Proposed fix
-	assert.Error(t, err, "expected error when parent dir doesn't exist")
-	assert.Contains(t, err.Error(), "create temp file")
+	require.Error(t, err, "expected error when parent dir doesn't exist")
+	require.Contains(t, err.Error(), "create temp file")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/fileutil/atomic_coverage_test.go` around lines 18 - 19, Change the
test to fail fast when AtomicWriteJSON returns nil by replacing assert.Error(t,
err, ...) with require.Error(t, err, ...) in atomic_coverage_test.go so the test
stops immediately and does not call err.Error() on a nil error; ensure the test
imports and uses the testify/require package and keep the subsequent
assert.Contains check only after require.Error guarantees err is non-nil.
cmd/ox/init_coverage_test.go-15-22 (1)

15-22: ⚠️ Potential issue | 🟡 Minor

Add a real error-path case to TestRelFromRoot.

Line 50 has an error branch, but every current case keeps wantErr false, so the failure path is untested.

Proposed test case addition
 	{
 		name:    "same directory",
 		gitRoot: "/repo",
 		cmdDir:  "/repo",
 		file:    "file.txt",
 		want:    "file.txt",
 	},
+	{
+		name:    "cmd dir outside git root",
+		gitRoot: "/repo",
+		cmdDir:  "/tmp/outside",
+		file:    "file.txt",
+		wantErr: true,
+	},

As per coding guidelines, **/*_test.go: "Use table-driven tests and test error paths in Go code".

Also applies to: 46-57

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/init_coverage_test.go` around lines 15 - 22, Add a table-driven test
case in TestRelFromRoot that exercises the error branch by setting up inputs
that will trigger the failure (e.g., invalid or non-existent gitRoot/cmdDir
combination) and set wantErr to true; update the tests slice (struct fields:
name, gitRoot, cmdDir, file, want, wantErr) with a descriptive case name and
assert that the function returns an error when wantErr is true so the error path
at the branch around line 50 is covered.
cmd/ox/init_coverage_test.go-136-139 (1)

136-139: ⚠️ Potential issue | 🟡 Minor

Use a deterministic missing-path assertion instead of a hardcoded absolute path.

Line 138 uses "/nonexistent/path/file.txt", which is environment-dependent. Prefer a missing file under t.TempDir().

Proposed change
 	t.Run("nonexistent file", func(t *testing.T) {
 		t.Parallel()
-		assert.False(t, fileExists("/nonexistent/path/file.txt"))
+		dir := t.TempDir()
+		missing := filepath.Join(dir, "missing.txt")
+		assert.False(t, fileExists(missing))
 	})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/init_coverage_test.go` around lines 136 - 139, The test uses a
hardcoded absolute path for the nonexistent-file case; change it to create a
deterministic missing path under t.TempDir() instead. In the t.Run("nonexistent
file", ...) subtest, call t.TempDir() to get a temp directory and construct a
non-existent filename (e.g., filepath.Join(tempDir, "file.txt")) and assert
fileExists on that path; update imports to use filepath if not present and keep
the test parallelization and assert.False(fileExists(...)) semantics.
cmd/ox/heartbeat_coverage_test.go-12-14 (1)

12-14: ⚠️ Potential issue | 🟡 Minor

t.Parallel() conflicts with environment variable mutation.

The test is marked parallel at line 13, but the subtests mutate process-global environment variables (SAGEOX_AGENT_ID, AGENT_ENV). This creates a race condition if other parallel tests also read or modify these variables. The comment on line 67 acknowledges env mutation but the parent test is still parallel.

Either remove t.Parallel() or use t.Setenv() which automatically prevents parallel execution and handles cleanup.

Proposed fix using t.Setenv()
 func TestResolveAgentMetadata(t *testing.T) {
-	t.Parallel()
-
 	tests := []struct {

And in the subtest loop:

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			// set env vars for this test (not parallel due to env mutation)
-			prevAgentID := os.Getenv("SAGEOX_AGENT_ID")
-			prevAgentEnv := os.Getenv("AGENT_ENV")
-			t.Cleanup(func() {
-				os.Setenv("SAGEOX_AGENT_ID", prevAgentID)
-				os.Setenv("AGENT_ENV", prevAgentEnv)
-			})
-
 			if tt.envAgentID != "" {
-				os.Setenv("SAGEOX_AGENT_ID", tt.envAgentID)
+				t.Setenv("SAGEOX_AGENT_ID", tt.envAgentID)
 			} else {
 				os.Unsetenv("SAGEOX_AGENT_ID")
 			}
 			if tt.envAgentType != "" {
-				os.Setenv("AGENT_ENV", tt.envAgentType)
+				t.Setenv("AGENT_ENV", tt.envAgentType)
 			} else {
 				os.Unsetenv("AGENT_ENV")
 			}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/heartbeat_coverage_test.go` around lines 12 - 14, The test
TestResolveAgentMetadata is marked t.Parallel() but its subtests mutate
process-global environment variables (SAGEOX_AGENT_ID, AGENT_ENV), causing a
race; remove the t.Parallel() call at the top of TestResolveAgentMetadata OR
switch each subtest to use t.Setenv("SAGEOX_AGENT_ID", ...) and
t.Setenv("AGENT_ENV", ...) inside the subtest closure (and ensure you do not
call t.Parallel() on the parent) so environment changes are isolated and cleaned
up; locate the TestResolveAgentMetadata function and the subtest loop to apply
this change.
cmd/ox/agent_capture_prior_coverage_test.go-60-129 (1)

60-129: ⚠️ Potential issue | 🟡 Minor

Refactor formatter tests to table-driven form and add an explicit error-path case.

formatCapturePriorOutput returns (data, err), but this suite currently exercises success-only paths with ad-hoc subtests. Please convert this block to table-driven cases and include at least one failing/edge behavior case (for example, nil input behavior) to lock expected error semantics.

♻️ Suggested test shape
 func TestFormatCapturePriorOutput(t *testing.T) {
 	t.Parallel()
-
-	t.Run("basic result", func(t *testing.T) {
-		...
-	})
-
-	t.Run("minimal result", func(t *testing.T) {
-		...
-	})
+	tests := []struct {
+		name    string
+		input   *session.CaptureResult
+		wantErr bool
+		check   func(t *testing.T, out capturePriorOutput)
+	}{
+		{
+			name: "basic result",
+			input: &session.CaptureResult{
+				AgentID:         "agent-001",
+				Path:            "/tmp/sessions/session-001",
+				SessionName:     "session-001",
+				EntryCount:      5,
+				SecretsRedacted: 2,
+				Title:           "Planning discussion",
+			},
+			check: func(t *testing.T, out capturePriorOutput) {
+				if !out.Success { t.Error("expected Success = true") }
+				if out.Type != "session_capture_prior" { t.Errorf("Type = %q", out.Type) }
+			},
+		},
+		{
+			name: "minimal result",
+			input: &session.CaptureResult{
+				AgentID:    "agent-002",
+				Path:       "/tmp/s",
+				EntryCount: 0,
+			},
+			check: func(t *testing.T, out capturePriorOutput) {
+				if out.SessionName != "" { t.Errorf("SessionName = %q, want empty", out.SessionName) }
+			},
+		},
+		{
+			name:    "nil input behavior",
+			input:   nil,
+			wantErr: true, // or false, depending on intended contract
+		},
+	}
+
+	for _, tt := range tests {
+		tt := tt
+		t.Run(tt.name, func(t *testing.T) {
+			t.Parallel()
+			data, err := formatCapturePriorOutput(tt.input)
+			if (err != nil) != tt.wantErr {
+				t.Fatalf("err = %v, wantErr=%v", err, tt.wantErr)
+			}
+			if tt.wantErr {
+				return
+			}
+			var output capturePriorOutput
+			if err := json.Unmarshal(data, &output); err != nil {
+				t.Fatalf("failed to parse output: %v", err)
+			}
+			if tt.check != nil {
+				tt.check(t, output)
+			}
+		})
+	}
 }

As per coding guidelines, **/*_test.go: Use table-driven tests and test error paths in Go code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/agent_capture_prior_coverage_test.go` around lines 60 - 129,
TestFormatCapturePriorOutput currently uses ad-hoc subtests and only success
paths; refactor it into a table-driven test (keep the test name and t.Parallel
usage) by defining a slice of cases with fields like name, input
*session.CaptureResult, wantErr bool, and expected capturePriorOutput values,
iterate cases with t.Run, call formatCapturePriorOutput for each, assert error
presence matches wantErr, and when no error unmarshal into capturePriorOutput
and validate fields (Type, Success, AgentID, EntryCount, SecretsRedacted, Title,
SessionName). Add at least one explicit error-path case (e.g., input nil
expecting wantErr=true) to lock the error semantics; reference the existing
symbols TestFormatCapturePriorOutput, formatCapturePriorOutput,
session.CaptureResult, and capturePriorOutput to locate and update the test.
cmd/ox/release_notes_coverage_test.go-98-116 (1)

98-116: ⚠️ Potential issue | 🟡 Minor

Empty line test case makes no assertion.

The "empty line" test case at lines 98-101 sets contains: "", but the assertion at lines 113-115 only runs when tt.contains != "". This means the test executes but verifies nothing for empty input.

Consider asserting that empty input returns an empty string:

Proposed fix
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
 			t.Parallel()
 			got := renderLine(tt.line)
-			if tt.contains != "" {
-				assert.Contains(t, got, tt.contains)
+			if tt.line == "" {
+				assert.Empty(t, got)
+			} else {
+				assert.Contains(t, got, tt.contains)
 			}
 		})
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/release_notes_coverage_test.go` around lines 98 - 116, The "empty
line" case currently sets tt.contains == "" so the existing conditional
assertion is skipped and nothing is verified; update the test loop in the t.Run
block (the loop that calls renderLine) to assert an explicit empty result for
that case by branching on tt.contains: if tt.contains != "" keep
assert.Contains(t, got, tt.contains) else assert.Equal(t, "", got). This uses
the existing renderLine call and tt.contains sentinel to ensure the empty-line
test actually verifies an empty string.
cmd/ox/release_notes_coverage_test.go-176-194 (1)

176-194: ⚠️ Potential issue | 🟡 Minor

Empty string test case makes no assertions.

The "empty string" test case (lines 176-179) sets both containsBoldContent: "" and leaves notContains unset. Since both assertion blocks check for non-empty values before running (lines 186-193), this test case executes without verifying any behavior.

Proposed fix
 		{
 			name:                "empty string",
 			input:               "",
-			containsBoldContent: "",
+			containsBoldContent: "", // handled specially below
 		},

And update the assertion logic:

 		t.Run(tt.name, func(t *testing.T) {
 			t.Parallel()
 			got := highlightBold(tt.input)
-			if tt.containsBoldContent != "" {
+			if tt.input == "" {
+				assert.Empty(t, got)
+			} else if tt.containsBoldContent != "" {
 				assert.Contains(t, got, tt.containsBoldContent)
 			}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/release_notes_coverage_test.go` around lines 176 - 194, The "empty
string" test case does not perform any assertions because both
containsBoldContent and notContains are empty; update the test data for the
"empty string" case so it actually asserts expected behavior (for example set
notContains: "**" to ensure there are no bold markers after calling
highlightBold and checking via stripANSI), or alternatively set
containsBoldContent to an expected string and adjust assertions accordingly;
locate the test table entry for the case named "empty string" and the test body
invoking highlightBold and stripANSI to implement this change.
cmd/ox/session_lint_all_coverage_test.go-72-73 (1)

72-73: ⚠️ Potential issue | 🟡 Minor

Avoid parallelizing tests that write process-wide stdout.

printLintResults writes to stdout; running these tests in parallel can interleave output with other tests and create suite-level flakiness/noise.

Also applies to: 94-95

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/session_lint_all_coverage_test.go` around lines 72 - 73, The tests
call printLintResults which writes to process-wide stdout, so running them with
t.Parallel() can interleave output and cause flakiness; remove t.Parallel() from
TestPrintLintResults_JSONOutput and the other related test that writes stdout
(the test at lines 94-95) so both run serially, ensuring printLintResults output
is not interleaved with other tests.
cmd/ox/session_lint_all_coverage_test.go-23-26 (1)

23-26: ⚠️ Potential issue | 🟡 Minor

Prefer a temp-derived missing directory over a hardcoded absolute path.

Using "/nonexistent/sessions/dir" is less portable. Construct a guaranteed-missing path from t.TempDir() instead.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/session_lint_all_coverage_test.go` around lines 23 - 26, Replace the
hardcoded "/nonexistent/sessions/dir" with a path derived from t.TempDir() to
ensure portability: call t.TempDir(), build a child path that you do not create
(e.g. filepath.Join(t.TempDir(), "missing-sessions")), and pass that to
lintAllSessions; update the test that calls lintAllSessions to use this
temp-derived missing directory so the assertion (err != nil) remains valid.
Reference: lintAllSessions and the test invoking it.
cmd/ox/session_lint_coverage_test.go-44-44 (1)

44-44: ⚠️ Potential issue | 🟡 Minor

Use a temp-based missing path for portability.

"/nonexistent/path/raw.jsonl" is OS/environment-specific. Build a definitely-missing path from t.TempDir() and don’t create it.

Proposed fix
-	result := lintRawJSONLFile("/nonexistent/path/raw.jsonl", "missing")
+	missingPath := filepath.Join(t.TempDir(), "does-not-exist", "raw.jsonl")
+	result := lintRawJSONLFile(missingPath, "missing")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/session_lint_coverage_test.go` at line 44, Replace the hard-coded
OS-specific missing file path with a path built from the test's temporary
directory: in the test where you call
lintRawJSONLFile("/nonexistent/path/raw.jsonl", "missing"), use t.TempDir() to
construct a path (e.g., filepath.Join(t.TempDir(), "raw.jsonl")) and ensure you
do not create that file before calling lintRawJSONLFile; update any variable
names accordingly so the call to lintRawJSONLFile uses the temp-based,
guaranteed-missing path.
cmd/ox/distill_coverage_test.go-157-157 (1)

157-157: ⚠️ Potential issue | 🟡 Minor

Don’t ignore filesystem setup errors in tests.

At Line 157 and other setup writes/creates in this range, ignored errors can produce false positives and hard-to-diagnose flakes if fixture creation fails.

🛠️ Suggested fix pattern
-os.WriteFile(filepath.Join(dailyDir, "2026-03-10.txt"), []byte("text"), 0o644)
+if err := os.WriteFile(filepath.Join(dailyDir, "2026-03-10.txt"), []byte("text"), 0o644); err != nil {
+	t.Fatal(err)
+}
-os.MkdirAll(filepath.Join(dailyDir, "2026-03-10"), 0o755)
+if err := os.MkdirAll(filepath.Join(dailyDir, "2026-03-10"), 0o755); err != nil {
+	t.Fatal(err)
+}

Also applies to: 174-174, 176-176, 210-212, 234-236, 253-255, 273-274, 296-298, 319-320

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/distill_coverage_test.go` at line 157, Several os.WriteFile (and
similar fixture setup) calls are ignoring returned errors which can mask test
setup failures; update each setup call (e.g., the os.WriteFile calls that create
files like "2026-03-10.txt" and any os.MkdirAll or os.Create usages) to check
and handle the error by failing the test immediately (use t.Fatalf or
require.NoError(t, err)) so fixture creation errors surface; apply this change
consistently to all occurrences mentioned (the WriteFile/create calls around the
test helper functions) to avoid false positives.
cmd/ox/banner_coverage_test.go-15-39 (1)

15-39: ⚠️ Potential issue | 🟡 Minor

Potential flakiness: time.Now() captured at table definition, not at test execution.

The test table computes tt.when values using time.Now() when the table is built (lines 20-30), but the parallel subtests execute later. This time drift causes the same flakiness seen in agent_instances_coverage_test.go CI failures.

Capture the reference time inside each subtest to ensure consistency:

Proposed fix
 func TestTimeSinceError(t *testing.T) {
 	t.Parallel()

 	tests := []struct {
 		name   string
-		when   time.Time
+		offset time.Duration
 		want   string
 	}{
-		{"just now (sub-minute)", time.Now().Add(-10 * time.Second), "just now"},
-		{"exactly now", time.Now(), "just now"},
-		{"1 minute ago", time.Now().Add(-90 * time.Second), "1 minute"},
-		{"2 minutes ago", time.Now().Add(-2 * time.Minute), "2 minutes"},
-		{"59 minutes ago", time.Now().Add(-59 * time.Minute), "59 minutes"},
-		{"1 hour ago", time.Now().Add(-90 * time.Minute), "1 hour"},
-		{"2 hours ago", time.Now().Add(-2 * time.Hour), "2 hours"},
-		{"23 hours ago", time.Now().Add(-23 * time.Hour), "23 hours"},
-		{"1 day ago", time.Now().Add(-25 * time.Hour), "1 day"},
-		{"2 days ago", time.Now().Add(-48 * time.Hour), "2 days"},
-		{"7 days ago", time.Now().Add(-7 * 24 * time.Hour), "7 days"},
+		{"just now (sub-minute)", -10 * time.Second, "just now"},
+		{"exactly now", 0, "just now"},
+		{"1 minute ago", -90 * time.Second, "1 minute"},
+		{"2 minutes ago", -2 * time.Minute, "2 minutes"},
+		{"59 minutes ago", -59 * time.Minute, "59 minutes"},
+		{"1 hour ago", -90 * time.Minute, "1 hour"},
+		{"2 hours ago", -2 * time.Hour, "2 hours"},
+		{"23 hours ago", -23 * time.Hour, "23 hours"},
+		{"1 day ago", -25 * time.Hour, "1 day"},
+		{"2 days ago", -48 * time.Hour, "2 days"},
+		{"7 days ago", -7 * 24 * time.Hour, "7 days"},
 	}

 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
 			t.Parallel()
-			got := timeSinceError(tt.when)
+			got := timeSinceError(time.Now().Add(tt.offset))
 			assert.Equal(t, tt.want, got)
 		})
 	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/banner_coverage_test.go` around lines 15 - 39, The table currently
builds concrete time.Time values using time.Now() at declaration, causing
flakiness when subtests run later; instead store offsets (or compute the
reference time) inside each subtest and call timeSinceError with a time computed
from that fresh reference. Update the test table (tests) to use a
duration/offset field (or leave the descriptive strings) and inside the t.Run
closure capture now := time.Now() (before/when you call t.Parallel) then compute
when := now.Add(<offset>) and pass that to timeSinceError so each subtest uses a
consistent, freshly-captured reference time.
internal/ledger/ledger_coverage_test.go-279-279 (1)

279-279: ⚠️ Potential issue | 🟡 Minor

Use OS-agnostic path assertion here.

"data/github/" is slash-specific and can be flaky on Windows path separators.

Proposed fix
-	assert.Contains(t, paths[0], "data/github/")
+	assert.Contains(t, filepath.ToSlash(paths[0]), "data/github/")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/ledger/ledger_coverage_test.go` at line 279, The test asserts a
hard-coded forward-slash path ("data/github/") which is not OS-agnostic; update
the assertion that uses paths[0] (the variable under test) to compare against an
OS-correct path by either converting the actual path to a slash form with
filepath.ToSlash(paths[0]) before checking for "data/github/" or by using
filepath.Join("data", "github") as the expected substring and asserting that
paths[0] contains that; change the assert.Contains call accordingly so it works
on Windows and Unix.
internal/ledger/ledger_coverage_test.go-14-68 (1)

14-68: ⚠️ Potential issue | 🟡 Minor

Consolidate repetitive test cases into table-driven subtests.

Lines 14–68 contain three near-identical tests for ResetGitHubTypeSyncState (RemovesExistingFile, NonexistentFile, Issue). Similarly, lines 175–274 group five tests for ListGitHubDataFiles and lines 276–294 group three tests for ComputeGitHubDataPaths. Convert these to table-driven tests := []struct{...} + t.Run(...) patterns to reduce duplication and improve maintainability.

Per coding guidelines, **/*_test.go should use table-driven tests.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/ledger/ledger_coverage_test.go` around lines 14 - 68, These three
near-duplicate tests for ResetGitHubTypeSyncState should be converted into a
single table-driven test: create a slice of test cases (name, setup function or
initial state, target type string, expected post-reset assertions) and iterate
with t.Run for each case, reusing WriteGitHubTypeSyncState to create initial
files and ReadGitHubTypeSyncState to assert zero-value state after calling
ResetGitHubTypeSyncState; apply the same pattern to the groups of tests that
exercise ListGitHubDataFiles and ComputeGitHubDataPaths by turning their
repeated scenarios into table-driven subtests (tests := []struct{ name string;
input ...; want ... } and t.Run) to replace the multiple standalone test
functions and remove duplication while keeping the same assertions.
internal/ui/styles_coverage_test.go-63-77 (1)

63-77: ⚠️ Potential issue | 🟡 Minor

Assert the documented empty-input path in TestRenderVisibility.

Line 69 documents empty-input behavior, but "" is not part of the test table, so that path is currently unverified.

✅ Minimal test-gap fix
 	tests := []struct {
-		input string
+		input     string
+		wantEmpty bool
 	}{
-		{"public"},
-		{"private"},
-		{"unknown"},
+		{"public", false},
+		{"private", false},
+		{"unknown", false},
+		{"", true},
 		// empty input returns empty — expected behavior
 	}

 	for _, tt := range tests {
 		t.Run(tt.input, func(t *testing.T) {
 			t.Parallel()
 			got := RenderVisibility(tt.input)
-			assert.NotEmpty(t, got)
+			if tt.wantEmpty {
+				assert.Empty(t, got)
+			} else {
+				assert.NotEmpty(t, got)
+			}
 		})
 	}
As per coding guidelines: `**/*_test.go`: Use table-driven tests and test error paths in Go code.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/ui/styles_coverage_test.go` around lines 63 - 77, The table-driven
test for TestRenderVisibility omits the documented empty-input case — add an
entry with input "" to the tests slice and update the per-case assertions: when
tt.input == "" assert that RenderVisibility(tt.input) returns an empty string
(e.g., assert.Empty) and for other inputs assert.NotEmpty; reference the tests
slice and the call to RenderVisibility in the test body to implement this
conditional expectation.
cmd/ox/hooks_user_marker_coverage_test.go-72-92 (1)

72-92: ⚠️ Potential issue | 🟡 Minor

Potential test flakiness due to timestamp boundary.

The timestamp format "2006-01-02-15" includes the hour. If the test runs exactly at an hour boundary (e.g., 10:59:59.999), backupUserAgentFile and time.Now() in the assertion could produce different hours, causing a spurious failure.

🔧 Proposed fix to reduce flakiness
 	// verify backup was created with timestamp format
-	timestamp := time.Now().Format("2006-01-02-15")
-	expectedBackup := filePath + ".bak-" + timestamp
-
-	backupContent, err := os.ReadFile(expectedBackup)
-	if err != nil {
-		t.Fatalf("expected backup file at %s, got error: %v", expectedBackup, err)
+	entries, err := os.ReadDir(tmpDir)
+	if err != nil {
+		t.Fatal(err)
+	}
+	var backupPath string
+	for _, e := range entries {
+		if strings.HasPrefix(e.Name(), "CLAUDE.md.bak-") {
+			backupPath = filepath.Join(tmpDir, e.Name())
+			break
+		}
+	}
+	if backupPath == "" {
+		t.Fatal("expected backup file to be created")
+	}
+	backupContent, err := os.ReadFile(backupPath)
+	if err != nil {
+		t.Fatalf("read backup: %v", err)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/hooks_user_marker_coverage_test.go` around lines 72 - 92, The test is
flaky because it uses time.Now() after calling backupUserAgentFile to build the
expected backup filename (timestamp includes hour) so a boundary can change the
hour; fix by locating the actual backup created instead of relying on current
time—after calling backupUserAgentFile(filePath, content) use
filepath.Glob(filePath+".bak-*") (or capture now := time.Now() before calling
backupUserAgentFile) to find the single backup file, assert exactly one match,
read that file and compare its content to content; reference
TestBackupUserAgentFile and backupUserAgentFile when updating the test.
internal/config/github_sync_test.go-55-101 (1)

55-101: ⚠️ Potential issue | 🟡 Minor

Consolidate resolver env tests into table-driven cases and add invalid-value error paths for PRs/Issues.

These cases are repetitive and currently miss explicit invalid env-value fallback tests for OX_GITHUB_SYNC_PRS and OX_GITHUB_SYNC_ISSUES.

♻️ Suggested table-driven structure
+func TestResolveGitHubSync_FeatureEnv(t *testing.T) {
+	tests := []struct {
+		name   string
+		envKey string
+		envVal string
+		call   func(string) string
+		want   string
+	}{
+		{"prs disabled override", "OX_GITHUB_SYNC_PRS", "disabled", ResolveGitHubSyncPRs, GitHubSyncDisabled},
+		{"prs invalid fallback", "OX_GITHUB_SYNC_PRS", "invalid", ResolveGitHubSyncPRs, GitHubSyncEnabled},
+		{"issues disabled override", "OX_GITHUB_SYNC_ISSUES", "disabled", ResolveGitHubSyncIssues, GitHubSyncDisabled},
+		{"issues invalid fallback", "OX_GITHUB_SYNC_ISSUES", "invalid", ResolveGitHubSyncIssues, GitHubSyncEnabled},
+	}
+
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			t.Setenv(tt.envKey, tt.envVal)
+			assert.Equal(t, tt.want, tt.call(""))
+		})
+	}
+}

As per coding guidelines: **/*_test.go: Use table-driven tests and test error paths in Go code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/config/github_sync_test.go` around lines 55 - 101, Consolidate the
repetitive TestResolveGitHubSync* tests into table-driven tests that iterate
cases for ResolveGitHubSync, ResolveGitHubSyncPRs, and ResolveGitHubSyncIssues;
for each function include cases for default behavior, explicit "disabled" env
overrides (OX_GITHUB_SYNC, OX_GITHUB_SYNC_PRS, OX_GITHUB_SYNC_ISSUES) and also
add cases where the env var is set to an invalid value (e.g. "invalid")
asserting the expected fallback (normalized to enabled for ResolveGitHubSync and
add equivalent assertions for ResolveGitHubSyncPRs and ResolveGitHubSyncIssues);
implement the table-driven test(s) in internal/config/github_sync_test.go using
subtests (t.Run) and t.Setenv within each case to ensure isolation and remove
the duplicated individual tests.
internal/daemon/daemon_coverage_test.go-20-33 (1)

20-33: ⚠️ Potential issue | 🟡 Minor

not_set currently means “set to empty string.”

Line 31 runs for every row, so the first case never exercises a truly unset SAGEOX_DAEMON. That misses the real default-path behavior if unset and empty are handled differently.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/daemon/daemon_coverage_test.go` around lines 20 - 33, Test case
"not_set" currently uses t.Setenv("SAGEOX_DAEMON", "") which sets an empty value
instead of removing the env var; update the table/loop so the "not_set" case
actually unsets the environment before calling IsDaemonDisabled(): capture the
original value (os.LookupEnv), and in the loop if tt.name == "not_set" call
os.Unsetenv("SAGEOX_DAEMON") (or restore to absent after the test), otherwise
use t.Setenv("SAGEOX_DAEMON", tt.envVal); ensure you restore the original
environment after each subtest so other cases remain isolated — this targets the
test harness around IsDaemonDisabled() and the use of t.Setenv in the subtest
loop.
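The distinction driving this finding is visible in the standard library: `os.LookupEnv` reports an empty-but-set variable as present, which is why `t.Setenv("SAGEOX_DAEMON", "")` never reaches the truly-unset path. A minimal sketch — the `isDaemonDisabled` policy below is an assumption for illustration, not the package's real logic:

```go
package main

import (
	"fmt"
	"os"
)

// isDaemonDisabled is a hypothetical stand-in for IsDaemonDisabled;
// the "disabled" values here are assumed, not the real policy.
func isDaemonDisabled() bool {
	v, ok := os.LookupEnv("SAGEOX_DAEMON")
	if !ok {
		return false // truly unset: take the default path
	}
	return v == "0" || v == "false" || v == "off"
}

// envPresent reports whether SAGEOX_DAEMON exists at all, even empty.
func envPresent() bool {
	_, ok := os.LookupEnv("SAGEOX_DAEMON")
	return ok
}

func setEmpty() { os.Setenv("SAGEOX_DAEMON", "") } // what t.Setenv(k, "") does
func unset()    { os.Unsetenv("SAGEOX_DAEMON") }   // what the "not_set" case needs

func main() {
	setEmpty()
	fmt.Println("after Setenv(\"\"): present =", envPresent()) // set-but-empty: present

	unset()
	fmt.Println("after Unsetenv:   present =", envPresent()) // truly unset: absent
}
```

If the production code ever switches from `os.Getenv` to `os.LookupEnv`, only a test that actually unsets the variable would catch a regression in the default path.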
cmd/ox/agent_session_plan_history_coverage_test.go-278-330 (1)

278-330: ⚠️ Potential issue | 🟡 Minor

Check os.WriteFile errors to avoid false-positive test behavior.

Line 278, Line 310, and Line 330 ignore write errors; if a write fails, downstream failures become misleading.

Patch suggestion
-		os.WriteFile(f, []byte(content), 0644)
+		if err := os.WriteFile(f, []byte(content), 0644); err != nil {
+			t.Fatalf("write test file: %v", err)
+		}
@@
-		os.WriteFile(f, []byte(content), 0644)
+		if err := os.WriteFile(f, []byte(content), 0644); err != nil {
+			t.Fatalf("write test file: %v", err)
+		}
@@
-		os.WriteFile(f, []byte(content), 0644)
+		if err := os.WriteFile(f, []byte(content), 0644); err != nil {
+			t.Fatalf("write test file: %v", err)
+		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/agent_session_plan_history_coverage_test.go` around lines 278 - 330,
Multiple os.WriteFile calls in the test cases are unchecked which can produce
misleading failures; update each os.WriteFile call in the tests (the ones inside
the t.Run blocks that call readPlanHistoryEntries) to capture and check its
returned error and call t.Fatalf (or t.Fatalf with the error) when it is non-nil
so the test fails immediately on write failure rather than producing downstream
false positives when readPlanHistoryEntries is exercised.
cmd/ox/agent_session_plan_history_coverage_test.go-341-344 (1)

341-344: ⚠️ Potential issue | 🟡 Minor

Use a temp-dir missing path instead of hardcoded /nonexistent/....

Hardcoding an absolute path is less portable and can be flaky across environments. Build a guaranteed-missing path under t.TempDir() instead.

Patch suggestion
-		_, _, err := readPlanHistoryEntries("/nonexistent/file.jsonl")
+		missing := filepath.Join(t.TempDir(), "does-not-exist.jsonl")
+		_, _, err := readPlanHistoryEntries(missing)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/agent_session_plan_history_coverage_test.go` around lines 341 - 344,
Test uses a hardcoded absolute path "/nonexistent/file.jsonl" which is not
portable; modify the t.Run("nonexistent file") subtest to build a
guaranteed-missing path under t.TempDir() (use t.TempDir() and filepath.Join to
compose e.g. "nonexistent.jsonl") and pass that path to readPlanHistoryEntries
so the test reliably fails to find the file across environments (refer to the
test block and the call to readPlanHistoryEntries).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 45c144b0-8dd7-4604-9df9-21ea418c300a

📥 Commits

Reviewing files that changed from the base of the PR and between ad8061d and e2f7ff0.

📒 Files selected for processing (74)
  • cmd/ox/agent_capture_prior_coverage_test.go
  • cmd/ox/agent_doctor_coverage_test.go
  • cmd/ox/agent_hook_coverage_test.go
  • cmd/ox/agent_instances_coverage_test.go
  • cmd/ox/agent_prime_coverage_test.go
  • cmd/ox/agent_query_coverage_test.go
  • cmd/ox/agent_session_coverage_test.go
  • cmd/ox/agent_session_plan_history_coverage_test.go
  • cmd/ox/banner_coverage_test.go
  • cmd/ox/config_get_coverage_test.go
  • cmd/ox/config_settings_coverage_test.go
  • cmd/ox/config_tui_coverage_test.go
  • cmd/ox/distill_coverage_test.go
  • cmd/ox/doctor_auth_coverage_test.go
  • cmd/ox/doctor_display_test.go
  • cmd/ox/doctor_extra_coverage_test.go
  • cmd/ox/doctor_git_fix_coverage_test.go
  • cmd/ox/doctor_header_coverage_test.go
  • cmd/ox/doctor_hooks_coverage_test.go
  • cmd/ox/doctor_pure_logic_test.go
  • cmd/ox/doctor_registry_coverage_test.go
  • cmd/ox/doctor_session_commit_coverage_test.go
  • cmd/ox/doctor_session_incomplete_coverage_test.go
  • cmd/ox/doctor_session_upload_retry_coverage_test.go
  • cmd/ox/doctor_types_test.go
  • cmd/ox/friction_coverage_test.go
  • cmd/ox/heartbeat_coverage_test.go
  • cmd/ox/hooks_agent_coverage_test.go
  • cmd/ox/hooks_claude_coverage_test.go
  • cmd/ox/hooks_codepuppy_coverage_test.go
  • cmd/ox/hooks_gemini_coverage_test.go
  • cmd/ox/hooks_opencode_coverage_test.go
  • cmd/ox/hooks_user_marker_coverage_test.go
  • cmd/ox/import_coverage_test.go
  • cmd/ox/init_coverage_test.go
  • cmd/ox/init_helpers_coverage_test.go
  • cmd/ox/login_coverage_test.go
  • cmd/ox/prime_session_marker_coverage_test.go
  • cmd/ox/release_notes_coverage_test.go
  • cmd/ox/root_coverage_test.go
  • cmd/ox/session_commit_coverage_test.go
  • cmd/ox/session_helpers_coverage_test.go
  • cmd/ox/session_lint_all_coverage_test.go
  • cmd/ox/session_lint_coverage_test.go
  • cmd/ox/session_list_coverage_test.go
  • cmd/ox/session_regenerate_coverage_test.go
  • cmd/ox/session_show_coverage_test.go
  • cmd/ox/session_status_coverage_test.go
  • cmd/ox/status_coverage_test.go
  • cmd/ox/status_extra_coverage_test.go
  • cmd/ox/status_render_coverage_test.go
  • cmd/ox/sync_coverage_test.go
  • cmd/ox/teams_coverage_test.go
  • cmd/ox/url_fallback_coverage_test.go
  • cmd/ox/view_coverage_test.go
  • internal/cli/cli_coverage_test.go
  • internal/config/attribution_test.go
  • internal/config/github_sync_test.go
  • internal/daemon/coverage_test.go
  • internal/daemon/daemon_coverage_test.go
  • internal/daemon/status_display_test.go
  • internal/doctor/doctor_coverage_test.go
  • internal/fileutil/atomic_coverage_test.go
  • internal/gitserver/checkout_extra_test.go
  • internal/gitserver/coverage_extra_test.go
  • internal/gitserver/gitserver_coverage_test.go
  • internal/gitserver/remote_extra_test.go
  • internal/ledger/ledger_coverage_test.go
  • internal/session/errors_test.go
  • internal/session/markdown_coverage_test.go
  • internal/session/session_coverage_test.go
  • internal/telemetry/telemetry_coverage_test.go
  • internal/ui/styles_coverage_test.go
  • internal/whisper/store/store_coverage_test.go
💤 Files with no reviewable changes (5)
  • internal/gitserver/remote_extra_test.go
  • internal/gitserver/checkout_extra_test.go
  • cmd/ox/doctor_types_test.go
  • internal/daemon/coverage_test.go
  • cmd/ox/doctor_pure_logic_test.go

Comment thread cmd/ox/config_settings_coverage_test.go Outdated
Comment on lines +71 to +119
func TestSetConfigValue_UnknownSetting(t *testing.T) {
t.Parallel()

err := SetConfigValue("totally_fake", "value", ConfigLevelUser, "")
assert.Error(t, err)
assert.Contains(t, err.Error(), "unknown setting")
}

func TestSetConfigValue_InvalidValue(t *testing.T) {
t.Parallel()

err := SetConfigValue("session_recording", "invalid_mode", ConfigLevelUser, "")
assert.Error(t, err)
assert.Contains(t, err.Error(), "invalid value")
assert.Contains(t, err.Error(), "valid values")
}

func TestSetConfigValue_UnsupportedLevel(t *testing.T) {
t.Parallel()

// telemetry only supports user level
err := SetConfigValue("telemetry", "on", ConfigLevelRepo, "")
assert.Error(t, err)
assert.Contains(t, err.Error(), "cannot be set at repo level")
}

func TestSetConfigValue_DefaultLevel(t *testing.T) {
t.Parallel()

err := SetConfigValue("session_recording", "auto", ConfigLevelDefault, "")
assert.Error(t, err)
assert.Contains(t, err.Error(), "cannot be set at default level")
}

func TestSetConfigValue_RepoLevelWithoutProject(t *testing.T) {
t.Parallel()

err := SetConfigValue("session_recording", "auto", ConfigLevelRepo, "")
assert.Error(t, err)
assert.Contains(t, err.Error(), "not in a SageOx project")
}

func TestSetConfigValue_TeamLevelWithoutProject(t *testing.T) {
t.Parallel()

err := SetConfigValue("session_recording", "auto", ConfigLevelTeam, "")
assert.Error(t, err)
assert.Contains(t, err.Error(), "not in a SageOx project")
}

⚠️ Potential issue | 🟠 Major

Use an explicit temp root for the repo/team negative cases, and fold this block into one table.

Passing "" leaves root resolution implicit, so the repo/team cases depend on ambient process state instead of a known non-project directory. This block is also repetitive enough to be a single table-driven test with per-case roots and expected substrings.

Minimal fix for the non-project cases
 func TestSetConfigValue_RepoLevelWithoutProject(t *testing.T) {
 	t.Parallel()
 
-	err := SetConfigValue("session_recording", "auto", ConfigLevelRepo, "")
-	assert.Error(t, err)
+	root := t.TempDir()
+	err := SetConfigValue("session_recording", "auto", ConfigLevelRepo, root)
+	require.Error(t, err)
 	assert.Contains(t, err.Error(), "not in a SageOx project")
 }
 
 func TestSetConfigValue_TeamLevelWithoutProject(t *testing.T) {
 	t.Parallel()
 
-	err := SetConfigValue("session_recording", "auto", ConfigLevelTeam, "")
-	assert.Error(t, err)
+	root := t.TempDir()
+	err := SetConfigValue("session_recording", "auto", ConfigLevelTeam, root)
+	require.Error(t, err)
 	assert.Contains(t, err.Error(), "not in a SageOx project")
 }

As per coding guidelines "Use table-driven tests and test error paths in Go code."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/config_settings_coverage_test.go` around lines 71 - 119, Replace the
implicit-root negative repo/team tests with a single table-driven test that
calls SetConfigValue for both repo and team levels using an explicit temporary
non-project directory (use t.TempDir()) as the root for each case; for each
table entry include the level (ConfigLevelRepo or ConfigLevelTeam), the root
path (the t.TempDir() value), and the expected substring ("not in a SageOx
project") and assert error contains that substring. Locate and update the tests
TestSetConfigValue_RepoLevelWithoutProject and
TestSetConfigValue_TeamLevelWithoutProject to be folded into this new
table-driven test that invokes SetConfigValue("session_recording", "auto",
level, root).

Comment thread cmd/ox/config_settings_coverage_test.go
Comment on lines +200 to +243
func TestResolveConfigValue_KnownSettings(t *testing.T) {
t.Parallel()

// resolve with empty project root — should return defaults
for _, setting := range AllSettings {
t.Run(setting.Key, func(t *testing.T) {
t.Parallel()
cv, err := ResolveConfigValue(setting.Key, "")
require.NoError(t, err)
require.NotNil(t, cv)
assert.Equal(t, setting.Key, cv.Key)
assert.Equal(t, setting.Default, cv.Default)
// with no project root, only user config is loaded
assert.Empty(t, cv.RepoVal)
assert.Empty(t, cv.TeamVal)
})
}
}

func TestResolveConfigValue_WithFakeProjectRoot(t *testing.T) {
t.Parallel()

dir := t.TempDir()
cv, err := ResolveConfigValue("session_recording", dir)
require.NoError(t, err)
require.NotNil(t, cv)
assert.Equal(t, "session_recording", cv.Key)
assert.Equal(t, "auto", cv.Default)
}

func TestResolveConfigValue_AllKnownKeys(t *testing.T) {
t.Parallel()

keys := []string{"session_recording", "telemetry", "tips", "context_git.auto_commit", "context_git.auto_push", "view_format", "agent_worker"}
for _, key := range keys {
t.Run(key, func(t *testing.T) {
t.Parallel()
cv, err := ResolveConfigValue(key, t.TempDir())
require.NoError(t, err)
require.NotNil(t, cv)
assert.Equal(t, key, cv.Key)
})
}
}

⚠️ Potential issue | 🟠 Major

These ResolveConfigValue tests only validate metadata, not resolution.

cv.Key and cv.Default show that the setting exists, but they do not prove the effective value actually fell back to the default. A precedence bug would still pass here because cv.Value and cv.Source are never asserted.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cmd/ox/config_settings_coverage_test.go` around lines 200 - 243, The tests in
TestResolveConfigValue_KnownSettings, TestResolveConfigValue_WithFakeProjectRoot
and TestResolveConfigValue_AllKnownKeys only check metadata (cv.Key, cv.Default)
but never assert the effective resolved value or its provenance; update each
test to also assert that cv.Value equals the expected effective value (for the
first test assert cv.Value == setting.Default) and that cv.Source indicates the
default/source used (e.g., compare to the package's default source constant or
the string "default"); do the same for
TestResolveConfigValue_WithFakeProjectRoot and
TestResolveConfigValue_AllKnownKeys (assert cv.Value and cv.Source reflect the
actual resolved value for the given project root).
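A sketch of what "assert resolution, not metadata" looks like. The `ConfigValue` shape and `resolve` function below are hypothetical stand-ins inferred from the field names in the comment, not the actual cmd/ox implementation:

```go
package main

import "fmt"

// ConfigValue mirrors the shape implied by the review (Key, Default,
// Value, Source); the real type may differ.
type ConfigValue struct {
	Key, Default, Value, Source string
}

// resolve is a hypothetical stand-in for ResolveConfigValue: with no
// override the effective Value falls back to Default, and Source
// records where the value came from.
func resolve(key, def string, overrides map[string]string) ConfigValue {
	if v, ok := overrides[key]; ok {
		return ConfigValue{key, def, v, "repo"}
	}
	return ConfigValue{key, def, def, "default"}
}

func main() {
	// The review's point: asserting only Key/Default would pass even if
	// precedence broke; Value and Source pin the actual resolution.
	cv := resolve("session_recording", "auto", nil)
	fmt.Printf("value=%s source=%s\n", cv.Value, cv.Source)

	cv = resolve("session_recording", "auto",
		map[string]string{"session_recording": "off"})
	fmt.Printf("value=%s source=%s\n", cv.Value, cv.Source)
}
```

A precedence bug that returned a repo value when none is set would flip `Source`, so asserting both fields catches the regression the metadata-only checks miss.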

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
internal/daemon/status_display_test.go (1)

122-138: NotEmpty-only checks are too weak for formatVersionMatch and formatWSStatus.

These tests can pass while semantics regress. Add assertions that distinguish match vs mismatch/version-missing outputs and synced vs error/missing workspace outputs.

As per coding guidelines, **/*_test.go: Use table-driven tests and test error paths in Go code.

Also applies to: 216-234

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/daemon/status_display_test.go` around lines 122 - 138, Tests for
formatVersionMatch and formatWSStatus are too weak because they only assert
NotEmpty; replace them with table-driven tests that assert specific expected
outputs for match vs mismatch vs missing daemon/version and for workspace synced
vs error/missing cases. Update TestFormatVersionMatch to iterate test cases
(e.g., {"daemon":"0.15.0","cli":"0.15.0","wantContains":"match"} etc.) and
assert the returned string contains or equals the expected marker/text for
match, mismatch, or missing version; likewise convert the formatWSStatus tests
into a table-driven loop asserting exact or contains-based expectations for
synced vs error/missing workspace, and include negative cases. Use the existing
function names formatVersionMatch and formatWSStatus to locate the code and keep
assertions strict (e.g., strings.Contains or exact equality) instead of
NotEmpty.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/daemon/status_display_test.go`:
- Around line 141-158: TestDetermineHealth is too loose: for RecentErrorCount:5
it asserts only NotEqual instead of a concrete severity. Replace the current
subtests with a table-driven test for determineHealth that enumerates boundary
cases (e.g., RecentErrorCount 0 -> HealthHealthy, counts at warning threshold ->
HealthWarning, count 5 -> HealthCritical), using StatusData entries and
asserting exact expected values (HealthHealthy, HealthWarning, HealthCritical).
Update the test function (TestDetermineHealth) to iterate over cases and use
t.Run with descriptive names so threshold regressions are caught precisely.
- Around line 74-90: TestShortenPath_StatusDisplay only ensures shortenPath
doesn't panic; update the test to assert expected outputs: for non-empty inputs
assert got is non-empty and for the "empty" case assert got == "" ; additionally
add specific behavior assertions for known inputs (e.g., verify "/tmp/project"
returns a shortened/unchanged path and "/Users/user/Documents/Code/project" is
shortened to include "~" or whatever shortenPath is specified to do) so failures
reflect incorrect shortening logic; locate the test
TestShortenPath_StatusDisplay and the shortenPath function to implement these
assertions.

---

Nitpick comments:
In `@internal/daemon/status_display_test.go`:
- Around line 122-138: Tests for formatVersionMatch and formatWSStatus are too
weak because they only assert NotEmpty; replace them with table-driven tests
that assert specific expected outputs for match vs mismatch vs missing
daemon/version and for workspace synced vs error/missing cases. Update
TestFormatVersionMatch to iterate test cases (e.g.,
{"daemon":"0.15.0","cli":"0.15.0","wantContains":"match"} etc.) and assert the
returned string contains or equals the expected marker/text for match, mismatch,
or missing version; likewise convert the formatWSStatus tests into a
table-driven loop asserting exact or contains-based expectations for synced vs
error/missing workspace, and include negative cases. Use the existing function
names formatVersionMatch and formatWSStatus to locate the code and keep
assertions strict (e.g., strings.Contains or exact equality) instead of
NotEmpty.
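The `determineHealth` boundary table called for in the inline comments could look like this sketch. The thresholds (warning at 1+, critical at 5+) are assumptions inferred from the comment, not the daemon's verified cutoffs:

```go
package main

import "fmt"

type Health int

const (
	HealthHealthy Health = iota
	HealthWarning
	HealthCritical
)

// determineHealth is a hypothetical stand-in with assumed thresholds;
// read the daemon source before pinning the real boundaries in tests.
func determineHealth(recentErrorCount int) Health {
	switch {
	case recentErrorCount >= 5:
		return HealthCritical
	case recentErrorCount >= 1:
		return HealthWarning
	default:
		return HealthHealthy
	}
}

func main() {
	// Boundary-focused table asserting exact severities, not NotEqual.
	tests := []struct {
		name  string
		count int
		want  Health
	}{
		{"zero errors", 0, HealthHealthy},
		{"first error", 1, HealthWarning},
		{"just below critical", 4, HealthWarning},
		{"critical threshold", 5, HealthCritical},
	}
	for _, tt := range tests {
		if got := determineHealth(tt.count); got != tt.want {
			fmt.Println("FAIL:", tt.name)
			return
		}
		fmt.Println("ok:", tt.name)
	}
}
```

Pinning both sides of each threshold (4 vs 5) is what catches an off-by-one regression that a `NotEqual` assertion lets through.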

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 34d4d64a-07ff-4fc8-9018-8211d95e4225

📥 Commits

Reviewing files that changed from the base of the PR and between e2f7ff0 and 9489d6c.

📒 Files selected for processing (2)
  • cmd/ox/agent_instances_coverage_test.go
  • internal/daemon/status_display_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • cmd/ox/agent_instances_coverage_test.go
