🎯 Areas for Improvement
1. Replace Manual Assertions with Testify
Every assertion in this file uses raw `if` checks with `t.Error`/`t.Errorf`. Testify produces far better failure messages and is the standard across this codebase.
Current Issues:
```go
// ❌ CURRENT — verbose manual checks with poor failure output
if len(findings) < tt.expectedCount {
	t.Errorf("Expected at least %d findings, got %d", tt.expectedCount, len(findings))
}
if !found {
	t.Error("Expected failure finding but didn't find one")
}
if finding.Severity != "critical" {
	t.Errorf("Expected critical severity for failure, got %s", finding.Severity)
}

// Validation loop for required fields:
for _, rec := range recommendations {
	if rec.Action == "" {
		t.Error("Recommendation missing action")
	}
	if rec.Reason == "" {
		t.Error("Recommendation missing reason")
	}
}
```
Recommended Changes:
```go
// ✅ IMPROVED — testify assertions
assert.GreaterOrEqual(t, len(findings), tt.expectedCount,
	"should generate at least %d findings", tt.expectedCount)

// Finding category search → use a helper + assert
failureFindings := filterFindingsByCategory(findings, "error")
require.NotEmpty(t, failureFindings, "should produce a failure finding for conclusion=failure")
assert.Equal(t, "critical", failureFindings[0].Severity, "failure finding should be critical severity")

// Required field validation
for i, rec := range recommendations {
	assert.NotEmpty(t, rec.Action, "recommendation[%d] must have an Action", i)
	assert.NotEmpty(t, rec.Reason, "recommendation[%d] must have a Reason", i)
	assert.NotEmpty(t, rec.Priority, "recommendation[%d] must have a Priority", i)
}
```
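The `filterFindingsByCategory` helper used above is not defined in the test file; a minimal sketch might look like the following, using a stand-in `Finding` struct (the real type lives in the package under test, so the field set here is an assumption):

```go
package main

import "fmt"

// Finding is a stand-in for the real type in the package under test;
// only the fields the assertions above rely on are included.
type Finding struct {
	Category string
	Severity string
}

// filterFindingsByCategory returns every finding whose Category matches.
func filterFindingsByCategory(findings []Finding, category string) []Finding {
	var matched []Finding
	for _, f := range findings {
		if f.Category == category {
			matched = append(matched, f)
		}
	}
	return matched
}

func main() {
	findings := []Finding{
		{Category: "error", Severity: "critical"},
		{Category: "cost", Severity: "warning"},
		{Category: "error", Severity: "warning"},
	}
	errs := filterFindingsByCategory(findings, "error")
	fmt.Println(len(errs))          // 2
	fmt.Println(errs[0].Severity)   // critical
}
```

Returning a slice rather than a single value lets the test assert on count and on individual fields with one helper.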
Why this matters: Testify provides structured diffs on failure and is the repository standard (see `scratchpad/testing.md`).
2. Import testify packages
The file currently imports only standard-library packages and the internal `workflow` package. Adding testify requires a small import change:
```go
// ❌ CURRENT
import (
	"encoding/json"
	"fmt"
	"strings"
	"testing"
	"time"

	"github.com/github/gh-aw/pkg/workflow"
)

// ✅ IMPROVED
import (
	"encoding/json"
	"fmt"
	"strings"
	"testing"
	"time"

	"github.com/github/gh-aw/pkg/workflow"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)
```
3. Category-search Pattern — Extract Helper or Use assert.Condition
Each table-driven test case with `hasFailure`/`hasCost`/`hasTooling` booleans repeats a search loop:
```go
// ❌ CURRENT — repeated pattern in every sub-test
found := false
for _, finding := range findings {
	if finding.Category == "cost" {
		found = true
		break
	}
}
if !found {
	t.Error("Expected cost finding but didn't find one")
}
```
Option A — small helper (preferred):
```go
func findingByCategory(findings []Finding, category string) (Finding, bool) {
	for _, f := range findings {
		if f.Category == category {
			return f, true
		}
	}
	return Finding{}, false
}

// In the test:
if tt.hasCost {
	f, ok := findingByCategory(findings, "cost")
	assert.True(t, ok, "expected a cost finding")
	_ = f // assert more fields if needed
}
```
Option B — inline with `assert.Condition`:
```go
if tt.hasCost {
	assert.Condition(t, func() bool {
		for _, f := range findings {
			if f.Category == "cost" {
				return true
			}
		}
		return false
	}, "expected a cost category finding")
}
```
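Both options encode the same predicate; a standalone sketch (with a stand-in `Finding` type, since the real one lives in the package under test) shows the helper doubling as the Option B condition:

```go
package main

import "fmt"

// Finding is a stand-in for the real type in the package under test.
type Finding struct{ Category string }

// findingByCategory is the Option A helper: first finding in the
// given category, plus a found flag.
func findingByCategory(findings []Finding, category string) (Finding, bool) {
	for _, f := range findings {
		if f.Category == category {
			return f, true
		}
	}
	return Finding{}, false
}

// hasCategory is the boolean predicate Option B inlines into
// assert.Condition; defined via the helper, the two options agree.
func hasCategory(findings []Finding, category string) bool {
	_, ok := findingByCategory(findings, category)
	return ok
}

func main() {
	findings := []Finding{{Category: "error"}, {Category: "cost"}}
	fmt.Println(hasCategory(findings, "cost"))    // true
	fmt.Println(hasCategory(findings, "tooling")) // false
}
```

Option A is preferred because the returned `Finding` lets follow-up assertions inspect fields, which `assert.Condition` cannot do.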
4. require for Critical Setup, assert for Validations
The test should use `require` when a nil or empty result would cause a panic in subsequent assertions:
```go
// ❌ CURRENT — no guard against nil result
findings := generateFindings(processedRun, tt.metrics, tt.errors)
if len(findings) < tt.expectedCount { ... }

// ✅ IMPROVED — guard with require when the result must not be nil/empty
findings := generateFindings(processedRun, tt.metrics, tt.errors)
// Only add require if nil would panic:
// require.NotNil(t, findings, "generateFindings must return a non-nil slice")
assert.GreaterOrEqual(t, len(findings), tt.expectedCount, ...)
```
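The `require.NotNil` guard is commented out above for a reason: in Go, `len` and `range` are safe on a nil slice, so only indexing into the result would actually panic. A quick illustration:

```go
package main

import "fmt"

func main() {
	// A nil slice, as a findings-generating function might return
	// when there is nothing to report.
	var findings []string

	fmt.Println(len(findings)) // 0 — len on a nil slice is safe
	for range findings {       // range over a nil slice is a no-op, also safe
	}
	// findings[0] would panic: index out of range.
	fmt.Println(findings == nil) // true — nil and empty are distinct but both safe here
}
```

So `require.NotNil` is only needed before assertions like `findings[0].Severity`; length checks can stay as plain `assert` calls.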
5. JSON Structure Test Improvements
`TestAuditDataJSONStructure` uses raw `json.Unmarshal` plus manual field checks. Testify's `require.NoError` and `assert.Contains` replace that boilerplate and check keys directly on the unmarshaled map:
```go
// ❌ CURRENT
var data map[string]interface{}
err = json.Unmarshal(jsonBytes, &data)
if err != nil {
	t.Fatalf("Failed to unmarshal JSON: %v", err)
}
if _, ok := data["workflow_run"]; !ok {
	t.Error("JSON missing 'workflow_run' key")
}

// ✅ IMPROVED
var data map[string]interface{}
require.NoError(t, json.Unmarshal(jsonBytes, &data), "JSON must unmarshal cleanly")
assert.Contains(t, data, "workflow_run", "JSON should contain 'workflow_run' key")
assert.Contains(t, data, "findings", "JSON should contain 'findings' key")
assert.Contains(t, data, "metrics", "JSON should contain 'metrics' key")
```
Overview
The test file `./pkg/cli/audit_agent_output_test.go` has been selected for quality improvement by the Testify Uber Super Expert. This 469-line file contains 4 test functions covering `generateFindings`, `generateRecommendations`, `generatePerformanceMetrics`, and JSON structure, but relies entirely on manual `if`/`t.Error` patterns with zero testify assertions, despite the rest of the codebase using testify extensively.
Current State
- `./pkg/cli/audit_agent_output_test.go`
- `./pkg/cli/audit_report_analysis.go`
- `t.Error`/`t.Errorf` calls: 24
Strengths ✅
- `t.Run()` throughout — good structure
- The core functions (`generateFindings`, `generateRecommendations`, `generatePerformanceMetrics`) are all represented in tests
📋 Implementation Guidelines
Priority Order
1. Add the `testify/assert` and `testify/require` imports
2. Replace `t.Error`/`t.Errorf` calls with testify equivalents
3. Add a `findingByCategory`/`findingsByCategory` helper to eliminate repeated search loops
4. Use `require.NoError` for `t.Fatalf("Failed to ... %v", err)` patterns
5. Use `assert.GreaterOrEqual`/`assert.Len` for slice length comparisons
Best Practices from `scratchpad/testing.md`
- `require.*` for critical setup (stops test on failure)
- `assert.*` for test validations (continues checking)
- `t.Run()` and descriptive names
Testing Commands
```shell
go test -v ./pkg/cli/ -run "TestKeyFindings|TestRecommendations|TestPerformanceMetrics|TestAuditData"
```
Acceptance Criteria
- `testify/assert` and `testify/require` imported
- `t.Error`/`t.Errorf` calls replaced with testify assertions
- `t.Fatalf` in the JSON test replaced with `require.NoError`
- Repeated category-search loops replaced with a helper or `assert.Condition`
- `go test -v ./pkg/cli/ -run "TestKeyFindings|TestRecommendations|TestPerformanceMetrics|TestAuditData"` passes
- Patterns follow `scratchpad/testing.md`
Additional Context
- See `scratchpad/testing.md` for comprehensive testing patterns
- `pkg/agentdrain/miner_test.go` shows exemplary testify usage patterns
Priority: Medium
Effort: Small (mechanical find-and-replace of assertion patterns)
Expected Impact: Cleaner test failure messages, alignment with codebase standards
Files Involved:
- `./pkg/cli/audit_agent_output_test.go`
- `./pkg/cli/audit_report_analysis.go`
References: