diff --git a/docs/adr/27825-reorganize-compile-pipeline-helpers-by-single-responsibility.md b/docs/adr/27825-reorganize-compile-pipeline-helpers-by-single-responsibility.md new file mode 100644 index 0000000000..8f37ff7353 --- /dev/null +++ b/docs/adr/27825-reorganize-compile-pipeline-helpers-by-single-responsibility.md @@ -0,0 +1,70 @@ +# ADR-27825: Reorganize Compile Pipeline Helpers by Single-Responsibility Concern + +**Date**: 2026-04-22 +**Status**: Draft +**Deciders**: pelikhan + +--- + +## Part 1 — Narrative (Human-Friendly) + +### Context + +The compile pipeline helpers in `pkg/cli` (the `compile_*.go` files) grew organically as new compilation features were added. By the time of this decision, three structural problems had accumulated: (1) `CompilationStats` and `WorkflowFailure` types were defined in `compile_config.go` while their operations (`trackWorkflowFailure`, `printCompilationSummary`) lived in `compile_file_operations.go`, splitting a cohesive abstraction across two files; (2) `compile_post_processing.go` bundled manifest/workflow generation wrappers alongside infrastructure side-effects such as updating `.gitattributes` and persisting the action cache; (3) each directory-scanning tool (poutine, runner-guard) had its own private pass-through wrapper that duplicated the same strict/non-strict error-handling pattern verbatim. + +### Decision + +We will reorganize the compile pipeline helpers so that each file owns exactly one concern: types and operations for a concept live together, infrastructure side-effects are isolated from workflow-generation logic, and repeated patterns are unified into shared helpers. Concretely: `compile_stats.go` owns all stats types and operations; `compile_infrastructure.go` owns `.gitattributes` and action-cache side-effects; `compile_file_operations.go` gains the path helper `getAbsoluteWorkflowDir`; and the shared helpers `runBatchLockFileTool` (pre-existing) and `runBatchDirectoryTool` (new) together replace four near-identical pass-through wrappers. 
+ +### Alternatives Considered + +#### Alternative 1: Keep the existing file layout and add documentation + +Rather than moving code, add package-level comments and godoc cross-references to explain where things live. This is low-risk and requires no diff. It was rejected because documentation describes structure but does not fix it — the root cause is that related code is in different files, which forces developers to open multiple files to understand or modify a single abstraction. Documentation decays as code changes; co-location is self-enforcing. + +#### Alternative 2: Extract compile infrastructure into a separate internal package + +Move the infrastructure helpers (`updateGitAttributes`, `saveActionCache`) into a new `pkg/cli/compileinfra` sub-package. This maximises isolation and makes the dependency graph explicit. It was not chosen because the helpers are small, tightly coupled to the compile pipeline's types, and a new package would add import indirection without meaningful boundary enforcement — the sub-package and its caller would still be in the same binary and changed together. A file boundary is sufficient for this level of separation. + +### Consequences + +#### Positive +- Types and their operations now live in the same file, making navigation predictable (open `compile_stats.go` to understand compilation statistics end-to-end). +- The batch wrapper duplication is eliminated: four near-identical functions collapse into one parameterised helper, reducing the surface for drift bugs. +- `compile_post_processing.go` becomes focused on manifest and workflow generation, making its purpose clear from its name. +- The strict/non-strict error-handling policy for directory tools is now tested in one place (`compile_batch_operations_test.go`). + +#### Negative +- The refactor is a non-trivial file churn (eight files changed) that creates a large, hard-to-review diff with no functional change. Reviewers must verify that no logic was lost during the moves. 
+- Any external tools or scripts that relied on the unexported symbol locations (e.g., `grep`-based tooling or generated stubs) may need updating. + +#### Neutral +- `compile_pipeline.go` call sites are updated to reference the unified helpers directly; callers are functionally equivalent but read slightly differently. +- The directory structure of `pkg/cli/` grows by two files (`compile_infrastructure.go`, `compile_stats.go`), which is consistent with the existing naming convention. + +--- + +## Part 2 — Normative Specification (RFC 2119) + +> The key words **MUST**, **MUST NOT**, **REQUIRED**, **SHALL**, **SHALL NOT**, **SHOULD**, **SHOULD NOT**, **RECOMMENDED**, **MAY**, and **OPTIONAL** in this section are to be interpreted as described in [RFC 2119](https://www.rfc-editor.org/rfc/rfc2119). + +### File Ownership + +1. The `compile_stats.go` file **MUST** contain the `CompilationStats` and `WorkflowFailure` type definitions and all functions that operate on those types (`trackWorkflowFailure`, `printCompilationSummary`, `collectWorkflowStatisticsWrapper`). +2. The `compile_infrastructure.go` file **MUST** contain infrastructure side-effect helpers (`updateGitAttributes`, `saveActionCache`) and **MUST NOT** contain workflow-generation or manifest logic. +3. The `compile_post_processing.go` file **MUST NOT** contain infrastructure side-effect helpers; it **MUST** be limited to manifest and workflow generation wrappers. +4. Path and file system helpers (functions that compute or manipulate file paths) **SHOULD** reside in `compile_file_operations.go`. + +### Batch Tool Helpers + +1. New directory-scanning batch tool wrappers **MUST** delegate to `runBatchDirectoryTool` rather than duplicating the strict/non-strict error-handling pattern inline. +2. New lock-file batch tool wrappers **MUST** delegate to `runBatchLockFileTool` rather than duplicating the empty-list guard and logging pattern inline. +3. 
Pass-through wrapper functions that add no logic beyond calling a single public function **MUST NOT** be introduced. + +### Conformance + +An implementation is considered conformant with this ADR if it satisfies all **MUST** and **MUST NOT** requirements above. Failure to meet any **MUST** or **MUST NOT** requirement constitutes non-conformance. + +--- + +*This is a DRAFT ADR generated by the [Design Decision Gate](https://github.com/github/gh-aw/actions/runs/24780779719) workflow. The PR author must review, complete, and finalize this document before the PR can merge.* diff --git a/pkg/cli/compile_batch_operations.go b/pkg/cli/compile_batch_operations.go index 3e68509a34..49c80e3067 100644 --- a/pkg/cli/compile_batch_operations.go +++ b/pkg/cli/compile_batch_operations.go @@ -14,7 +14,7 @@ // # Key Functions // // Batch Linting: -// - runBatchActionlint() - Run actionlint on multiple lock files +// - RunActionlintOnFiles() - Run actionlint on multiple lock files // // File Cleanup: // - purgeOrphanedLockFiles() - Remove orphaned .lock.yml files @@ -40,19 +40,13 @@ var compileBatchOperationsLog = logger.New("cli:compile_batch_operations") // RunActionlintOnFiles runs actionlint on multiple lock files in a single batch. // This is more efficient than running actionlint once per file. func RunActionlintOnFiles(lockFiles []string, verbose bool, strict bool) error { - if len(lockFiles) == 0 { - return nil - } - return runActionlintOnFiles(lockFiles, verbose, strict) + return runBatchLockFileTool("actionlint", lockFiles, verbose, strict, runActionlintOnFiles) } // RunZizmorOnFiles runs zizmor on multiple lock files in a single batch. // This is more efficient than running zizmor once per file. 
func RunZizmorOnFiles(lockFiles []string, verbose bool, strict bool) error { - if len(lockFiles) == 0 { - return nil - } - return runZizmorOnFiles(lockFiles, verbose, strict) + return runBatchLockFileTool("zizmor", lockFiles, verbose, strict, runZizmorOnFiles) } // RunPoutineOnDirectory runs poutine security scanner once on a directory. @@ -89,44 +83,17 @@ func runBatchLockFileTool(toolName string, lockFiles []string, verbose bool, str return nil } -// runBatchActionlint runs actionlint on all lock files in batch -func runBatchActionlint(lockFiles []string, verbose bool, strict bool) error { - return runBatchLockFileTool("actionlint", lockFiles, verbose, strict, RunActionlintOnFiles) -} - -// runBatchZizmor runs zizmor security scanner on all lock files in batch -func runBatchZizmor(lockFiles []string, verbose bool, strict bool) error { - return runBatchLockFileTool("zizmor", lockFiles, verbose, strict, RunZizmorOnFiles) -} +// runBatchDirectoryTool runs a directory-based batch tool with uniform error handling +func runBatchDirectoryTool(toolName string, workflowDir string, verbose bool, strict bool, runner func(string, bool, bool) error) error { + compileBatchOperationsLog.Printf("Running batch %s on directory: %s", toolName, workflowDir) -// runBatchPoutine runs poutine security scanner once for the entire directory -func runBatchPoutine(workflowDir string, verbose bool, strict bool) error { - compileBatchOperationsLog.Printf("Running batch poutine on directory: %s", workflowDir) - - if err := RunPoutineOnDirectory(workflowDir, verbose, strict); err != nil { + if err := runner(workflowDir, verbose, strict); err != nil { if strict { - return fmt.Errorf("poutine security scan failed: %w", err) - } - // In non-strict mode, poutine errors are warnings - if verbose { - fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("poutine warnings: %v", err))) - } - } - - return nil -} - -// runBatchRunnerGuard runs runner-guard taint analysis scanner once for the 
entire directory -func runBatchRunnerGuard(workflowDir string, verbose bool, strict bool) error { - compileBatchOperationsLog.Printf("Running batch runner-guard on directory: %s", workflowDir) - - if err := RunRunnerGuardOnDirectory(workflowDir, verbose, strict); err != nil { - if strict { - return fmt.Errorf("runner-guard taint analysis failed: %w", err) + return fmt.Errorf("%s failed: %w", toolName, err) } - // In non-strict mode, runner-guard errors are warnings + // In non-strict mode, errors are warnings if verbose { - fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("runner-guard warnings: %v", err))) + fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("%s warnings: %v", toolName, err))) } } diff --git a/pkg/cli/compile_batch_operations_test.go b/pkg/cli/compile_batch_operations_test.go new file mode 100644 index 0000000000..c8fb39bb80 --- /dev/null +++ b/pkg/cli/compile_batch_operations_test.go @@ -0,0 +1,41 @@ +//go:build !integration + +package cli + +import ( + "errors" + "strings" + "testing" +) + +func TestRunActionlintOnFiles_EmptyList(t *testing.T) { + err := RunActionlintOnFiles(nil, false, false) + if err != nil { + t.Fatalf("expected nil error for empty lock file list, got %v", err) + } +} + +func TestRunBatchDirectoryTool_NonStrictSwallowsErrors(t *testing.T) { + runner := func(_ string, _ bool, _ bool) error { + return errors.New("boom") + } + + err := runBatchDirectoryTool("poutine", "/tmp/workflows", false, false, runner) + if err != nil { + t.Fatalf("expected nil error in non-strict mode, got %v", err) + } +} + +func TestRunBatchDirectoryTool_StrictWrapsErrors(t *testing.T) { + runner := func(_ string, _ bool, _ bool) error { + return errors.New("boom") + } + + err := runBatchDirectoryTool("runner-guard", "/tmp/workflows", false, true, runner) + if err == nil { + t.Fatal("expected error in strict mode, got nil") + } + if !strings.Contains(err.Error(), "runner-guard failed: boom") { + t.Fatalf("expected wrapped 
error message, got %q", err.Error()) + } +} diff --git a/pkg/cli/compile_config.go b/pkg/cli/compile_config.go index 32115979db..b986d858c1 100644 --- a/pkg/cli/compile_config.go +++ b/pkg/cli/compile_config.go @@ -35,22 +35,6 @@ type CompileConfig struct { PriorManifestFile string // Path to a JSON file containing pre-cached manifests (map[lockFile]*GHAWManifest) collected at MCP server startup; takes precedence over git HEAD / filesystem reads for safe update enforcement } -// WorkflowFailure represents a failed workflow with its error count -type WorkflowFailure struct { - Path string // File path of the workflow - ErrorCount int // Number of errors in this workflow - ErrorMessages []string // Actual error messages to display to the user -} - -// CompilationStats tracks the results of workflow compilation -type CompilationStats struct { - Total int - Errors int - Warnings int - FailedWorkflows []string // Names of workflows that failed compilation (deprecated, use FailedWorkflowDetails) - FailureDetails []WorkflowFailure // Detailed information about failed workflows -} - // CompileValidationError represents a single validation error or warning type CompileValidationError struct { Type string `json:"type"` diff --git a/pkg/cli/compile_file_operations.go b/pkg/cli/compile_file_operations.go index 171f0fedc5..b068a1cd62 100644 --- a/pkg/cli/compile_file_operations.go +++ b/pkg/cli/compile_file_operations.go @@ -78,6 +78,15 @@ func getRepositoryRelativePath(absPath string) (string, error) { return relPath, nil } +// getAbsoluteWorkflowDir converts a relative workflow dir to absolute path +func getAbsoluteWorkflowDir(workflowDir string, gitRoot string) string { + absWorkflowDir := workflowDir + if !filepath.IsAbs(absWorkflowDir) { + absWorkflowDir = filepath.Join(gitRoot, workflowDir) + } + return absWorkflowDir +} + // compileSingleFile compiles a single markdown workflow file and updates compilation statistics // If checkExists is true, the function will check if 
the file exists before compiling // Returns true if compilation was attempted (file exists or checkExists is false), false otherwise @@ -270,60 +279,3 @@ func handleFileDeleted(mdFile string, verbose bool) { } } } - -// trackWorkflowFailure adds a workflow failure to the compilation statistics -func trackWorkflowFailure(stats *CompilationStats, workflowPath string, errorCount int, errorMessages []string) { - // Add to FailedWorkflows for backward compatibility - stats.FailedWorkflows = append(stats.FailedWorkflows, filepath.Base(workflowPath)) - - // Add detailed failure information - stats.FailureDetails = append(stats.FailureDetails, WorkflowFailure{ - Path: workflowPath, - ErrorCount: errorCount, - ErrorMessages: errorMessages, - }) -} - -// printCompilationSummary prints a summary of the compilation results -func printCompilationSummary(stats *CompilationStats) { - if stats.Total == 0 { - return - } - - summary := fmt.Sprintf("Compiled %d workflow(s): %d error(s), %d warning(s)", - stats.Total, stats.Errors, stats.Warnings) - - // Use different formatting based on whether there were errors - if stats.Errors > 0 { - fmt.Fprintln(os.Stderr, console.FormatErrorMessage(summary)) - - // Show agent-friendly list of failed workflow IDs first - if len(stats.FailureDetails) > 0 { - fmt.Fprintln(os.Stderr) - fmt.Fprintln(os.Stderr, console.FormatErrorMessage("Failed workflows:")) - for _, failure := range stats.FailureDetails { - fmt.Fprintf(os.Stderr, " ✗ %s\n", filepath.Base(failure.Path)) - } - fmt.Fprintln(os.Stderr) - - // Display the actual error messages for each failed workflow - for _, failure := range stats.FailureDetails { - for _, errMsg := range failure.ErrorMessages { - fmt.Fprintln(os.Stderr, errMsg) - } - } - } else if len(stats.FailedWorkflows) > 0 { - // Fallback for backward compatibility if FailureDetails is not populated - fmt.Fprintln(os.Stderr) - fmt.Fprintln(os.Stderr, console.FormatErrorMessage("Failed workflows:")) - for _, workflow := range 
stats.FailedWorkflows { - fmt.Fprintf(os.Stderr, " ✗ %s\n", workflow) - } - fmt.Fprintln(os.Stderr) - } - } else if stats.Warnings > 0 { - fmt.Fprintln(os.Stderr, console.FormatWarningMessage(summary)) - } else { - fmt.Fprintln(os.Stderr, console.FormatSuccessMessage(summary)) - } -} diff --git a/pkg/cli/compile_infrastructure.go b/pkg/cli/compile_infrastructure.go new file mode 100644 index 0000000000..fb767b61a9 --- /dev/null +++ b/pkg/cli/compile_infrastructure.go @@ -0,0 +1,68 @@ +package cli + +import ( + "fmt" + "os" + + "github.com/github/gh-aw/pkg/console" + "github.com/github/gh-aw/pkg/logger" + "github.com/github/gh-aw/pkg/workflow" +) + +var compileInfrastructureLog = logger.New("cli:compile_infrastructure") + +// updateGitAttributes ensures .gitattributes marks .lock.yml files as generated +func updateGitAttributes(successCount int, actionCache *workflow.ActionCache, verbose bool) error { + compileInfrastructureLog.Printf("Updating .gitattributes (compiled=%d, actionCache=%v)", successCount, actionCache != nil) + + hasActionCacheEntries := actionCache != nil && len(actionCache.Entries) > 0 + + // Only update if we successfully compiled workflows or have action cache entries + if successCount > 0 || hasActionCacheEntries { + compileInfrastructureLog.Printf("Updating .gitattributes (compiled=%d, actionCache=%v)", successCount, hasActionCacheEntries) + updated, err := ensureGitAttributes() + if err != nil { + compileInfrastructureLog.Printf("Failed to update .gitattributes: %v", err) + if verbose { + fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("Failed to update .gitattributes: %v", err))) + } + return err + } + if updated { + compileInfrastructureLog.Printf("Successfully updated .gitattributes") + if verbose { + fmt.Fprintln(os.Stderr, console.FormatSuccessMessage("Updated .gitattributes to mark .lock.yml files as generated")) + } + } else { + compileInfrastructureLog.Print(".gitattributes already up to date") + } + } else { + 
compileInfrastructureLog.Print("Skipping .gitattributes update (no compiled workflows and no action cache entries)") + } + + return nil +} + +// saveActionCache saves the action cache after all compilations +func saveActionCache(actionCache *workflow.ActionCache, verbose bool) error { + if actionCache == nil { + return nil + } + + compileInfrastructureLog.Print("Saving action cache") + + if err := actionCache.Save(); err != nil { + compileInfrastructureLog.Printf("Failed to save action cache: %v", err) + if verbose { + fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("Failed to save action cache: %v", err))) + } + return err + } + + compileInfrastructureLog.Print("Action cache saved successfully") + if verbose { + fmt.Fprintln(os.Stderr, console.FormatSuccessMessage("Action cache saved to "+actionCache.GetCachePath())) + } + + return nil +} diff --git a/pkg/cli/compile_pipeline.go b/pkg/cli/compile_pipeline.go index b72e0f8d8f..844f8b0c1d 100644 --- a/pkg/cli/compile_pipeline.go +++ b/pkg/cli/compile_pipeline.go @@ -135,7 +135,7 @@ func compileSpecificFiles( // Run batch actionlint on all collected lock files if config.Actionlint && !config.NoEmit && len(lockFilesForActionlint) > 0 { - if err := runBatchActionlint(lockFilesForActionlint, config.Verbose && !config.JSONOutput, config.Strict); err != nil { + if err := RunActionlintOnFiles(lockFilesForActionlint, config.Verbose && !config.JSONOutput, config.Strict); err != nil { if config.Strict { return workflowDataList, err } @@ -144,7 +144,7 @@ func compileSpecificFiles( // Run batch zizmor on all collected lock files if config.Zizmor && !config.NoEmit && len(lockFilesForZizmor) > 0 { - if err := runBatchZizmor(lockFilesForZizmor, config.Verbose && !config.JSONOutput, config.Strict); err != nil { + if err := RunZizmorOnFiles(lockFilesForZizmor, config.Verbose && !config.JSONOutput, config.Strict); err != nil { if config.Strict { return workflowDataList, err } @@ -155,7 +155,7 @@ func 
compileSpecificFiles( // Get the directory from the first lock file (all should be in same directory) if config.Poutine && !config.NoEmit && len(lockFilesForDirTools) > 0 { workflowDir := filepath.Dir(lockFilesForDirTools[0]) - if err := runBatchPoutine(workflowDir, config.Verbose && !config.JSONOutput, config.Strict); err != nil { + if err := runBatchDirectoryTool("poutine", workflowDir, config.Verbose && !config.JSONOutput, config.Strict, RunPoutineOnDirectory); err != nil { if config.Strict { return workflowDataList, err } @@ -166,7 +166,7 @@ func compileSpecificFiles( // Get the directory from the first lock file (all should be in same directory) if config.RunnerGuard && !config.NoEmit && len(lockFilesForDirTools) > 0 { workflowDir := filepath.Dir(lockFilesForDirTools[0]) - if err := runBatchRunnerGuard(workflowDir, config.Verbose && !config.JSONOutput, config.Strict); err != nil { + if err := runBatchDirectoryTool("runner-guard", workflowDir, config.Verbose && !config.JSONOutput, config.Strict, RunRunnerGuardOnDirectory); err != nil { if config.Strict { return workflowDataList, err } @@ -312,7 +312,7 @@ func compileAllFilesInDirectory( // Run batch actionlint if config.Actionlint && !config.NoEmit && len(lockFilesForActionlint) > 0 { - if err := runBatchActionlint(lockFilesForActionlint, config.Verbose && !config.JSONOutput, config.Strict); err != nil { + if err := RunActionlintOnFiles(lockFilesForActionlint, config.Verbose && !config.JSONOutput, config.Strict); err != nil { if config.Strict { return workflowDataList, err } @@ -321,7 +321,7 @@ func compileAllFilesInDirectory( // Run batch zizmor if config.Zizmor && !config.NoEmit && len(lockFilesForZizmor) > 0 { - if err := runBatchZizmor(lockFilesForZizmor, config.Verbose && !config.JSONOutput, config.Strict); err != nil { + if err := RunZizmorOnFiles(lockFilesForZizmor, config.Verbose && !config.JSONOutput, config.Strict); err != nil { if config.Strict { return workflowDataList, err } @@ -330,7 +330,7 @@ 
func compileAllFilesInDirectory( // Run batch poutine once on the workflow directory if config.Poutine && !config.NoEmit && len(lockFilesForDirTools) > 0 { - if err := runBatchPoutine(workflowsDir, config.Verbose && !config.JSONOutput, config.Strict); err != nil { + if err := runBatchDirectoryTool("poutine", workflowsDir, config.Verbose && !config.JSONOutput, config.Strict, RunPoutineOnDirectory); err != nil { if config.Strict { return workflowDataList, err } @@ -339,7 +339,7 @@ func compileAllFilesInDirectory( // Run batch runner-guard once on the workflow directory if config.RunnerGuard && !config.NoEmit && len(lockFilesForDirTools) > 0 { - if err := runBatchRunnerGuard(workflowsDir, config.Verbose && !config.JSONOutput, config.Strict); err != nil { + if err := runBatchDirectoryTool("runner-guard", workflowsDir, config.Verbose && !config.JSONOutput, config.Strict, RunRunnerGuardOnDirectory); err != nil { if config.Strict { return workflowDataList, err } diff --git a/pkg/cli/compile_post_processing.go b/pkg/cli/compile_post_processing.go index f89448d784..20e7473e0b 100644 --- a/pkg/cli/compile_post_processing.go +++ b/pkg/cli/compile_post_processing.go @@ -17,9 +17,6 @@ // - generateDependabotManifestsWrapper() - Generate Dependabot manifests // - generateMaintenanceWorkflowWrapper() - Generate maintenance workflow // -// Statistics: -// - collectWorkflowStatisticsWrapper() - Collect workflow statistics -// // These functions abstract post-processing operations, allowing the main compile // orchestrator to focus on coordination while these handle generation and validation. 
@@ -28,11 +25,9 @@ package cli import ( "fmt" "os" - "path/filepath" "github.com/github/gh-aw/pkg/console" "github.com/github/gh-aw/pkg/logger" - "github.com/github/gh-aw/pkg/stringutil" "github.com/github/gh-aw/pkg/workflow" ) @@ -90,88 +85,3 @@ func generateMaintenanceWorkflowWrapper( return nil } - -// collectWorkflowStatisticsWrapper collects and returns workflow statistics -func collectWorkflowStatisticsWrapper(markdownFiles []string) []*WorkflowStats { - compilePostProcessingLog.Printf("Collecting workflow statistics for %d files", len(markdownFiles)) - - var statsList []*WorkflowStats - for _, file := range markdownFiles { - resolvedFile, err := resolveWorkflowFile(file, false) - if err != nil { - continue // Skip files that couldn't be resolved - } - lockFile := stringutil.MarkdownToLockFile(resolvedFile) - if workflowStats, err := collectWorkflowStats(lockFile); err == nil { - statsList = append(statsList, workflowStats) - } - } - - compilePostProcessingLog.Printf("Collected statistics for %d workflows", len(statsList)) - return statsList -} - -// updateGitAttributes ensures .gitattributes marks .lock.yml files as generated -func updateGitAttributes(successCount int, actionCache *workflow.ActionCache, verbose bool) error { - compilePostProcessingLog.Printf("Updating .gitattributes (compiled=%d, actionCache=%v)", successCount, actionCache != nil) - - hasActionCacheEntries := actionCache != nil && len(actionCache.Entries) > 0 - - // Only update if we successfully compiled workflows or have action cache entries - if successCount > 0 || hasActionCacheEntries { - compilePostProcessingLog.Printf("Updating .gitattributes (compiled=%d, actionCache=%v)", successCount, hasActionCacheEntries) - updated, err := ensureGitAttributes() - if err != nil { - compilePostProcessingLog.Printf("Failed to update .gitattributes: %v", err) - if verbose { - fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("Failed to update .gitattributes: %v", err))) - } - return 
err - } - if updated { - compilePostProcessingLog.Printf("Successfully updated .gitattributes") - if verbose { - fmt.Fprintln(os.Stderr, console.FormatSuccessMessage("Updated .gitattributes to mark .lock.yml files as generated")) - } - } else { - compilePostProcessingLog.Print(".gitattributes already up to date") - } - } else { - compilePostProcessingLog.Print("Skipping .gitattributes update (no compiled workflows and no action cache entries)") - } - - return nil -} - -// saveActionCache saves the action cache after all compilations -func saveActionCache(actionCache *workflow.ActionCache, verbose bool) error { - if actionCache == nil { - return nil - } - - compilePostProcessingLog.Print("Saving action cache") - - if err := actionCache.Save(); err != nil { - compilePostProcessingLog.Printf("Failed to save action cache: %v", err) - if verbose { - fmt.Fprintln(os.Stderr, console.FormatWarningMessage(fmt.Sprintf("Failed to save action cache: %v", err))) - } - return err - } - - compilePostProcessingLog.Print("Action cache saved successfully") - if verbose { - fmt.Fprintln(os.Stderr, console.FormatSuccessMessage("Action cache saved to "+actionCache.GetCachePath())) - } - - return nil -} - -// getAbsoluteWorkflowDir converts a relative workflow dir to absolute path -func getAbsoluteWorkflowDir(workflowDir string, gitRoot string) string { - absWorkflowDir := workflowDir - if !filepath.IsAbs(absWorkflowDir) { - absWorkflowDir = filepath.Join(gitRoot, workflowDir) - } - return absWorkflowDir -} diff --git a/pkg/cli/compile_stats.go b/pkg/cli/compile_stats.go index 722b7f6ea7..c5e0c48de5 100644 --- a/pkg/cli/compile_stats.go +++ b/pkg/cli/compile_stats.go @@ -9,6 +9,7 @@ import ( "github.com/github/gh-aw/pkg/console" "github.com/github/gh-aw/pkg/logger" + "github.com/github/gh-aw/pkg/stringutil" "github.com/github/gh-aw/pkg/styles" "github.com/github/gh-aw/pkg/tty" "github.com/goccy/go-yaml" @@ -16,6 +17,22 @@ import ( var compileStatsLog = logger.New("cli:compile_stats") 
+// WorkflowFailure represents a failed workflow with its error count +type WorkflowFailure struct { + Path string // File path of the workflow + ErrorCount int // Number of errors in this workflow + ErrorMessages []string // Actual error messages to display to the user +} + +// CompilationStats tracks the results of workflow compilation +type CompilationStats struct { + Total int + Errors int + Warnings int + FailedWorkflows []string // Names of workflows that failed compilation (deprecated, use FailureDetails) + FailureDetails []WorkflowFailure // Detailed information about failed workflows +} + // WorkflowStats holds statistics about a compiled workflow type WorkflowStats struct { Workflow string @@ -91,6 +108,83 @@ func collectWorkflowStats(lockFilePath string) (*WorkflowStats, error) { return stats, nil } +// trackWorkflowFailure adds a workflow failure to the compilation statistics +func trackWorkflowFailure(stats *CompilationStats, workflowPath string, errorCount int, errorMessages []string) { + // Add to FailedWorkflows for backward compatibility + stats.FailedWorkflows = append(stats.FailedWorkflows, filepath.Base(workflowPath)) + + // Add detailed failure information + stats.FailureDetails = append(stats.FailureDetails, WorkflowFailure{ + Path: workflowPath, + ErrorCount: errorCount, + ErrorMessages: errorMessages, + }) +} + +// printCompilationSummary prints a summary of the compilation results +func printCompilationSummary(stats *CompilationStats) { + if stats.Total == 0 { + return + } + + summary := fmt.Sprintf("Compiled %d workflow(s): %d error(s), %d warning(s)", + stats.Total, stats.Errors, stats.Warnings) + + // Use different formatting based on whether there were errors + if stats.Errors > 0 { + fmt.Fprintln(os.Stderr, console.FormatErrorMessage(summary)) + + // Show agent-friendly list of failed workflow IDs first + if len(stats.FailureDetails) > 0 { + fmt.Fprintln(os.Stderr) + fmt.Fprintln(os.Stderr, console.FormatErrorMessage("Failed 
workflows:")) + for _, failure := range stats.FailureDetails { + fmt.Fprintf(os.Stderr, " ✗ %s\n", filepath.Base(failure.Path)) + } + fmt.Fprintln(os.Stderr) + + // Display the actual error messages for each failed workflow + for _, failure := range stats.FailureDetails { + for _, errMsg := range failure.ErrorMessages { + fmt.Fprintln(os.Stderr, errMsg) + } + } + } else if len(stats.FailedWorkflows) > 0 { + // Fallback for backward compatibility if FailureDetails is not populated + fmt.Fprintln(os.Stderr) + fmt.Fprintln(os.Stderr, console.FormatErrorMessage("Failed workflows:")) + for _, workflow := range stats.FailedWorkflows { + fmt.Fprintf(os.Stderr, " ✗ %s\n", workflow) + } + fmt.Fprintln(os.Stderr) + } + } else if stats.Warnings > 0 { + fmt.Fprintln(os.Stderr, console.FormatWarningMessage(summary)) + } else { + fmt.Fprintln(os.Stderr, console.FormatSuccessMessage(summary)) + } +} + +// collectWorkflowStatisticsWrapper collects and returns workflow statistics +func collectWorkflowStatisticsWrapper(markdownFiles []string) []*WorkflowStats { + compileStatsLog.Printf("Collecting workflow statistics for %d files", len(markdownFiles)) + + var statsList []*WorkflowStats + for _, file := range markdownFiles { + resolvedFile, err := resolveWorkflowFile(file, false) + if err != nil { + continue // Skip files that couldn't be resolved + } + lockFile := stringutil.MarkdownToLockFile(resolvedFile) + if workflowStats, err := collectWorkflowStats(lockFile); err == nil { + statsList = append(statsList, workflowStats) + } + } + + compileStatsLog.Printf("Collected statistics for %d workflows", len(statsList)) + return statsList +} + // displayStatsTable displays workflow statistics in a sorted table func displayStatsTable(statsList []*WorkflowStats) { if len(statsList) == 0 {