Analysis Date: 2026-04-13
Triggered by: @pelikhan
Scope: 190 total workflows, 92 explicitly using `engine: copilot` (~128 effective, including implicit defaults)
Previous Analysis: 2026-04-12 (run #24316330360)
📊 Executive Summary
This is the third consecutive daily analysis of Copilot CLI feature adoption in this repository. The repository has grown from 186 → 190 workflows (+4), with 92 now explicitly specifying engine: copilot (+3 from last run). Feature adoption patterns are remarkably stable, indicating that most workflows were created with a consistent template — but that template doesn't leverage several high-value Copilot-only capabilities.
Key findings this run:
- `max-continuations` (Copilot's unique autopilot mode) remains at 2% adoption — unchanged for 3 days
- 0 workflows pin a specific Copilot CLI version — leaving all 92+ at risk from unexpected CLI breaking changes
- 21 workflows specify `toolsets:` without a value, leaving GitHub MCP permission scope ambiguous
- 8 of 11 custom agent files in `.github/agents/` are never referenced via `engine.agent`
- `bare: true` and `mcp-scripts` remain near-zero despite strong use cases in this repo
🔴 Critical Findings
High Priority: Unspecified GitHub Toolsets (21 workflows)
Twenty-one workflows declare github: with toolsets: but no list value — the most common pattern is:
```yaml
tools:
  github:
    toolsets:
```
This leaves the granted permission scope implicit. Every GitHub API workflow should name exactly what it needs.
High Priority: toolsets: [all] in 3 Workflows
Three workflows grant every GitHub MCP tool to the agent. The principle of least privilege applies here — each should be narrowed to the specific toolsets actually needed.
Medium Priority: No Version Pinning (0 of 92 workflows)
No Copilot workflow pins a version. A single bad Copilot CLI release could silently break all 92 workflows simultaneously.
1️⃣ Current State Analysis
View Copilot CLI Capabilities Inventory
Available Features (from codebase inspection)
Engine Configuration (engine: object)
| Option | Description | Usage |
| --- | --- | --- |
| `version` | Pin CLI version (e.g., `"0.0.422"`) | 0% |
| `model` | Override AI model (e.g., `gpt-5`) | 7% |
| `agent` | Custom agent file from `.github/agents/` | 4% |
| `command` | Custom executable path | ~0% |
| `args` | Extra CLI arguments (e.g., `["--viewport-size", "1920x1080"]`) | 1 workflow |
| `env` | Custom env vars passed to Copilot CLI | ~0% |
| `api-target` | GHEC/GHES custom endpoint hostname | 0% |
| `bare` | `--no-custom-instructions` (ignore AGENTS.md) | 2% |
| `max-continuations` | Autopilot with multiple consecutive runs | 2% |
Copilot-Unique Runtime Features
| Feature | Description | Usage |
| --- | --- | --- |
| `--disable-builtin-mcps` | Disable built-in MCP servers (always applied) | ✅ auto |
| `--no-ask-user` | Fully autonomous (no interactive prompts) | ✅ auto (v1.0.19+) |
| `--autopilot --max-autopilot-continues` | Autopilot mode | 2 workflows |
| `--no-custom-instructions` | Ignore AGENTS.md | 2 workflows via `bare:` |
| `--allow-all-paths` | Allow write on all paths (auto when `edit:`) | ✅ auto |
| `--allow-all-tools` | Allow all tools (auto when `bash: ["*"]`) | auto |
| Driver script retry logic | `copilot_driver.cjs` wraps invocation for transient errors | ✅ auto |
Sandbox Modes
| Mode | Description | Usage |
| --- | --- | --- |
| `sandbox: { agent: awf }` | AWF network firewall | 14% (13 workflows) |
| `sandbox: { agent.mounts: [...] }` | Custom AWF bind mounts | 1 workflow |
| `sandbox: { agent.memory: ... }` | AWF container memory limits | 0 workflows |
Available Custom Agent Files (.github/agents/)
- `agentic-workflows.agent.md` — unused
- `adr-writer.agent.md` — unused
- `ci-cleaner.agent.md` — used by 1 workflow (hourly-ci-cleaner.md)
- `contribution-checker.agent.md` — unused
- `create-safe-output-type.agent.md` — unused
- `custom-engine-implementation.agent.md` — unused
- `developer.instructions.md` — unused (via engine.agent)
- `grumpy-reviewer.agent.md` — unused
- `interactive-agent-designer.agent.md` — unused
- `technical-doc-writer.agent.md` — used by 2 workflows
- `w3c-specification-writer.agent.md` — unused
View Usage Statistics
Usage Statistics
| Metric | Count | % |
| --- | --- | --- |
| Total workflows | 190 | — |
| `engine: copilot` (explicit) | 92 | 48% |
| `engine: copilot` (effective, incl. defaults) | ~128 | 67% |
| Using `cache-memory` | 77 | 41% |
| Using `safe-outputs` | 154 | 81% |
| Using `strict: true` | 110 | 58% |
| Using `tracker-id` | 65 | 34% |
| Using `sandbox: awf` | 13 | 7% |
| Using `web-fetch` | 16 | 8% |
| Using `playwright` | 11 | 6% |
| Using `copilot-requests` feature flag | 45 | 24% |
| Using `repo-memory` | 19 | 10% |
| Using `rate-limit` | 3 | 2% |
GitHub MCP Toolset Distribution (136 workflows with github:)
| Toolset Config | Count | Notes |
| --- | --- | --- |
| `toolsets: [default]` | 47 | Most common — may be overly broad |
| `toolsets:` (blank) | 21 | Ambiguous — needs explicit value |
| `toolsets: [default, discussions]` | 10 | Good specificity |
| `toolsets: [default, issues]` | 4 | Good |
| `toolsets: [default, actions]` | 4 | Good |
| `toolsets: [all]` | 3 | Over-permissive |
| `toolsets: [repos, pull_requests]` | 5 | Well scoped |
| Other specific combos | 42 | Good variety |
2️⃣ Feature Usage Matrix
| Feature | Available | Used | Not Used | Adoption |
| --- | --- | --- | --- | --- |
| `max-continuations` | ✅ Copilot-only | 2 | 90 | 2% |
| `engine.version` | ✅ | 0 | 92 | 0% |
| `engine.model` | ✅ | 6 | 86 | 7% |
| `engine.agent` | ✅ Copilot-only | 4 + 13 AWF | 75 | 18% |
| `engine.bare` | ✅ Copilot-only | 2 | 90 | 2% |
| `engine.args` | ✅ | 1 | 91 | 1% |
| `engine.env` | ✅ | 0* | 92 | ~0% |
| `engine.api-target` | ✅ | 0 | 92 | 0% |
| `sandbox.agent.mounts` | ✅ | 1 | 12 | 8% |
| `sandbox.agent.memory` | ✅ | 0 | 13 | 0% |
| `mcp-scripts` | ✅ | 1 | 189 | 1% |
| `rate-limit` | ✅ | 3 | 187 | 2% |
| `toolsets: [all]` (avoid) | — | 3 | — | — |
| `toolsets:` (blank) (fix) | — | 21 | — | — |
3️⃣ Missed Opportunities
🔴 High Priority Opportunities
1. Unspecified Toolsets (21 workflows)
What: 21 workflows have toolsets: without a value. This is ambiguous and potentially grants more permissions than intended. Why it matters: Principle of least privilege — agents should only have access to the GitHub MCP tools they actually use. How to fix: Replace toolsets: with an explicit list like toolsets: [repos] or toolsets: [default].
```yaml
# Before (ambiguous)
tools:
  github:
    toolsets:

# After (explicit)
tools:
  github:
    toolsets: [repos, pull_requests]
```
2. toolsets: [all] in Security-Sensitive Contexts
What: 3 workflows grant full GitHub MCP access: github-mcp-structural-analysis.md, github-mcp-tools-report.md, security-review.md. Why it matters: These grant access to code scanning, secret scanning, and potentially write operations. How to fix: Scope to the actual toolsets needed.
```yaml
# Likely sufficient for most analysis workflows
tools:
  github:
    toolsets: [repos, issues, pull_requests, discussions]
```
3. No Version Pinning (0 of 92 workflows)
What: Every Copilot workflow installs latest — a breaking change in the CLI would affect all 92 workflows simultaneously. Why it matters: Long-running agentic workflows are particularly vulnerable because a mid-run CLI update could cause inconsistent behavior. How to fix: Use the extended engine config to pin critical workflows:
```yaml
engine:
  id: copilot
  version: "1.0.21"  # or current latest
```
At minimum, pin the most resource-intensive or business-critical workflows.
🟡 Medium Priority Opportunities
4. max-continuations Severely Underused (2 of 92)
What: Copilot CLI supports --autopilot mode which enables the agent to continue working across multiple runs. Only smoke-copilot.md (40 continuations) and test-quality-sentinel.md (2) use this. Why it matters: This is a Copilot-exclusive feature unavailable in Claude, Codex, or Gemini. Complex analytical workflows (agent-performance-analyzer.md, agentic-observability-kit.md, daily-repo-chronicle.md) could benefit enormously. Example candidates:
- `agent-performance-analyzer.md` — analyzing many workflow runs could use autopilot
- `daily-repo-chronicle.md` — comprehensive daily report generation
- `architecture-guardian.md` — multi-file architectural review
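In workflow frontmatter, enabling autopilot would look roughly like the sketch below. The `max-continuations` field name comes from the capabilities inventory in this report; the value is illustrative.

```yaml
engine:
  id: copilot
  max-continuations: 5  # illustrative value; surfaces as --autopilot --max-autopilot-continues
```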
5. Custom Agent Files: 8 of 11 Unused (via engine.agent)
What: The repository has 11 custom agent files in .github/agents/, but only ci-cleaner and technical-doc-writer are used via engine.agent. Notable unused agents:
- `contribution-checker.agent.md` — perfect for PR review workflows
- `adr-writer.agent.md` — could power any architectural documentation workflow
- `grumpy-reviewer.agent.md` — could raise code review quality
- `developer.instructions.md` — could standardize all dev-focused workflows
6. bare: true Underused (2 of 92)
What: bare: true passes --no-custom-instructions to Copilot CLI, preventing automatic loading of AGENTS.md and user-level configs. Only smoke-copilot.md and smoke-claude.md use it. Why it matters: For focused/narrow-scope workflows, loading the full AGENTS.md (which is large in this repo) adds context overhead and can confuse the agent with irrelevant instructions. Good candidates: Utility workflows like firewall.md, mcp-inspector.md, artifacts-summary.md.
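A minimal sketch of opting out of custom instructions, assuming `bare` sits alongside `id` in the `engine:` block like the other engine options listed above:

```yaml
engine:
  id: copilot
  bare: true  # passes --no-custom-instructions; AGENTS.md is not loaded
```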
7. AWF Sandbox: memory Limits Never Set (0 of 13 AWF workflows)
What: The AWF sandbox supports memory limits via sandbox.agent.memory but none of the 13 AWF workflows set a limit. Why it matters: Runaway agents can consume all runner memory, causing OOM kills that are hard to debug. (Of the 13, only hourly-ci-cleaner customizes its sandbox at all, via mounts.)
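As a hedged sketch, a memory limit might be declared as follows; the nesting is inferred from the `sandbox.agent.memory` key path used in this report, and the `4g` value is the one recommended in the best-practice guidelines:

```yaml
sandbox:
  agent:
    id: awf        # nesting inferred from the sandbox.agent.memory key path
    memory: "4g"   # caps the AWF container before it can OOM the runner
```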
🟢 Low Priority Opportunities
8. engine.model Selection for Complex Workflows (7% adoption)
What: Only 6 workflows explicitly select a model. The others inherit the org-level GH_AW_MODEL_AGENT_COPILOT variable or the engine default. Why it matters: Different models have different strengths. Complex reasoning tasks (architecture review, security analysis) benefit from more capable models. Good candidates: architecture-guardian.md, security-review.md, code-scanning-fixer.md.
```yaml
engine:
  id: copilot
  model: gpt-5  # Use most capable model for security review
```
9. mcp-scripts Nearly Unused (1 of 190 workflows)
What: The mcp-scripts feature enables custom MCP servers implemented as scripts. Only 1 workflow uses it. Why it matters: Several workflows implement complex GitHub API queries in the prompt — these could be more reliably implemented as MCP scripts. Example: Workflows that construct large jq commands or complex gh API calls could benefit from a dedicated MCP script.
```yaml
mcp-scripts:
  github-queries:
    path: scripts/github-queries.cjs
    description: "Custom GitHub API query helpers"
```
10. engine.env for Runtime Configuration (0 of 92 workflows)
What: The engine.env block passes custom environment variables to the Copilot CLI process. No workflow uses this. Why it matters: Enables dynamic configuration (e.g., debug flags, feature toggles, custom endpoints) without modifying the workflow file. Use cases: Enabling DEBUG=* for specific runs, passing custom API endpoints, toggling experimental features.
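A sketch of the shape this could take; both variable names here are hypothetical examples, not variables the CLI is known to read:

```yaml
engine:
  id: copilot
  env:
    DEBUG: "*"               # hypothetical: verbose logging for a debugging run
    MY_FEATURE_TOGGLE: "on"  # hypothetical: experiment flag read by prompt or tools
```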
4️⃣ Specific Workflow Recommendations
View High-Impact Workflow Recommendations
agent-performance-analyzer.md
Current: No model override, no max-continuations, toolsets: [default, discussions]
Recommended: Add max-continuations: 5 to allow comprehensive multi-run analysis; consider pinning model: gpt-5 for complex data analysis
Expected benefit: More thorough analysis across larger datasets without hitting single-run limits
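Taken together, the recommended frontmatter for this workflow would be roughly (values as suggested above, not a tested configuration):

```yaml
engine:
  id: copilot
  model: gpt-5
  max-continuations: 5
```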
architecture-guardian.md
Current: No model override, broad github: access
Recommended: Add engine.model: gpt-5, scope toolsets to [repos] since it primarily reads code, add bare: true to avoid architectural instruction noise
Expected benefit: Faster execution, more focused architectural analysis
daily-repo-chronicle.md
Current: AWF sandbox, bash: ["*"], edit:
Recommended: Add max-continuations: 3 to enable multi-phase report generation; consider engine.agent: developer for specialized dev context
Expected benefit: More comprehensive reports through autonomous multi-phase execution
Workflows with toolsets: (blank) — audit needed
Files like weekly-issue-summary.md, sub-issue-closer.md, static-analysis-report.md, stale-repo-identifier.md, weekly-blog-post-writer.md, and 16 others should specify explicit toolsets.
security-review.md
Current: toolsets: [all]
Recommended: Scope to [repos, code_security, pull_requests] — security review doesn't need discussions or issue creation tools
Expected benefit: Reduced blast radius if prompt injection occurs
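In frontmatter, the narrowed grant would read as below, assuming the toolset names used elsewhere in this report:

```yaml
tools:
  github:
    toolsets: [repos, code_security, pull_requests]
```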
5️⃣ Trends & Historical Comparison
View Trend Analysis (3-Day History)
| Feature | 2026-04-11 | 2026-04-12 | 2026-04-13 | Trend |
| --- | --- | --- | --- | --- |
| Total workflows | ~183 | 186 | 190 | ↑ growing |
| Copilot workflows | ~86 | 89 | 92 | ↑ growing |
| `max-continuations` | 2% | 2% | 2% | → stagnant |
| `engine.model` override | 8% | 7% | 7% | → stagnant |
| `engine.agent` (custom) | 6% | 3% | 4% | ↑ slight recovery |
| AWF sandbox | 18% | 16% | 14% | ↓ declining % |
| `web-fetch` | 19% | 22% | 17% | ↓ declining % |
| `playwright` | 13% | 17% | 12% | ↓ declining % |
| `copilot-requests` flag | — | 48% | 49% | → stable |
| `strict: true` | — | 66% | 58% | ↓ declining |
Notable observations:
- The AWF sandbox adoption rate is declining (18% → 16% → 14%) as new workflows are added without it. This suggests the "secure by default" approach isn't being applied consistently to new workflows.
- `strict: true` is declining (66% → 58%) — new workflows may not be following the strict mode recommendation.
- Playwright and web-fetch percentages are dropping despite absolute counts being similar — new workflows don't include browser/web capabilities by default.
- The core engine features (`max-continuations`, `bare`, `mcp-scripts`) show no improvement across 3 days, suggesting these need better documentation or template adoption.
6️⃣ Best Practice Guidelines
Based on this research, here are recommended best practices for new Copilot workflows:
1. Always specify toolsets explicitly: Never leave `toolsets:` blank. Choose the minimal set of GitHub MCP toolsets needed (`repos`, `issues`, `pull_requests`, `discussions`, `actions`, `code_security`).
2. Pin version for critical workflows: Add `version: "latest-known-good"` to workflows that run on critical schedules or that are expensive to re-run.
3. Use `max-continuations` for complex analysis: Any workflow that needs to "do a lot of work" (analyze many files, generate comprehensive reports) is a candidate for `max-continuations: 3-10`.
4. Leverage custom agent files: The 8 unused agent files in `.github/agents/` represent significant invested work. Match them to appropriate workflows via `engine.agent`.
5. Apply `bare: true` to focused utilities: Workflows that don't need development context (e.g., data analysis, reporting, simple monitoring) should use `bare: true` for faster startup and cleaner execution.
6. Enable AWF sandbox for new workflows with network access: The declining AWF adoption rate suggests new workflows aren't defaulting to sandbox mode. Workflows that use `web-fetch` or `playwright` particularly benefit.
7. Set memory limits for AWF containers: Add `sandbox.agent.memory: "4g"` to prevent runaway agent processes from consuming all runner memory.
7️⃣ Action Items
Immediate (this week):
- Audit the 21 workflows with a blank `toolsets:` — add explicit toolset lists
- Replace `toolsets: [all]` in `security-review.md`, `github-mcp-structural-analysis.md`, `github-mcp-tools-report.md`
- Add `strict: true` to the ~14 workflows that don't have it
Short-term (this month):
- Pilot `max-continuations: 3-5` in 5-10 complex analysis workflows
- Wire up the unused agent files `contribution-checker`, `adr-writer`, `grumpy-reviewer`
- Apply `bare: true` to 15-20 narrow-scope utility workflows
Long-term (this quarter):
- Standardize new workflows on `strict: true`, explicit `toolsets:`, and `engine.version`
- Adopt `mcp-scripts` for the top 3 workflows with complex GitHub API queries
- Set `sandbox.agent.memory` limits as a default for all AWF workflows
- Establish `engine.env` patterns for consistent debug/observability configuration
View Research Methodology & References
Research Methodology
Tools used: ripgrep, bash analysis, Go source inspection
Files analyzed:
- `pkg/workflow/copilot_engine.go` — engine capabilities
- `pkg/workflow/copilot_engine_execution.go` — CLI flags and arguments
- `pkg/workflow/copilot_engine_tools.go` — tool permissions
- `pkg/workflow/copilot_mcp.go` — MCP configuration
- `pkg/workflow/copilot_engine_installation.go` — sandbox modes
- `pkg/constants/constants.go` — feature flags and constants
- `docs/src/content/docs/reference/engines.md` — documentation
- `.github/workflows/*.md` files
Approach: Pattern counting via grep/bash, cross-referenced with Go source code to validate feature availability.
References:
- `pkg/workflow/copilot_engine*.go`
- `docs/src/content/docs/reference/engines.md`
- `.github/agents/*.agent.md`