## Target Workflow

**File**: `.github/workflows/daily-security-red-team.md`
**Engine**: `claude`
**7-day token usage**: MCP logs tool timed out — estimate based on file size proxy (27,748 bytes; second-largest workflow with ≥3 LLM phases)

## Why This Workflow
`daily-security-red-team` is the largest `claude`-engine workflow with multiple distinct LLM-driven phases (8 `##`-level phases). It runs daily with a 60-minute timeout, making per-run token cost a meaningful target. Phases 6 and 7 — forensics git-blame extraction and template-based fix-task generation — are purely mechanical tasks that haiku handles reliably, and Phase 6 also has parallelism potential since it is independent of Phase 5.
## Optimization — Inline Sub-Agents

### LLM Expert Reasoning
- **Phase 6 (Forensics Analysis)** scores 8/10: Independence 2/3, Haiku-adequacy 2/3, Parallelism 2/2, Size 2/2. It runs `git blame` on each finding and extracts commit SHA, author, date, and message — pure extraction with no security judgment required. It is independent of Phase 5 and can run concurrently with Phase 7.
- **Phase 7 (Generate Agentic Fix Tasks)** scores 5/10: Haiku-adequacy 2/3. It maps 6 well-known finding types to predefined remediation checklists via a `case` statement — a lookup/template task with no reasoning needed. Haiku handles this reliably.
- No common tool-call prefix was found qualifying for High or Moderate scoring (phases share only trivial bash boilerplate: `#!/bin/bash`, `echo`; each phase's substantive opening tool calls are distinct).
- Heuristics that fired: "extracting specific fields from structured text" (Phase 6), "converting data from one format to another" and "listing occurrences of a pattern" (Phase 7), "checking whether something meets a stated criterion" (Phase 7 `case` logic).
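The two scores above are sums over four axes (independence and haiku-adequacy out of 3, parallelism and size out of 2 — this weighting is inferred from the scores quoted, not stated elsewhere). A minimal sketch of the arithmetic:

```shell
# Inferred rubric: a phase's delegation score is the sum of four axis values
# (independence 0-3, haiku-adequacy 0-3, parallelism 0-2, size 0-2).
score() { echo $(($1 + $2 + $3 + $4)); }

score 2 2 2 2   # Phase 6 -> 8
score 1 2 1 1   # Phase 7 -> 5
```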
### Proposed Sub-Agents
#### 1. `forensics-extractor` (`claude-haiku-4.5`)

**Extracted task**: Run `git blame` on each finding and extract commit origin metadata.
**Why haiku**: Pure extractive task — git blame parsing + field extraction; no security reasoning.
**Score**: 8/10 (independence: 2, haiku-adequacy: 2, parallelism: 2, size: 2)
**Estimated savings**: ~6–10% main-model token reduction (Phase 6 is ~6% of prompt body)
<details>
<summary>Agent definition (copy-paste ready)</summary>

```markdown
## agent: `forensics-extractor`
---
description: Run git blame on findings to extract commit origin metadata
model: claude-haiku-4.5
---
You receive a list of security findings as TYPE:FILE:LINE strings, one per line.
For each finding where LINE is not "0" and the FILE exists:
1. Run: git blame -L "$LINE,$LINE" --porcelain "$FILE" 2>/dev/null
2. Extract the 40-char commit SHA from the first line; shorten to 7 chars
3. Run: git log -1 --format="%an|%ai|%s" "$COMMIT_SHA" to get author, date, message
Output exactly one JSON object per line (no array wrapper):
{"finding":"TYPE:FILE:LINE","commit":"abc1234","author":"Name","date":"2026-01-01","message":"msg"}
If git blame fails, LINE is "0", or FILE does not exist, use "unknown" for all git fields.
Emit no other text outside the JSON lines.
```
</details>
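The three extraction steps can be exercised end-to-end against a throwaway repository. This is only a sketch: the temp repo, `app.sh`, and the single hard-coded finding are illustrative, not part of the workflow.

```shell
# Sketch of the agent's extraction steps against a throwaway repo.
# The repo, app.sh, and the single finding below are illustrative only.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
echo 'eval "$payload"' > app.sh
git add app.sh
git -c user.name=Demo -c user.email=demo@example.com commit -qm 'add app.sh'

finding='DYNAMIC_EXEC:app.sh:1'
file=${finding#*:}; file=${file%:*}   # middle field -> app.sh
line=${finding##*:}                   # last field   -> 1

# Steps 1+2: 40-char SHA from the first porcelain line, shortened to 7 chars
sha=$(git blame -L "$line,$line" --porcelain "$file" 2>/dev/null | head -n1 | cut -d' ' -f1)
short=${sha:0:7}

# Step 3: author|date|message for that commit
meta=$(git log -1 --format='%an|%ai|%s' "$sha")
IFS='|' read -r author date msg <<<"$meta"

printf '{"finding":"%s","commit":"%s","author":"%s","date":"%s","message":"%s"}\n' \
  "$finding" "$short" "$author" "$date" "$msg"
```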
**Invocation change in main prompt:**

Before:

## Phase 6: Forensics Analysis

```bash
#!/bin/bash
if [ ${#FINDINGS[@]} -gt 0 ]; then
  echo "🔍 Performing forensics analysis..."
  FORENSICS_DATA=()
  for finding in "${FINDINGS[@]}"; do
    # ...git blame loop...
  done
fi
```

After:

## Phase 6: Forensics Analysis

Pass the FINDINGS array (one entry per line) to the `forensics-extractor` agent.
Collect its JSON-per-line output into FORENSICS_DATA for Phase 7.
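Folding the agent's JSON-per-line reply into the FORENSICS_DATA array could look like the sketch below; `agent_reply` is a stand-in variable for however the engine surfaces the sub-agent's output, and the two sample records are fabricated for illustration.

```shell
# Sketch: collect the agent's JSON-per-line reply into FORENSICS_DATA.
# agent_reply stands in for the sub-agent's captured output.
agent_reply='{"finding":"DYNAMIC_EXEC:app.sh:1","commit":"abc1234","author":"Demo","date":"2026-01-01","message":"add app.sh"}
{"finding":"OBFUSCATION:lib.sh:9","commit":"unknown","author":"unknown","date":"unknown","message":"unknown"}'

FORENSICS_DATA=()
while IFS= read -r row; do
  [ -n "$row" ] && FORENSICS_DATA+=("$row")
done <<<"$agent_reply"

echo "${#FORENSICS_DATA[@]} forensics records"   # -> 2 forensics records
```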
---
#### 2. `fix-task-generator` (`claude-haiku-4.5`)
**Extracted task**: Generate a markdown remediation checklist from typed findings and forensics data.
**Why haiku**: Template/lookup task — maps 6 known finding types to predefined remediation steps; no security reasoning.
**Score**: 5/10 (independence: 1, haiku-adequacy: 2, parallelism: 1, size: 1)
**Estimated savings**: ~8–12% main-model token reduction (Phase 7 is ~12% of prompt body)
<details>
<summary>Agent definition (copy-paste ready)</summary>
```markdown
## agent: `fix-task-generator`
---
description: Generate markdown remediation checklist from classified security findings
model: claude-haiku-4.5
---
You receive security finding records as JSON objects, one per line:
{"finding":"TYPE:FILE:LINE","commit":"...","author":"...","date":"...","message":"..."}
For each record, produce one markdown checklist item based on TYPE:
- SECRET_EXFIL: review and remove exfiltration; verify call legitimacy; rotate exposed credentials
- DYNAMIC_EXEC: audit dynamic execution; replace eval/exec with safer alternatives; sanitize inputs
- OBFUSCATION: decode and investigate; if legitimate add a comment; if malicious remove and investigate
- DANGEROUS_OPS: validate all file paths; use safe operation alternatives; add permission checks
- SUSPICIOUS_DOMAIN: verify domain legitimacy; remove if suspicious; replace with internal service
- SUSPICIOUS_KEYWORDS: determine intent; remove if malicious or rename if legitimate; review history
- Other: investigate the pattern; review for security risk; remediate if malicious; document findings
Format:
- [ ] **Task N**: [Action] in `FILE:LINE`
- [Step 1]
- [Step 2]
- [Step 3]
- Forensics: introduced in commit `COMMIT` by AUTHOR on DATE ("MESSAGE") [omit if commit == "unknown"]
Output only markdown. No preamble, no code fences.
```
</details>

**Invocation change in main prompt:**

Before:
## Phase 7: Generate Agentic Fix Tasks

```bash
if [ ${#FINDINGS[@]} -gt 0 ]; then
  for i in "${!FORENSICS_DATA[@]}"; do
    # ...large case statement...
  done
fi
```

After:

## Phase 7: Generate Agentic Fix Tasks

Pass the FORENSICS_DATA JSON lines (from Phase 6) to the `fix-task-generator` agent.
Use its markdown output as FIX_TASKS in Phase 8.
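The `case` statement being replaced is, in essence, a static lookup. A minimal sketch of that mapping (one abbreviated remediation step per type, taken from the checklists above; the sample finding is illustrative):

```shell
# Sketch of the lookup the case statement performs: finding TYPE -> first
# remediation step (abbreviated from the checklists above).
remediation_for() {
  case "$1" in
    SECRET_EXFIL)        echo "review and remove exfiltration" ;;
    DYNAMIC_EXEC)        echo "audit dynamic execution" ;;
    OBFUSCATION)         echo "decode and investigate" ;;
    DANGEROUS_OPS)       echo "validate all file paths" ;;
    SUSPICIOUS_DOMAIN)   echo "verify domain legitimacy" ;;
    SUSPICIOUS_KEYWORDS) echo "determine intent" ;;
    *)                   echo "investigate the pattern" ;;
  esac
}

finding='DYNAMIC_EXEC:app.sh:17'      # illustrative sample finding
type=${finding%%:*}
loc=${finding#*:}
printf -- '- [ ] **Task 1**: %s in `%s`\n' "$(remediation_for "$type")" "$loc"
# -> - [ ] **Task 1**: audit dynamic execution in `app.sh:17`
```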
---
### Frontmatter Change Required
Add to frontmatter:
```yaml
features:
  inline-agents: true
```

## Estimated Impact
| Metric | Before | After (estimated) |
|---|---|---|
| Main-model phases | 8 sequential | 6 main + 2 delegated |
| Phase 6 context load | ~6% of prompt body in main model | Delegated to haiku |
| Phase 7 context load | ~12% of prompt body in main model | Delegated to haiku |
| Combined main-model savings | — | ~15–20% per run |
| Parallelism opportunity | None | Phase 6 + Phase 5 can overlap |
## Implementation Steps
1. Add `features: inline-agents: true` to the frontmatter of `.github/workflows/daily-security-red-team.md`
2. Add the `forensics-extractor` agent block at the bottom of the file, after all workflow content
3. Add the `fix-task-generator` agent block below `forensics-extractor`
4. Replace the Phase 6 bash block with the `forensics-extractor` invocation shown above
5. Replace the Phase 7 bash block with the `fix-task-generator` invocation shown above
6. Compile: `gh aw compile daily-security-red-team`
7. Test: `gh workflow run daily-security-red-team.yml`
Generated by Daily Sub-Agent Optimizer