Conversation
- Port 5 skill items from v4.0.3:
  - Utilities/AudioEditor/
  - Utilities/Delegation/
  - Research/MigrationNotes.md
  - Research/Templates/
  - Agents/ClaudeResearcherContext.md
- Port 9 PAI/ flat docs from v4.0.3 (CLI.md, CLIFIRSTARCHITECTURE.md, etc.)
- Port 3 PAI/ subdirs (ACTIONS/, FLOWS/, PIPELINES/)
- Create BuildOpenCode.ts from BuildCLAUDE.ts
- Update Utilities/SKILL.md with AudioEditor + Delegation
- Update MINIMAL_BOOTSTRAP.md (remove USMetrics, fix Telos path, add new skills)
- Replace all .claude/ references with .opencode/

Note: USMetrics was already removed from repo (noted in PR).
Note: Telos was already flattened (verified).
⚠️ Warning: Rate limit exceeded

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered. We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info

⚙️ Run configuration — Path: .coderabbit.yaml · Review profile: CHILL · Plan: Pro

📒 Files selected for processing (60)
📝 Walkthrough

This PR adds an Actions/Pipelines framework (runner.ts + runner.v2 + pipeline-runner), two example actions (A_EXAMPLE_SUMMARIZE, A_EXAMPLE_FORMAT), a PAI CLI, type definitions, extensive documentation, and new AudioEditor and Delegation skills with their associated tools and workflows. (Short version, factual.)

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant CLI as "PAI CLI"
    participant Pipeline as "Pipeline Runner"
    participant Runner as "Action Runner (runner.v2)"
    participant Action as "Action Module"
    participant LLM as "LLM Service"
    Note over Pipeline,Runner: Execution of P_EXAMPLE_SUMMARIZE_AND_FORMAT
    User->>CLI: run pipeline P_EXAMPLE_SUMMARIZE_AND_FORMAT
    CLI->>Pipeline: runPipeline(name,input)
    Pipeline->>Pipeline: loadPipeline(definition)
    Pipeline->>Runner: runAction("A_EXAMPLE_SUMMARIZE", input)
    Runner->>Action: loadManifest & loadImplementation
    Runner->>Action: execute(input, ctx)
    Action->>LLM: llm(prompt, {tier:"fast"})
    LLM->>Action: summary
    Action->>Runner: {summary, word_count}
    Runner->>Pipeline: output
    Pipeline->>Runner: runAction("A_EXAMPLE_FORMAT", output)
    Runner->>Action: execute(output, ctx)
    Action->>Action: format to markdown
    Action->>Runner: {formatted, format}
    Runner->>Pipeline: final output
    Pipeline->>CLI: result JSON
    CLI->>User: print JSON
```

```mermaid
sequenceDiagram
    actor User
    participant CLI as "PAI CLI"
    participant Runner as "Action Runner (runner.v2)"
    participant Cloud as "Cloudflare Worker"
    participant Action as "Action Module"
    participant LLM as "LLM Service"
    User->>CLI: run action --mode cloud|local
    CLI->>Runner: runAction(name,input,{mode})
    alt mode == local
        Runner->>Action: loadImplementation + execute(input,ctx)
        alt action requires llm
            Action->>LLM: llm(...)
            LLM->>Action: response
        end
        Action->>Runner: output
        Runner->>CLI: ActionResult
    else mode == cloud
        Runner->>Cloud: POST /run-action (input)
        Cloud->>Runner: proxy response (exec server-side)
        Cloud->>Action: execute(input,ctx)
        Action->>LLM: llm(...)
        LLM->>Action: response
        Action->>Cloud: output
        Cloud->>Runner: response
        Runner->>CLI: ActionResult
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 11
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
🟡 Minor comments (17)
.opencode/PAI/Tools/BuildOpenCode.ts-18-18 (1)
18-18: ⚠️ Potential issue | 🟡 Minor

`process.env.HOME` may be undefined on some systems. On Windows, `HOME` is often not set (`USERPROFILE` is used instead). The non-null assertion (`!`) suppresses the TypeScript error but yields an invalid path when `HOME` is undefined.

🛡️ Suggested safeguard

```diff
-const PAI_DIR = join(process.env.HOME!, ".opencode");
+const PAI_DIR = join(process.env.HOME || process.env.USERPROFILE || "~", ".opencode");
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/PAI/Tools/BuildOpenCode.ts at line 18, the assignment of PAI_DIR uses process.env.HOME!, which can be undefined on some systems; replace it with a robust home-directory lookup (e.g. use Node's os.homedir() or a fallback chain like process.env.HOME || process.env.USERPROFILE || os.homedir()) and remove the non-null assertion; update the declaration of PAI_DIR (and any import of join) so it uses the resolved home directory value instead of process.env.HOME!, referencing the symbol PAI_DIR and the use of join/process.env.HOME in BuildOpenCode.ts.

.opencode/skills/Research/Templates/MarketResearch.md-1-3 (1)
1-3: ⚠️ Potential issue | 🟡 Minor

Add a USE WHEN trigger for this template. Under `.opencode/skills/**`, this file is missing a short selection section that makes clear when `MarketResearch.md` should be used instead of the other research templates. Without these signals, template selection in the skill flow is inconsistent.

As per coding guidelines: ".opencode/skills/**: Follow PAI Skills format and USE WHEN triggers in .opencode/skills/** files"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/skills/Research/Templates/MarketResearch.md around lines 1 - 3, add a concise "USE WHEN" trigger section near the top of MarketResearch.md that follows the PAI Skills format used in .opencode/skills/**: a short heading "USE WHEN" plus 2–4 bullet criteria that clearly signal when to pick this template (e.g., prompts about market sizing, competitor analysis, go-to-market strategy, TAM/SAM/SOM, customer segments), and include any required metadata tag or keyword list used by the skill selector so the skill flow can unambiguously choose MarketResearch.md over other Research templates. Ensure the section is brief, uses the same casing/format as other .opencode/skills/** templates, and references "MarketResearch.md" as the document being updated.

.opencode/skills/Research/Templates/MarketResearch.md-20-44 (1)
20-44: ⚠️ Potential issue | 🟡 Minor

Evaluation criteria for `Trends` and `Investors` are missing. The template defines six entity categories, but the CRITICAL/HIGH/MEDIUM/LOW rubric covers only four of them, so prioritization for `Trends` and `Investors` is not consistently specified.

Suggestion

```diff
 ## Evaluation Criteria (What Makes Something CRITICAL?)
 @@
 **Technologies:**
 - CRITICAL: Foundational tech that enables the entire market
 - HIGH: Widely adopted frameworks/standards
 - MEDIUM: Emerging tech with growing adoption
 - LOW: Experimental, limited adoption
+
+**Trends:**
+- CRITICAL: Market-wide shifts with measurable business impact
+- HIGH: Clear momentum with adoption or funding signals
+- MEDIUM: Early signals with partial validation
+- LOW: Speculative or weakly evidenced patterns
+
+**Investors:**
+- CRITICAL: Firms repeatedly backing category leaders in this market
+- HIGH: Active investors with strong recent deal flow
+- MEDIUM: Occasional participants with limited specialization
+- LOW: Peripheral or inactive investors in this space
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/skills/Research/Templates/MarketResearch.md around lines 20 - 44, the template is missing CRITICAL/HIGH/MEDIUM/LOW evaluation bullets for the Trends and Investors entity types; add two new sections titled "Trends:" and "Investors:" in the same format as "Companies:", "Products:", "People:", and "Technologies:" and provide four bullets each (CRITICAL, HIGH, MEDIUM, LOW) that mirror the level-of-impact language used elsewhere (e.g., for Trends: CRITICAL = market-defining macro trends, HIGH = rapidly accelerating trends with broad adoption, MEDIUM = emerging trends with niche momentum, LOW = speculative/short-lived buzz; for Investors: CRITICAL = top-tier strategic funds/influential angels, HIGH = well-networked VCs with sector focus, MEDIUM = active but lower-profile investors, LOW = sporadic/new/inactive investors).

.opencode/skills/Agents/ClaudeResearcherContext.md-95-95 (1)
95-95: ⚠️ Potential issue | 🟡 Minor

Missing language identifier on fenced code block. The fenced code block should have a language identifier (e.g. `markdown` or `text`) to conform to Markdown standards.

📝 Suggested fix

````diff
-```
+```markdown
 ## Research Report
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/skills/Agents/ClaudeResearcherContext.md at line 95, the fenced code block opening currently uses plain backticks with no language identifier; update the opening fence to include a language (e.g., change the opening fence to a `markdown` fence) so the block that contains the "## Research Report" heading is properly marked as Markdown and conforms to Markdown standards.

.opencode/skills/Research/MigrationNotes.md-92-97 (1)
92-97: ⚠️ Potential issue | 🟡 Minor

Contradiction in the documentation. Line 45 documents that `Conduct.md` was removed ("Conduct.md and PerplexityResearch.md were later removed"), but line 94 still references "(5 total with conduct.md)". This inconsistency should be corrected.

📝 Suggested correction

```diff
-✅ 4 new commands in Workflows/ (5 total with conduct.md)
+✅ 4 new commands in Workflows/
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/skills/Research/MigrationNotes.md around lines 92 - 97, the "Success Criteria Met" section is inconsistent about Conduct.md: locate the "Success Criteria Met" block (the heading and the bullet "(5 total with conduct.md)") and reconcile it with the earlier statement that "Conduct.md and PerplexityResearch.md were later removed"; either remove the parenthetical "(5 total with conduct.md)" or update it to reflect the current count and removal, and ensure mentions of "Conduct.md" and "PerplexityResearch.md" in the document are consistent with the removal statement.

.opencode/skills/Utilities/Delegation/SKILL.md-60-78 (1)
60-78: ⚠️ Potential issue | 🟡 Minor

Fix section numbering: duplicate number "3". The sections "Background Agents" (line 60) and "Foreground Agents" (line 72) are both numbered "3". The foreground section should be "4".

🔧 Suggested correction

```diff
-### 3. Foreground Agents
+### 4. Foreground Agents
```

And the subsequent sections accordingly:
- "Custom Agents" → 5
- "Agent Teams" → 6
- "Parallel Task Dispatch" → 7
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/skills/Utilities/Delegation/SKILL.md around lines 60 - 78, duplicate section numbering: both "Background Agents" and "Foreground Agents" are labeled "3"; change "Foreground Agents" to "4" and then increment subsequent section numbers accordingly ("Custom Agents" → 5, "Agent Teams" → 6, "Parallel Task Dispatch" → 7). Update the Markdown headings for "Foreground Agents", "Custom Agents", "Agent Teams", and "Parallel Task Dispatch" so their numeric prefixes match the new sequence, ensuring consistency for any references to those headings.

.opencode/PAI/FLOWS.md-342-344 (1)
342-344: ⚠️ Potential issue | 🟡 Minor

Incorrect cost-reduction advice. The text says "Reduce `intervalMinutes`", but to lower costs the interval should be increased (not reduced). A higher `intervalMinutes` value means less frequent executions.

✏️ Suggested correction

```diff
 ### High costs
-Reduce `intervalMinutes` in `flow-index.json` and redeploy the Worker with updated cron.
+Increase `intervalMinutes` in `flow-index.json` and redeploy the Worker with updated cron.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/PAI/FLOWS.md around lines 342 - 344, the instruction in the "High costs" section is wrong; instead of "Reduce `intervalMinutes` in `flow-index.json`" it should say that `intervalMinutes` must be increased to lower costs; please update the text under the "High costs" header in .opencode/PAI/FLOWS.md so it says "Increase `intervalMinutes` in `flow-index.json` and redeploy the Worker with updated cron" (or an equivalent wording), and keep the references to `flow-index.json` and `intervalMinutes`.

.opencode/skills/Utilities/AudioEditor/Tools/Edit.ts-171-175 (1)
171-175: ⚠️ Potential issue | 🟡 Minor

Missing error handling for the ffprobe output. If `ffprobe` fails or returns invalid JSON, the script will crash with an unhandled error. Since the main process (`ffmpegResult`) is already checked, the verification should be equally robust.

🛡️ Suggested safeguard

```diff
 // Verify output
-const outProbe = await $`ffprobe -v quiet -print_format json -show_format ${outFile}`.quiet();
-const outData = JSON.parse(outProbe.text());
-const outDuration = parseFloat(outData.format.duration);
-const outSize = Math.round(parseInt(outData.format.size) / 1024 / 1024);
+const outProbe = await $`ffprobe -v quiet -print_format json -show_format ${outFile}`.quiet().nothrow();
+if (outProbe.exitCode !== 0) {
+  console.log(`\n=== Edit Complete (verification skipped) ===`);
+  console.log(`Output: ${outFile}`);
+  process.exit(0);
+}
+const outData = JSON.parse(outProbe.text());
+const outDuration = parseFloat(outData.format?.duration ?? "0");
+const outSize = Math.round(parseInt(outData.format?.size ?? "0") / 1024 / 1024);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/skills/Utilities/AudioEditor/Tools/Edit.ts around lines 171 - 175, the ffprobe verification code (outProbe, outData, outDuration, outSize) lacks error handling and will crash if ffprobe fails or returns invalid JSON; wrap the ffprobe invocation and JSON.parse in a try/catch, check the child's execution result (e.g., outProbe.exitCode or outProbe.stderr/text) before parsing, validate that outData.format exists with numeric duration and size fields (use Number.isFinite on parsed values), and throw or return a clear, descriptive error if validation fails so subsequent logic doesn't operate on undefined values.

.opencode/skills/Utilities/AudioEditor/Tools/Edit.ts-64-64 (1)
64-64: ⚠️ Potential issue | 🟡 Minor

Misleading comment on `sampleRate`. The comment "default, will be read from stream" suggests that the value is later read from the audio stream, but this happens nowhere in the code. The value stays hardcoded at 48000.

✏️ Suggested correction

```diff
-const sampleRate = 48000; // default, will be read from stream
+const sampleRate = 48000; // Standard sample rate for output
```

Alternatively, the sample rate could be extracted from the ffprobe output:
```typescript
const sampleRate = probeData.streams?.[0]?.sample_rate ? parseInt(probeData.streams[0].sample_rate) : 48000;
```
Verify each finding against the current code and only fix it if needed. In @.opencode/skills/Utilities/AudioEditor/Tools/Edit.ts at line 64, the comment on sampleRate is misleading: either update the comment so it does not claim the value is read from the stream, or actually extract the sample rate from the ffprobe/probeData output; concretely: replace the hardcoded sampleRate = 48000 with logic that checks probeData.streams?.[0]?.sample_rate and parses it to an integer (fallback to 48000), or if you keep the hardcode simply change the comment to state it's a fixed default and not read from the stream; refer to the sampleRate variable and probeData.streams[0].sample_rate when making the change.

.opencode/PAI/ACTIONS/lib/runner.v2.ts-31-35 (1)
31-35: ⚠️ Potential issue | 🟡 Minor

Hardcoded path to Inference.ts. The path `join(process.env.HOME!, ".opencode/PAI/Tools/Inference.ts")` is hardcoded and uses the non-null assertion operator. If the HOME variable is missing or Inference.ts does not exist, this produces uncontrolled errors.

🛡️ More robust path resolution

```diff
 async function createLocalLLM(): Promise<ActionCapabilities["llm"]> {
+  const home = process.env.HOME;
+  if (!home) {
+    throw new Error("HOME environment variable not set");
+  }
+  const inferencePath = join(home, ".opencode/PAI/Tools/Inference.ts");
   const inferenceModule = await import(
-    join(process.env.HOME!, ".opencode/PAI/Tools/Inference.ts")
+    inferencePath
   );
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/PAI/ACTIONS/lib/runner.v2.ts around lines 31 - 35, the createLocalLLM function currently imports Inference.ts via a hardcoded path using process.env.HOME!, which can throw if HOME is missing or the file is absent; change it to resolve the module more robustly (e.g. read a configurable env var like PAI_TOOLS_DIR with a sensible default, or use require.resolve/attempt multiple candidate locations) and remove the non-null assertion; wrap the dynamic import in a try/catch that validates the resolved path exists and throws a clear, recoverable error if not found, and then extract the inference export as before (referencing createLocalLLM and the inference symbol) so callers get deterministic error messages instead of uncontrolled exceptions.

.opencode/skills/Utilities/AudioEditor/Tools/Analyze.ts-265-273 (1)
265-273: ⚠️ Potential issue | 🟡 Minor

Fragile JSON extraction from the LLM response. The regex-based JSON extraction `text.match(/\[[\s\S]*\]/)` can fail on nested arrays or JSON containing square brackets inside strings.

🛡️ More robust JSON extraction

````diff
 // Parse JSON from response (handle potential markdown wrapping)
 let edits: EditDecision[];
 try {
-  const jsonMatch = text.match(/\[[\s\S]*\]/);
-  edits = jsonMatch ? JSON.parse(jsonMatch[0]) : [];
+  // Try direct parse first
+  edits = JSON.parse(text);
+} catch {
+  // Try extracting from markdown code block
+  try {
+    const codeBlockMatch = text.match(/```(?:json)?\s*([\s\S]*?)```/);
+    if (codeBlockMatch) {
+      edits = JSON.parse(codeBlockMatch[1].trim());
+    } else {
+      const jsonMatch = text.match(/\[[\s\S]*\]/);
+      edits = jsonMatch ? JSON.parse(jsonMatch[0]) : [];
+    }
+  } catch {
+    console.error(` parse error`);
+    continue;
+  }
 }
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/skills/Utilities/AudioEditor/Tools/Analyze.ts around lines 265 - 273, the current JSON extraction in Analyze.ts uses a fragile regex text.match(/\[[\s\S]*\]/) to populate edits (EditDecision[]), which breaks for nested arrays or brackets in strings; update the parsing in the try block used to set edits to first attempt extracting a JSON code block (match triple-backtick blocks with optional language and capture the inner text), parse that if present (trim before parsing), otherwise fall back to a safer extraction (e.g., the existing array-match), and ensure JSON.parse errors are caught and logged with the actual error; adjust the logic around the edits variable assignment and the surrounding try/catch so parse failures log error details instead of the bare " parse error" message.

.opencode/PAI/ACTIONS/pai.ts-144-148 (1)
144-148: ⚠️ Potential issue | 🟡 Minor

JSON.parse without error handling. `JSON.parse(stdinContent)` and `JSON.parse(options.input)` can throw an exception on invalid JSON that is never caught. This leads to unclear error messages for the user.

🛡️ Add error handling

```diff
 if (stdinContent) {
-  input = JSON.parse(stdinContent);
+  try {
+    input = JSON.parse(stdinContent);
+  } catch (e) {
+    console.error("Error: Invalid JSON from stdin");
+    process.exit(1);
+  }
 } else if (options.input) {
-  input = JSON.parse(options.input);
+  try {
+    input = JSON.parse(options.input);
+  } catch (e) {
+    console.error("Error: Invalid JSON in --input");
+    process.exit(1);
+  }
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/PAI/ACTIONS/pai.ts around lines 144 - 148, the JSON.parse calls for stdinContent and options.input (in the block that checks stdinContent, options.input, and extra) throw uncontrolled exceptions on invalid JSON; to fix this, wrap the parsers in try/catch blocks (separately for stdinContent and options.input), catch SyntaxError, use processLogger.error or a similar logging function, print a clear, user-friendly error message including brief details of the parse error, and then abort cleanly or set input to a defined fallback; reference the variables stdinContent, options.input, and input as well as the surrounding decision block.

.opencode/skills/Utilities/AudioEditor/Tools/Transcribe.ts-105-110 (1)
105-110: ⚠️ Potential issue | 🟡 Minor

Missing error handling after transcription. After both Whisper variants, the output file is read on line 106 without checking that it exists. If both variants fail but the process is not terminated, an error could occur.

🛡️ Suggested validation

```diff
 // Validate output
+if (!existsSync(outFile)) {
+  console.error("No transcript file was produced.");
+  process.exit(1);
+}
 const transcript = JSON.parse(await Bun.file(outFile).text());
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/skills/Utilities/AudioEditor/Tools/Transcribe.ts around lines 105 - 110, the code reads and parses outFile without verifying it exists; wrap the read/parse in a try/catch and validate the result before using it: attempt to read JSON from outFile (JSON.parse(await Bun.file(outFile).text())) inside a try block, on error or if transcript is falsy or has no chunks/text log a clear error including outFile and either throw or return early, and compute chunkCount/textLen only after confirming transcript and transcript.chunks/transcript.text are present; reference the outFile variable and the transcript/chunkCount/textLen usage in Transcribe.ts when adding this guard.

.opencode/PAI/ACTIONS/lib/pipeline-runner.ts-110-114 (1)
110-114: ⚠️ Potential issue | 🟡 Minor

CLI argument parsing without validation. The argument parsing assumes that arguments always come in pairs (`key value`). With an odd number of arguments, `args[i + 1]` becomes `undefined`, which leads to `JSON.parse(undefined)` — a TypeError.

🐛 Add validation

```diff
 for (let i = 2; i < args.length; i += 2) {
   const key = args[i].replace(/^--/, "");
   let value: unknown = args[i + 1];
+  if (value === undefined) {
+    console.error(`Missing value for argument: ${args[i]}`);
+    process.exit(1);
+  }
   try {
     value = JSON.parse(value as string);
   } catch {}
   input[key] = value;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/PAI/ACTIONS/lib/pipeline-runner.ts around lines 110 - 114, the loop that processes CLI arguments in pipeline-runner.ts (starting at for (let i = 2; i < args.length; i += 2)) assumes a partner value always exists for each key; before accessing args[i + 1], check that it is defined (or that args.length - i >= 2) and handle missing values explicitly (e.g. throw an error or skip the key), and only attempt JSON.parse when value !== undefined; reference the variables/terms args, input, and the for loop when adding the validation.

docs/epic/TODO-v3.0.md-170-179 (1)
170-179: ⚠️ Potential issue | 🟡 Minor

Blank line inside a blockquote. Static analysis reports a blank line inside a blockquote (MD028). The two `> [!NOTE]` blocks are separated by a blank line, but both use `>` at the start.

📝 Correction

```diff
 > [!NOTE]
 > Already present in `.opencode/PAI/` (no action needed): `ACTIONS.md`, `AISTEERINGRULES.md`,
 > `CONTEXT_ROUTING.md`, `MEMORYSYSTEM.md`, `MINIMAL_BOOTSTRAP.md`, `PAISYSTEMARCHITECTURE.md`,
 > `PRDFORMAT.md`, `SKILL.md`, `SKILLSYSTEM.md`, `THEDELEGATIONSYSTEM.md`, `THEHOOKSYSTEM.md`, `TOOLS.md`
-
+>
 > [!NOTE]
 > Already present in `.opencode/skills/PAI/SYSTEM/` (docs exist, also belong in PAI/ per v4.0.3 arch):
```

Alternatively: use a blank line without `>`, or merge the notes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/epic/TODO-v3.0.md` around lines 170 - 179, static analysis reports MD028 because of a blank line inside a blockquote — between the two `> [!NOTE]` blocks there is an empty line that also starts with `>`; remove the empty `>` line or turn it into a normal blank line (without `>`), or merge the two `> [!NOTE]` blocks so that no empty `>` line remains between the blockquote lines (`> [!NOTE]`).

.opencode/skills/Utilities/AudioEditor/Tools/Transcribe.ts-54-61 (1)
54-61: ⚠️ Potential issue | 🟡 Minor

Logic error in the fallback check. The fallback condition on line 61 checks `!hasFastWhisper || !existsSync(outFile)`. If `insanely-fast-whisper` fails with `exitCode !== 0` (lines 54-58), only a warning is printed, but `hasFastWhisper` remains `true`. The code then falls back only when the output file does not exist.

The problem: between lines 58 and 61 there is no explicit check that the error case actually triggers the fallback.

🐛 Suggested correction

```diff
+let fastWhisperSucceeded = false;
 if (hasFastWhisper) {
   console.log("Using insanely-fast-whisper (MPS accelerated)...");
   const result = await $`insanely-fast-whisper \
     --file-name ${inputFile} \
     --transcript-path ${outFile} \
     --device-id mps \
     --timestamp word \
     --model-name openai/whisper-large-v3 \
     --batch-size 4 2>&1`.quiet().nothrow();
   if (result.exitCode !== 0) {
     console.error("insanely-fast-whisper failed, trying standard whisper...");
   } else {
+    fastWhisperSucceeded = true;
     console.log("Transcription complete.");
   }
 }
-if (!hasFastWhisper || !existsSync(outFile)) {
+if (!fastWhisperSucceeded || !existsSync(outFile)) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/skills/Utilities/AudioEditor/Tools/Transcribe.ts around lines 54 - 61, the fallback check uses hasFastWhisper, but it stays true when insanely-fast-whisper fails with result.exitCode !== 0; change the error handling in the code that processes result (the if-else on result.exitCode) so that in the error case either hasFastWhisper is set to false or a dedicated flag (e.g. useFallback) is set, and the later condition if (!hasFastWhisper || !existsSync(outFile)) then triggers reliably; reference the variables result, hasFastWhisper, and outFile and the Transcribe function/block so the fallback always runs when insanely-fast-whisper reported an error.

.opencode/PAI/ACTIONS/pai.ts-215-216 (1)
215-216: ⚠️ Potential issue | 🟡 Minor

Avoid accessing internal Zod properties. The code accesses `inputSchema._def` and `outputSchema._def`, which are internal Zod properties and not part of the public API. They could change in future Zod versions and cause errors.

Consider serializing the schemas with documented Zod methods, or storing the schema definitions in separate fields.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/PAI/ACTIONS/pai.ts around lines 215 - 216, avoid accessing the internal Zod fields inputSchema._def and outputSchema._def; instead, serialize the schemas with a documented method (e.g. call zodToJsonSchema(action.inputSchema) / zodToJsonSchema(action.outputSchema) if you use that library) or extend the action object with explicit public fields such as action.inputSchemaDef and action.outputSchemaDef that contain the serializable representation, and update the place that currently reads inputSchema._def / outputSchema._def so it uses the new public source (references: action, inputSchema, outputSchema, inputSchemaDef, outputSchemaDef, zodToJsonSchema).
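The "separate public fields" option above can be sketched as follows. This is a minimal, hypothetical sketch: the field names `inputSchemaDef`/`outputSchemaDef` follow the review's suggestion, but the `ActionMeta` shape is assumed, not taken from the PR.

```typescript
// Hypothetical shape: keep a plain, JSON-serializable description of each
// schema next to the zod schema, so serialization never touches zod's
// private `_def` internals.
interface ActionMeta {
  name: string;
  inputSchemaDef: Record<string, string>;   // public, serializable
  outputSchemaDef: Record<string, string>;  // public, serializable
}

const summarizeMeta: ActionMeta = {
  name: "A_EXAMPLE_SUMMARIZE",
  inputSchemaDef: { text: "string" },
  outputSchemaDef: { summary: "string", word_count: "number" },
};

// Serialization reads only the public fields:
const json = JSON.stringify(summarizeMeta.inputSchemaDef);
```

An alternative with the same property is `zodToJsonSchema(...)` from the `zod-to-json-schema` package, which produces a documented JSON representation without touching `_def`.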
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.opencode/PAI/ACTIONS/lib/pipeline-runner.ts:
- Around line 53-57: The loop currently does a dynamic import of "./runner.v2"
on every iteration; move the import and destructuring (const { runAction } =
await import("./runner.v2")) to before the for (const actionName of
pipeline.actions) loop so runAction is resolved once and reused; then inside the
loop just call await runAction(actionName, data). Ensure any await/import error
handling remains appropriate around the single import.
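The hoisting described above can be sketched like this. To keep the sketch self-contained, the loader is passed in as a parameter (in the real `pipeline-runner.ts` it would simply be `() => import("./runner.v2")`); the pipeline shape is assumed.

```typescript
type RunAction = (name: string, input: unknown) => Promise<unknown>;

// Resolve the dynamic import once, before the loop, and reuse runAction
// for every action in the pipeline.
async function runPipeline(
  actions: string[],
  input: unknown,
  loadRunner: () => Promise<{ runAction: RunAction }>,
): Promise<unknown> {
  const { runAction } = await loadRunner(); // once, not per iteration
  let data = input;
  for (const actionName of actions) {
    data = await runAction(actionName, data);
  }
  return data;
}
```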
In @.opencode/PAI/ACTIONS/lib/runner.ts:
- Around line 129-136: Der fetch-Aufruf (const response = await fetch(workerUrl,
...)) hat kein hartes Timeout und kann den Runner blockieren; fix: create an
AbortController, pass controller.signal into fetch options, start a setTimeout
that calls controller.abort() after the configured timeout (use the
deployment.timeout or ctx.deployment.timeout value), and clear the timeout once
fetch settles; ensure you handle the abort case (catch AbortError) and
propagate/convert it to a meaningful timeout error so caller code can react.
- Around line 240-243: The current check `if (!input) { ... }` discards valid falsy JSON values (0, false, "", null); change the check so that only truly missing input is detected: replace the falsy check with an identity check against undefined (e.g. `input === undefined`) and keep the existing error message and `process.exit(1)` in the block; look for the variable `input` in runner.ts around the existing console.error/process.exit calls to make the change.
- Around line 200-203: In the argument-parsing loop (for ... over args), the argument after "--mode" is assigned to the variable mode without validation, so typos like "--mode clodu" silently fall back to "local"; change the logic in the loop so that after "--mode" is detected, the next value is explicitly validated against the allowed strings "local" and "cloud" (e.g. in the same loop/block that sets mode), and if the value is invalid, print a meaningful error message (e.g. via console.error or processLogger) and exit the process with an error code or throw an Error instead of silently falling back to the default; reference the variable mode and the flag detection ("--mode") in runner.ts.
- Around line 30-35: The loadAction function constructs actionPath from
untrusted name and directly imports it, enabling path-traversal; to fix,
validate and canonicalize name before building actionPath: reject names
containing path separators that escape the actions tree (e.g., '..' segments or
absolute paths), allow only a safe whitelist pattern (e.g., alphanumeric, dash,
underscore and single '/' for category), then compute the resolved path (use
path.join/path.resolve) and assert the resolved path startsWith the ACTIONS_DIR;
if the check fails, throw an error and do not import. Update references:
loadAction, ACTIONS_DIR, actionPath and the dynamic import call to use the
validated/resolved path.
- Around line 256-257: The current bootstrap (import.meta.main) calls
main().catch(console.error) which only logs errors and leaves the process exit
code as success; change the catch to log the error and then terminate with a
non-zero exit code so CI/shell detect failures. Concretely, replace the single
.catch(console.error) on the main() invocation with a handler that does
console.error(err) and then exits with a non-zero status (use Deno.exit(1) when
Deno is available or process.exit(1) otherwise) while referencing the main()
invocation and the import.meta.main conditional so you update the correct call
site.
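One way to structure this is a small wrapper that converts a rejected `main()` into an exit code; `runWithExitCode` is an illustrative helper, not an existing symbol:

```typescript
// Illustrative: map a rejected main() to a non-zero exit code for CI/shell.
async function runWithExitCode(main: () => Promise<void>): Promise<number> {
  try {
    await main();
    return 0;
  } catch (err) {
    console.error(err);
    return 1; // pass to process.exit(1) or Deno.exit(1) at the call site
  }
}
```

Under `import.meta.main`, the bootstrap would become `runWithExitCode(main).then((code) => process.exit(code))` (or the Deno equivalent).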
- Around line 82-90: The cloud branch currently returns the raw result from
dispatchToCloud(name, validatedInput, ctx) without validating it; update the
cloud path in runner.ts so that after awaiting dispatchToCloud(...) you run
action.outputSchema.parse(...) (same as the local path) and return the validated
output, ensuring any parse errors propagate; reference dispatchToCloud,
action.outputSchema.parse, and action.execute to align validation behavior
between cloud and local execution.
- Around line 124-126: The worker URL is being constructed incorrectly: update
the workerUrl construction so the Cloudflare workers subdomain always ends with
".workers.dev" (e.g. change the template using process.env.CF_ACCOUNT_SUBDOMAIN
so it becomes "...${process.env.CF_ACCOUNT_SUBDOMAIN || 'workers'}.workers.dev")
or, alternatively, update the comment to explicitly state that custom domains
(not .workers.dev) may be used; modify the code around the workerName and
workerUrl variables (workerName, workerUrl, CF_ACCOUNT_SUBDOMAIN) in runner.ts
to implement one of these fixes.
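A sketch of the first option, with the subdomain passed explicitly so the `CF_ACCOUNT_SUBDOMAIN` default stays visible; the function name is illustrative:

```typescript
// Illustrative: the host must always end with ".workers.dev".
function workerUrl(
  workerName: string,
  subdomain: string | undefined = process.env.CF_ACCOUNT_SUBDOMAIN
): string {
  return `https://${workerName}.${subdomain || "workers"}.workers.dev`;
}
```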
In @.opencode/PAI/ACTIONS/lib/types.ts:
- Line 19: The project imports zod at runtime in
.opencode/PAI/ACTIONS/lib/types.ts (import { z, type ZodType } from "zod"), but
zod is missing from .opencode/package.json dependencies; add "zod" to the
dependencies in that package.json (pin an appropriate version or use the repo's
shared versioning strategy) and run npm/yarn install to ensure the runtime can
resolve the Zod import used by types.ts.
In @.opencode/PAI/Tools/BuildOpenCode.ts:
- Around line 40-42: The current assignment to the settings variable uses
JSON.parse without error handling (on SETTINGS_PATH), so invalid JSON throws
an exception; change the code so the read and parse (readFileSync +
JSON.parse) are wrapped in a try-catch, fall back to an empty object {} on
failure, and briefly log the error (e.g., console.warn or an existing logger)
before continuing; reference the existing symbols SETTINGS_PATH, readFileSync,
JSON.parse, and the settings variable when adding the try-catch guard.
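A minimal sketch of that guard; `loadSettings` is a stand-in name and the warning text is an assumption:

```typescript
import { readFileSync } from "node:fs";

// Illustrative: fall back to {} when the settings file is missing or invalid.
function loadSettings(settingsPath: string): Record<string, unknown> {
  try {
    return JSON.parse(readFileSync(settingsPath, "utf-8"));
  } catch (err) {
    console.warn(`[BuildOpenCode] using empty settings: ${(err as Error).message}`);
    return {};
  }
}
```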
In @.opencode/skills/Research/Templates/ThreatLandscape.md:
- Around line 1-6: This markdown template lacks the required PAI skill metadata
and a "USE WHEN" trigger so it's not discoverable as a skill; add the standard
PAI skill frontmatter (e.g., PAI-SKILL name/version/description/inputs/outputs
or the project’s canonical metadata keys) at the top of ThreatLandscape.md and
include a clear "USE WHEN" section describing the conditions/triggers that
activate this skill; update or wrap the existing free text under the template
sections (e.g., "Threat Landscape Domain Template") to conform to the
repository's skill format so tooling can detect and apply the skill.
---
Minor comments:
In @.opencode/PAI/ACTIONS/lib/pipeline-runner.ts:
- Around line 110-114: The loop that processes CLI arguments in
pipeline-runner.ts (starting at for (let i = 2; i < args.length; i += 2))
assumes a partner value exists for every key; before accessing args[i + 1],
check that it is defined (or that args.length - i >= 2) and handle missing
values explicitly (e.g., throw an error or skip the key), attempting
JSON.parse only when value !== undefined; reference the args and input
variables and the for loop when adding the validation.
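The guarded loop could look like this sketch; `parseKeyValueArgs` is a stand-in name, not a pipeline-runner.ts export:

```typescript
// Illustrative key/value loop with a missing-value guard.
function parseKeyValueArgs(args: string[], start = 0): Record<string, unknown> {
  const input: Record<string, unknown> = {};
  for (let i = start; i < args.length; i += 2) {
    const key = args[i].replace(/^--/, "");
    const value = args[i + 1];
    if (value === undefined) {
      throw new Error(`Missing value for argument: ${args[i]}`);
    }
    try {
      input[key] = JSON.parse(value); // numbers, booleans, JSON objects
    } catch {
      input[key] = value; // fall back to the raw string
    }
  }
  return input;
}
```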
In @.opencode/PAI/ACTIONS/lib/runner.v2.ts:
- Around line 31-35: The createLocalLLM function currently imports Inference.ts
via a hardcoded path using process.env.HOME! which can throw if HOME is missing
or the file is absent; change it to resolve the module more robustly (e.g. read
a configurable env var like PAI_TOOLS_DIR with a sensible default, or use
require.resolve/attempt multiple candidate locations) and remove the non-null
assertion; wrap the dynamic import in a try/catch that validates the resolved
path exists and throws a clear, recoverable error if not found, and then extract
the inference export as before (referencing createLocalLLM and the inference
symbol) so callers get deterministic error messages instead of uncontrolled
exceptions.
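A sketch of the candidate-chain resolution; `PAI_TOOLS_DIR` handling matches the suggestion, but the default location under `~/.opencode` is an assumption about the deployment layout:

```typescript
import { existsSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";

// Illustrative: try the configured dir first, then an assumed default,
// and throw a clear error instead of an uncontrolled exception.
function resolveInferencePath(): string {
  const candidates = [
    process.env.PAI_TOOLS_DIR && join(process.env.PAI_TOOLS_DIR, "Inference.ts"),
    join(homedir(), ".opencode", "PAI", "Tools", "Inference.ts"), // assumed default
  ].filter((p): p is string => Boolean(p));
  const found = candidates.find((p) => existsSync(p));
  if (!found) {
    throw new Error(`Inference.ts not found; tried: ${candidates.join(", ")}`);
  }
  return found;
}
```

`createLocalLLM` would then dynamically import the returned path inside its own try/catch.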
In @.opencode/PAI/ACTIONS/pai.ts:
- Around line 144-148: The JSON.parse calls for stdinContent and options.input
(in the block that checks stdinContent, options.input, and extra) throw
uncontrolled exceptions on invalid JSON; to fix this, wrap the parsers in
try/catch blocks (separately for stdinContent and options.input), catch
SyntaxError, use processLogger.error or a similar logging function, emit a
clear, user-friendly error message including brief details of the parse
failure, and then abort cleanly or set input to a defined fallback; reference
the variables stdinContent, options.input, and input as well as the
surrounding decision block.
- Around line 215-216: Avoid accessing the internal Zod fields
inputSchema._def and outputSchema._def; instead, serialize the schemas with a
documented method (e.g., calling zodToJsonSchema(action.inputSchema) /
zodToJsonSchema(action.outputSchema) if you use that library), or extend the
Action object with explicit public fields such as action.inputSchemaDef and
action.outputSchemaDef that hold the serializable representation, and update
the code that currently reads inputSchema._def / outputSchema._def to use the
new public source (references: action, inputSchema, outputSchema,
inputSchemaDef, outputSchemaDef, zodToJsonSchema).
In @.opencode/PAI/FLOWS.md:
- Around line 342-344: The instruction in the "High costs" section is wrong;
instead of "Reduce `intervalMinutes` in `flow-index.json`", it should say that
`intervalMinutes` must be increased to lower costs; update the text under the
"High costs" header in .opencode/PAI/FLOWS.md so it reads "Increase
`intervalMinutes` in `flow-index.json` and redeploy the Worker with updated
cron" (or an equivalent wording), keeping the references to `flow-index.json`
and `intervalMinutes`.
In @.opencode/PAI/Tools/BuildOpenCode.ts:
- Line 18: The assignment of PAI_DIR uses process.env.HOME!, which can be
undefined on some systems; replace it with a robust home-directory lookup
(e.g., use Node's os.homedir() or a fallback chain like process.env.HOME ||
process.env.USERPROFILE || os.homedir()) and remove the non-null assertion;
update the declaration of PAI_DIR (and any import of join) so it uses the
resolved home-directory value instead of process.env.HOME!, referencing the
symbol PAI_DIR and the use of join/process.env.HOME in BuildOpenCode.ts.
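The fallback chain could be sketched as follows; `resolveHome` is an illustrative helper, and the `PAI_DIR` layout mirrors the suggestion rather than the exact BuildOpenCode.ts declaration:

```typescript
import { homedir } from "node:os";
import { join } from "node:path";

// Illustrative: no non-null assertion, works even when HOME is unset.
function resolveHome(): string {
  return process.env.HOME || process.env.USERPROFILE || homedir();
}

const PAI_DIR = join(resolveHome(), ".opencode", "PAI"); // layout assumed
```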
In @.opencode/skills/Agents/ClaudeResearcherContext.md:
- Line 95: The fenced code block opening currently uses plain backticks (```)
with no language identifier; update the opening fence to include a language
(e.g., change the opening ``` to ```markdown) so the block that contains the "##
Research Report" heading is properly marked as Markdown and conforms to Markdown
standards.
In @.opencode/skills/Research/MigrationNotes.md:
- Around line 92-97: The "Success Criteria Met" section is inconsistent about
Conduct.md: locate the "Success Criteria Met" block (the heading and the bullet
"(5 total with conduct.md)") and reconcile it with the earlier statement that
"Conduct.md and PerplexityResearch.md were later removed"; either remove the
parenthetical "(5 total with conduct.md)" or update it to reflect the current
count and removal, and ensure mentions of "Conduct.md" and
"PerplexityResearch.md" in the document are consistent with the removal
statement.
In @.opencode/skills/Research/Templates/MarketResearch.md:
- Around line 1-3: Add a concise "USE WHEN" trigger section near the top of
MarketResearch.md that follows the PAI Skills format used in
.opencode/skills/**: a short heading "USE WHEN" plus 2–4 bullet criteria that
clearly signal when to pick this template (e.g., prompts about market sizing,
competitor analysis, go-to-market strategy, TAM/SAM/SOM, customer segments), and
include any required metadata tag or keyword list used by the skill selector so
the Skill-Flow can unambiguously choose MarketResearch.md over other Research
templates. Ensure the section is brief, uses the same casing/format as other
.opencode/skills/** templates, and references "MarketResearch.md" as the
document being updated.
- Around line 20-44: The template is missing CRITICAL/HIGH/MEDIUM/LOW evaluation
bullets for the Trends and Investors entity types; add two new sections titled
"Trends:" and "Investors:" in the same format as "Companies:", "Products:",
"People:", and "Technologies:" and provide four bullets each (CRITICAL, HIGH,
MEDIUM, LOW) that mirror the level-of-impact language used elsewhere (e.g., for
Trends: CRITICAL = market-defining macro trends, HIGH = rapidly accelerating
trends with broad adoption, MEDIUM = emerging trends with niche momentum, LOW =
speculative/short-lived buzz; for Investors: CRITICAL = top-tier strategic
funds/influential angels, HIGH = well-networked VCs with sector focus, MEDIUM =
active but lower-profile investors, LOW = sporadic/new/inactive investors).
In @.opencode/skills/Utilities/AudioEditor/Tools/Analyze.ts:
- Around line 265-273: The current JSON extraction in Analyze.ts uses a fragile
regex text.match(/\[[\s\S]*\]/) to populate edits (EditDecision[]) which breaks
for nested arrays or brackets in strings; update the parsing in the try block
used to set edits to first attempt extracting a JSON code block (match
triple-backtick blocks with optional language and capture the inner text), parse
that if present (trim before parsing), otherwise fall back to a safer extraction
(e.g., the existing array-match) and ensure JSON.parse errors are caught and
logged with the actual error; adjust the logic around the edits variable
assignment and the surrounding try/catch so parse failures log error details
instead of the bare " parse error" message.
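A sketch of the fence-first extraction with the array-match fallback; `extractEdits` is a stand-in name and the error text is an assumption:

```typescript
// Illustrative: prefer a fenced ```json block, fall back to a bracket match,
// and log real parse errors instead of a bare " parse error" message.
function extractEdits(text: string): unknown[] {
  const fenced = text.match(/```(?:json)?\s*([\s\S]*?)```/);
  const candidate = fenced ? fenced[1].trim() : text.match(/\[[\s\S]*\]/)?.[0];
  if (!candidate) return [];
  try {
    const parsed = JSON.parse(candidate);
    return Array.isArray(parsed) ? parsed : [];
  } catch (err) {
    console.error(`edit-list parse error: ${(err as Error).message}`);
    return [];
  }
}
```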
In @.opencode/skills/Utilities/AudioEditor/Tools/Edit.ts:
- Around line 171-175: The ffprobe verification code (outProbe, outData,
outDuration, outSize) lacks error handling and will crash if ffprobe fails or
returns invalid JSON; wrap the ffprobe invocation and JSON.parse in a try/catch,
check the child's execution result (e.g., outProbe.exitCode or
outProbe.stderr/text) before parsing, validate that outData.format with numeric
duration and size fields exist (use Number.isFinite on parsed values) and throw
or return a clear, descriptive error if validation fails so subsequent logic
doesn't operate on undefined values.
- Line 64: The comment on sampleRate is misleading: either update the comment
so it does not claim the value is read from the stream, or actually extract
the sample rate from the ffprobe/probeData output; concretely, replace the
hardcoded sampleRate = 48000 with logic that checks
probeData.streams?.[0]?.sample_rate and parses it to an integer (fallback to
48000), or if you keep the hardcode, simply change the comment to state it is
a fixed default and not read from the stream; refer to the sampleRate variable
and probeData.streams[0].sample_rate when making the change.
In @.opencode/skills/Utilities/AudioEditor/Tools/Transcribe.ts:
- Around line 105-110: The code reads and parses outFile without verifying it
exists; wrap the read/parse in a try/catch and validate the result before using
it: attempt to read JSON from outFile (JSON.parse(await
Bun.file(outFile).text())) inside a try block, on error or if transcript is
falsy/has no chunks/text log a clear error including outFile and either throw or
return early, and compute chunkCount/textLen only after confirming transcript
and transcript.chunks/transcript.text are present; reference the outFile
variable and the transcript/chunkCount/textLen usage in Transcribe.ts when
adding this guard.
- Around line 54-61: The fallback check uses hasFastWhisper, but it stays true
when insanely-fast-whisper fails with result.exitCode !== 0; change the error
handling where result is processed (the if-else on result.exitCode) so that on
failure either hasFastWhisper is set to false or a dedicated flag (e.g.,
useFallback) is set, so that the later condition if (!hasFastWhisper ||
!existsSync(outFile)) triggers reliably; reference the variables result,
hasFastWhisper, and outFile and the Transcribe function/block so the fallback
always runs when insanely-fast-whisper reported an error.
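The intended condition can be isolated in a small predicate; `shouldFallback` is an illustrative stand-in for the flag logic around `result.exitCode` and `hasFastWhisper`:

```typescript
// Illustrative: fall back whenever fast-whisper failed or produced no output.
function shouldFallback(
  hasFastWhisper: boolean,
  exitCode: number,
  outFileExists: boolean
): boolean {
  const fastWhisperSucceeded = hasFastWhisper && exitCode === 0;
  return !fastWhisperSucceeded || !outFileExists;
}
```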
In @.opencode/skills/Utilities/Delegation/SKILL.md:
- Around line 60-78: Duplicate section numbering: both "Background Agents" and
"Foreground Agents" are labeled "3"; change "Foreground Agents" to "4" and then
increment subsequent section numbers accordingly ("Custom Agents" → 5, "Agent
Teams" → 6, "Parallel Task Dispatch" → 7). Update the Markdown headings for
"Foreground Agents", "Custom Agents", "Agent Teams", and "Parallel Task
Dispatch" so their numeric prefixes match the new sequence, ensuring consistency
for any references to those headings.
In `@docs/epic/TODO-v3.0.md`:
- Around line 170-179: Static analysis reports MD028 because of a blank line
inside a blockquote — between the two `> [!NOTE]` blocks there is an empty
line that also starts with `>`; remove the empty `>` line, convert it to a
normal blank line (without `>`), or merge the two `> [!NOTE]` blocks so that
no empty `>` line remains between the blockquote lines (`> [!NOTE]`).
---
Nitpick comments:
In @.opencode/PAI/ACTIONS/lib/pipeline-runner.ts:
- Around line 31-34: The empty catch in the function that calls readFile and
parseYaml swallows failure states; change "catch {}" to "catch (err)" and at
least emit the error in debug/verbose mode (e.g., console.error/console.debug
or an existing logger), i.e., in the function that calls readFile(userPath,
"utf-8") and parseYaml(content) (return type Pipeline), so permission errors
or YAML parsing errors get logged; optionally rethrow, or return a clear
error/undefined instead of silently ignoring the failure.
In @.opencode/PAI/ACTIONS/lib/runner.v2.ts:
- Around line 77-90: The shell capability (capabilities.shell) currently
executes untrusted strings via $`sh -c ${cmd}` and discards stderr on success;
change it to avoid shell interpolation for untrusted input and always capture
stderr: replace the $`sh -c ${cmd}` invocation with a safer execution strategy
(e.g., invoke Bun's process API or $ with argument-array style instead of
embedding ${cmd}) when running in cloud-mode or for user-provided input, and
update the success path to include any captured stderr and the real exit code
(use the result's stderr/exitCode accessors instead of always returning "" and
0). Ensure the guard uses the same identifier capabilities.shell and
remove/replace the $`sh -c ${cmd}` pattern to prevent shell injection while
preserving both stdout and stderr in the returned object.
- Around line 200-214: The simplified input validation branch currently only
checks for null/undefined and doesn't enforce declared types, and format
detection using !manifest.input.type is ambiguous; update the branch that
handles the simplified format (the loop over Object.entries(manifest.input) /
inputObj) to first detect simplified format by checking that manifest.input is
an object whose values are spec objects (e.g., typeof spec === "object" and
("type" in spec || "required" in spec)), then validate each field's type as well
as presence: for each [field, spec] validate required as before and validate
types using safe checks (typeof for "string"/"number"/"boolean", Array.isArray
for "array", and for "object" ensure non-null typeof === "object" and not
Array.isArray), returning the same { success: false, error: ... } message on
mismatch; keep the legacy branch (manifest.input?.type === "object") using
validateSchema(input, manifest.input) and inputValidation as-is so format
detection is explicit and unambiguous (refer to manifest.input, inputObj,
validateSchema, and inputValidation).
In @.opencode/PAI/ACTIONS/lib/types.v2.ts:
- Line 1: The file types.v2.ts contains an unnecessary shebang line
"#!/usr/bin/env bun"; remove this line so only the type definitions/exports
remain (make no changes to exported types or functions), then verify that no
import/build scripts expect types.v2.ts to be executable, and commit the
cleaned-up file without the shebang.
- Around line 159-177: The validateSchema function currently creates a new Ajv
instance and compiles the schema on every call (Ajv, ajv, validate), which is
inefficient; refactor to create a single module-level Ajv instance and cache
compiled validators (e.g., a Map keyed by schema JSON or a WeakMap keyed by the
schema object) so validateSchema reuses the cached validator for a given schema
instead of recompiling each time, and keep the same return shape ({ valid,
errors }).
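The caching pattern can be sketched generically; here a trivial type check stands in for `ajv.compile(schema)` so the example stays self-contained:

```typescript
// Illustrative compile cache: one validator per distinct schema.
type Validator = (data: unknown) => boolean;

const validatorCache = new Map<string, Validator>();

function compile(schema: { type: string }): Validator {
  // stand-in for ajv.compile(schema)
  return (data) => typeof data === schema.type;
}

function getValidator(schema: { type: string }): Validator {
  const key = JSON.stringify(schema); // a WeakMap keyed on the object also works
  let validate = validatorCache.get(key);
  if (!validate) {
    validate = compile(schema);
    validatorCache.set(key, validate);
  }
  return validate;
}
```

`validateSchema` would call `getValidator` and keep its `{ valid, errors }` return shape.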
- Around line 68-90: The ActionContext interface in types.v2.ts conflicts with
the legacy ActionContext in types.ts; rename the exported interface in this file
(e.g., ActionContextV2 or ActionContextV2Static) and update any local
exports/imports to use the new name, or alternatively export it under a distinct
named export (export { ActionContext as ActionContextV2 }) and add a JSDoc note
explaining it’s the v2 shape; update all references that import the v2 shape to
the new identifier to avoid ambiguous/duplicate ActionContext symbols.
In @.opencode/PAI/ACTIONS/README.md:
- Around line 92-106: The ASCII pipe-model diagram block containing
A_FIRST_ACTION and A_SECOND_ACTION should include a language identifier on its
fenced code block; update the opening fence for that diagram from "```" to
"```text" so the diagram is explicitly marked as plain text (look for the block
that starts with the A_FIRST_ACTION / A_SECOND_ACTION diagram and change its
opening fence accordingly).
- Around line 13-28: The fenced ASCII-art code block containing the "PAI
ACTIONS" diagram lacks a language identifier; update the opening fence from ```
to ```text so the Markdown linter recognizes it as plain text—locate the block
starting with the line containing
"┌─────────────────────────────────────────────────────────┐" or the "PAI
ACTIONS" header and add the `text` language tag to the opening backticks to
resolve the warning.
- Around line 48-52: The fenced code block showing the directory structure for
"A_YOUR_ACTION/" should include a language identifier to improve rendering;
update the opening fence from ``` to ```text so the block starts with ```text
and retains the same contents (the lines with "A_YOUR_ACTION/", "├── action.json
# Manifest: ..." and "└── action.ts # Implementation: ...") in the
README.md code snippet.
In @.opencode/PAI/CLI.md:
- Around line 216-222: The second usage-syntax block (the block with the lines
starting with "pai action <name> ..." followed by "pai pipeline...", "pai
actions", "pai pipelines", "pai info <name>") is missing a language
identifier; fix: prepend the opening fence with ```text (so the block starts
with ```text) and keep the closing ``` fence unchanged, ensuring the usage
example is marked as plain text.
- Around line 35-41: The usage code block containing the algorithm CLI examples
(the triple-backtick block that lists lines like "algorithm -m <mode> -p <PRD>
[-n N] [-a N] Run the Algorithm against a PRD" and the other command lines)
should include a language identifier (e.g., text or bash) after the opening ```
to enable proper syntax highlighting; update the opening fence from ``` to
```text (or ```bash) for that block so the Usage section renders with the
intended language hint.
- Around line 146-183: The output example block (the ASCII box starting with
"╔══════════════════════════════════════════════════════════════════════╗"
and the title "THE ALGORITHM — Loop Mode") needs a language identifier; change
the code-block markup from ``` to ```text (still closing with ```) so the
example is explicitly marked as text.
In @.opencode/PAI/CLIFIRSTARCHITECTURE.md:
- Around line 420-423: Replace the ungrammatical phrase in the "Assess Current
State" checklist—specifically change "if CLI-First would improve them" to a
correct construction such as "whether CLI-First would improve them" or "if
CLI-First could improve them" so the bullet reads e.g. "Evaluate whether
CLI-First would improve them" (locate the "Assess Current State" section and the
bullet starting with "Evaluate if CLI-First would improve them" to apply the
change).
In @.opencode/PAI/DOCUMENTATIONINDEX.md:
- Around line 39-46: The Markdown code blocks for the "description" and the
"Example" USE WHEN format are missing language identifiers; use ```yaml
instead of ``` at the start of each block (the affected blocks contain the
text "description: [What it does]. USE WHEN [intent triggers using OR].
[Capabilities]." and the example "description: Complete blog workflow...") so
Markdown linters stop warning and YAML syntax highlighting is enabled.
In @.opencode/PAI/FLOWS.md:
- Around line 19-21: Mark the ASCII diagram block around the line with "Source
──(schedule)──> Pipeline ──> Destination" as plaintext, e.g., by adding a
language specifier after the ``` fence (e.g., ```text), so linters and editors
treat the diagram consistently; update the code fence around the existing
block in FLOWS.md (the line with the diagram) and commit the change.
In @.opencode/PAI/FLOWS/README.md:
- Around line 176-181: The README code block showing the worker directory tree
lacks a language identifier; update the fenced block that contains the
workers/f-your-flow/ snippet so the opening fence uses a language tag (e.g.,
change ``` to ```text) for the block that includes wrangler.jsonc and
src/index.ts to ensure proper rendering and syntax highlighting.
- Around line 260-283: The README's ASCII system-architecture diagram block
lacks a language identifier; update the triple-backtick fence wrapping the
diagram in the FLOWS README (the ASCII diagram starting with the box titled
"ARBOL") to include a language tag such as text (change ``` to ```text) and
ensure the closing fence remains ``` so the block is explicitly marked as plain
text for syntax-aware renderers/editors.
- Around line 21-40: The README's ASCII architecture diagram code fence is
missing a language identifier; update the fenced block that contains the ARBOL
diagram (the triple-backtick block showing "ARBOL (Cloudflare)" and the
FLOWS/PIPELINES/ACTIONS diagram) to include a language tag like ```text at the
opening fence (and keep the closing ```), so the diagram is explicitly marked as
plain text for rendering and syntax highlighting.
- Around line 9-15: Add a language identifier to the architecture diagram in
the README: change the block containing the line "Source ──(schedule)──>
Pipeline ──> Destination" so the opening code fence gets an identifier such as
```text, marking the diagram as plain text; leave the diagram content
unchanged and adjust only the start of the code block.
In @.opencode/PAI/PIPELINES/README.md:
- Around line 37-42: The diagram code-block fence should get a language
identifier; change the existing triple-backtick block (which contains the
pipe-model diagram) from ``` to ```text, so the block header reads ```text and
the diagram is marked as text.
- Around line 61-78: Add a language identifier to the architecture diagram code
block: locate the triple-backtick fence that begins the diagram (the block
showing "Client Pipeline Worker Action Workers")
and change the opening fence from ``` to ```text so the diagram is explicitly
labeled as plain text; ensure only the opening fence is updated and keep the
diagram content unchanged.
In @.opencode/PAI/README.md:
- Around line 13-23: Update the fenced directory-tree block so its opening fence
includes the language identifier `text` (i.e., change the opening "```" to
"```text") to ensure the directory structure is treated as plain text; keep the
existing closing fence and content unchanged—look for the triple-backtick fenced
block that contains the ~/.opencode/ tree listing to apply this change.
In @.opencode/PAI/SYSTEM_USER_EXTENDABILITY.md:
- Around line 11-14: Several fenced code blocks (e.g., the block with the
content "SYSTEM tier → Base functionality, defaults, PAI updates" and similar
directory/structure blocks) are missing a language identifier; add the `text`
(or `plaintext`) identifier after the triple backticks for all of these blocks
to remove markdownlint warnings and stabilize rendering; search for all ```
blocks in the document (especially the blocks with short text/directory
contents) and update them to ```text ... ```; make sure only
directory/plaintext blocks get the `text` identifier and code-specific blocks
stay unchanged.
In @.opencode/PAI/THENOTIFICATIONSYSTEM.md:
- Around line 24-26: Update the three fenced code blocks that currently use
triple backticks without a language (the block containing "[Doing what
{PRINCIPAL.NAME} asked]..." and the similar blocks in the later template area)
to include a language specifier such as "text" (e.g., change ``` to ```text) so
the blocks render with plain-text syntax highlighting; locate the literal code
block strings "[Doing what {PRINCIPAL.NAME} asked]..." and the template section
and add the language token to each opening fence.
In @.opencode/PAI/Tools/BuildOpenCode.ts:
- Around line 79-82: The code is re-parsing settings.json redundantly and
without error handling; instead of calling existsSync/readFileSync/JSON.parse
again, reuse the settings object returned by loadVariables() (or the
already-loaded variables map) and derive daName from that (e.g., use the
existing settings or variables lookup), remove the duplicate read/parse and add
a safe fallback for daidentity.name (keep default "Assistant") and ensure any
access to settings.daidentity is null-checked to avoid runtime errors.
In @.opencode/skills/Research/Templates/MarketResearch.md:
- Around line 50-55: The search patterns under "**For landscape (Step 1):**"
include hard-coded years (e.g. the string "[market] market size 2025 2026")
which will become stale; update these patterns to use placeholders like
{current_year} and {last_year} (or a single {year_range}) instead of literal
years and apply the same change to related patterns ("[market] competitive
landscape analysis", "Gartner|Forrester|IDC [market] analysis", etc.) so the
template dynamically substitutes the correct years at runtime.
In @.opencode/skills/Research/Templates/ThreatLandscape.md:
- Around line 50-55: The search patterns in ThreatLandscape.md contain
hard-coded years (e.g., the pattern "[sector] threat landscape 2025 2026") —
replace such concrete years with dynamic placeholders or generic wording
(e.g., "[current year]", "[previous year]", or "recent"/"[year]") in all
affected list entries such as "[sector] threat landscape 2025 2026" and
"ransomware trends [year]", and adjust "CISA advisories [sector] recent"
accordingly, so the template does not go stale over time and automatically
reflects current years/periods when reused.
In @.opencode/skills/Utilities/AudioEditor/Tools/Analyze.ts:
- Around line 298-307: The current merge logic (in the block with merged,
prev, and edit) repeatedly concatenates prev.type with '+' and produces very
long, duplicated type strings; instead of direct string concatenation, split
prev.type on '+' into a Set, add edit.type, combine both into an ordered list
without duplicates, and set prev.type = Array.from(set).join('+'); optionally
cap the list at a reasonable length (e.g., 10) and abbreviate with "..." to
prevent extreme cases.
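The Set-based merge is small enough to sketch directly; `mergeTypes` is an illustrative helper name:

```typescript
// Illustrative: merge '+'-joined edit types without duplicates.
function mergeTypes(prevType: string, nextType: string): string {
  const parts = new Set(prevType.split("+"));
  parts.add(nextType);
  return Array.from(parts).join("+");
}
```

In the merge loop this would replace the direct `prev.type += "+" + edit.type` concatenation.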
In @.opencode/skills/Utilities/AudioEditor/Tools/Pipeline.ts:
- Line 108: The line that computes totalCut uses an untyped "any" for the
edits variable; replace it with a properly typed Edit interface (e.g.,
EditDecision) and adjust the reduce type: import the interface from Edit.ts or
define a local interface with start:number and end:number, then change the
reduce signature to (sum: number, e: EditDecision) => ... so totalCut is
computed over typed edit objects.
In @.opencode/skills/Utilities/AudioEditor/Tools/Polish.ts:
- Around line 109-111: The line that does "const uploadData = (await
uploadResponse.json()) as any;" is unsafe; replace the "as any" with a proper
typed parse/validation: define an interface (e.g., UploadResponse { id?:
string; file_id?: string; ... }) and parse the JSON into UploadResponse (or
use a runtime type guard/validator) before extracting fileId from the object;
update the code that calls uploadResponse.json() and the subsequent fileId
assignment to use the validated/typed value (referencing
uploadResponse.json(), uploadData, and fileId) so TypeScript enforces the API
shape and avoids runtime errors.
- Around line 183-185: The size computation uses
Math.round(outputData.byteLength / 1024 / 1024) which shows 0MB for small files;
update the logging to compute a more user-friendly size (use bytes->KB/MB with
decimal precision or choose KB for sizes <1MB) and print that instead of sizeMB;
change the logic around outputData.byteLength and the two console.log calls that
reference sizeMB and outFile so they display either "XXX KB" for <1MB or "Y.Y
MB" with one decimal place for >=1MB while keeping the same outFile reference.
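The size formatting could be centralized in a helper like this sketch; `formatSize` is an illustrative name:

```typescript
// Illustrative: whole KB below 1 MB, one decimal of MB otherwise.
function formatSize(byteLength: number): string {
  const kb = byteLength / 1024;
  return kb < 1024 ? `${Math.round(kb)} KB` : `${(kb / 1024).toFixed(1)} MB`;
}
```

The two console.log calls would then print `formatSize(outputData.byteLength)` alongside the same outFile reference.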
- Around line 22-51: The loadEnv function is duplicated in Polish.ts
(identical to the one in Analyze.ts); extract the logic into a shared module
(e.g., a new exported loadEnv in a utility file) and replace the local
implementations in both files with an import and call of this shared loadEnv;
make sure the new implementation keeps the same behavior (path resolution,
quote stripping, setting env vars only when unset) and export the function so
both Polish.ts and Analyze.ts can use it.
In @.opencode/skills/Utilities/AudioEditor/Tools/Transcribe.ts:
- Around line 32-33: The derivation of outFile is fragile; replace the manual
split logic with the path module: use path.extname(inputFile) to determine the
extension and path.basename(inputFile, ext) for the base name, then set
outFile = outputPath || join(dirname(inputFile),
`${basenameWithoutExt}.transcript.json`), where basenameWithoutExt is the name
without extension obtained via path.basename; take care to correctly reference
the variables (inputFile, outputPath, outFile) and functions (join, dirname,
basename, extname) from the diff.
In @.opencode/skills/Utilities/Delegation/SKILL.md:
- Around line 50-52: The code blocks without a language tag (e.g., the block
with Task(subagent_type="Engineer", isolation: "worktree", prompt="...") and
the other unmarked blocks at the listed locations) should get a language
identifier; open each affected code block and replace ``` with ```text or
```yaml depending on content (for pseudo-code `text`, for tool-parameter
syntax `yaml`), so that e.g. ```Task(subagent_type="Engineer", isolation:
"worktree", prompt="...")``` becomes
```text\nTask(subagent_type="Engineer", isolation: "worktree",
prompt="...")\n```; apply this to every unmarked block (e.g., the blocks
around Task(...) and the others in those sections).
```ts
for (const actionName of pipeline.actions) {
  console.error(`[pipeline] ${actionName}`);

  const { runAction } = await import("./runner.v2");
  const result = await runAction(actionName, data);
```
🛠️ Refactor suggestion | 🟠 Major
Dynamic import inside the loop
The dynamic import of ./runner.v2 happens on every iteration of the action loop. This is inefficient and should be done before the loop.
♻️ Move the import before the loop
export async function runPipeline(
name: string,
input: Record<string, unknown>
): Promise<{ success: boolean; output?: unknown; error?: string }> {
try {
const pipeline = await loadPipeline(name);
let data: unknown = input;
+ const { runAction } = await import("./runner.v2");
for (const actionName of pipeline.actions) {
console.error(`[pipeline] ${actionName}`);
- const { runAction } = await import("./runner.v2");
    const result = await runAction(actionName, data);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.opencode/PAI/ACTIONS/lib/pipeline-runner.ts around lines 53 - 57, The loop
currently does a dynamic import of "./runner.v2" on every iteration; move the
import and destructuring (const { runAction } = await import("./runner.v2")) to
before the for (const actionName of pipeline.actions) loop so runAction is
resolved once and reused; then inside the loop just call await
runAction(actionName, data). Ensure any await/import error handling remains
appropriate around the single import.
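The hoisting described in this prompt can be sketched as follows; runPipelineSketch and the injected importer are hypothetical stand-ins for the real runner, shown only to illustrate resolving the import once before iterating:

```typescript
// Hedged sketch: the importer is injected so the one-time resolution is
// visible; the real code would use `await import("./runner.v2")` directly.
type RunAction = (name: string, input: unknown) => Promise<{ success: boolean; output?: unknown }>;

async function runPipelineSketch(
  actions: string[],
  input: unknown,
  importRunner: () => Promise<{ runAction: RunAction }>
): Promise<unknown> {
  const { runAction } = await importRunner(); // resolved once, not per iteration
  let data: unknown = input;
  for (const actionName of actions) {
    const result = await runAction(actionName, data);
    if (!result.success) throw new Error(`Action failed: ${actionName}`);
    data = result.output;
  }
  return data;
}
```

Module loaders cache dynamic imports anyway, but hoisting makes the intent explicit and avoids repeated promise/await overhead in the hot loop.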
export async function loadAction(name: string): Promise<ActionSpec> {
  // Convert category/name to path: parse/topic -> parse/topic.action.ts
  const actionPath = join(ACTIONS_DIR, `${name}.action.ts`);

  try {
    const module = await import(actionPath);
Harden loadAction against path traversal.
name flows unchecked into the import path. Inputs such as ../lib/types escape ACTIONS_DIR and can load arbitrary modules.
🔒 Minimal hardening
export async function loadAction(name: string): Promise<ActionSpec> {
+ if (!/^[a-z0-9_-]+(?:\/[a-z0-9_-]+)*$/i.test(name)) {
+ throw new Error(`Invalid action name: ${name}`);
+ }
+
// Convert category/name to path: parse/topic -> parse/topic.action.ts
  const actionPath = join(ACTIONS_DIR, `${name}.action.ts`);
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.opencode/PAI/ACTIONS/lib/runner.ts around lines 30 - 35, The loadAction
function constructs actionPath from untrusted name and directly imports it,
enabling path-traversal; to fix, validate and canonicalize name before building
actionPath: reject names containing path separators that escape the actions tree
(e.g., '..' segments or absolute paths), allow only a safe whitelist pattern
(e.g., alphanumeric, dash, underscore and single '/' for category), then compute
the resolved path (use path.join/path.resolve) and assert the resolved path
startsWith the ACTIONS_DIR; if the check fails, throw an error and do not
import. Update references: loadAction, ACTIONS_DIR, actionPath and the dynamic
import call to use the validated/resolved path.
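A minimal sketch of the suggested guard, assuming a placeholder ACTIONS_DIR; the whitelist regex and prefix check mirror the review's proposal, not the repository's actual code:

```typescript
import { join, resolve, sep } from "node:path";

const ACTIONS_DIR = resolve("/opt/pai/actions"); // placeholder for illustration

// Validate the action name, then assert the resolved path stays inside
// ACTIONS_DIR before it is ever handed to a dynamic import.
function safeActionPath(name: string): string {
  if (!/^[a-z0-9_-]+(?:\/[a-z0-9_-]+)*$/i.test(name)) {
    throw new Error(`Invalid action name: ${name}`);
  }
  const candidate = resolve(join(ACTIONS_DIR, `${name}.action.ts`));
  if (!candidate.startsWith(ACTIONS_DIR + sep)) {
    throw new Error(`Action path escapes ACTIONS_DIR: ${name}`);
  }
  return candidate;
}
```

The regex already rejects `..` segments (dots are not in the character class); the startsWith check is defense in depth in case the whitelist is later loosened.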
# Threat Landscape Domain Template

Domain-specific configuration for the Deep Investigation workflow applied to cybersecurity threat analysis.

---
🛠️ Refactor suggestion | 🟠 Major
Add PAI metadata for skill discovery
The file lives under .opencode/skills/** but starts directly with free text. Without a USE WHEN trigger or the expected skill format, it is much harder to use for consistent discovery/application.
As per coding guidelines, "Follow PAI Skills format and USE WHEN triggers in .opencode/skills/** files".
🧰 Tools
🪛 LanguageTool
[style] ~3-~3: Consider a different adjective to strengthen your wording.
Context: ... Domain-specific configuration for the Deep Investigation workflow applied to cyber...
(DEEP_PROFOUND)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.opencode/skills/Research/Templates/ThreatLandscape.md around lines 1 - 6,
This markdown template lacks the required PAI skill metadata and a "USE WHEN"
trigger so it's not discoverable as a skill; add the standard PAI skill
frontmatter (e.g., PAI-SKILL name/version/description/inputs/outputs or the
project’s canonical metadata keys) at the top of ThreatLandscape.md and include
a clear "USE WHEN" section describing the conditions/triggers that activate this
skill; update or wrap the existing free text under the template sections (e.g.,
"Threat Landscape Domain Template") to conform to the repository's skill format
so tooling can detect and apply the skill.
ADR-009 Compliance:
- AudioEditor/Analyze.ts: Move all code to main(), process.exit only in import.meta.main
- AudioEditor/Polish.ts: Move all code to main(), process.exit only in import.meta.main

Security/Reliability Fixes (runner.ts):
- Path traversal protection: validate action names with regex, check resolved path
- Add AbortController timeout to fetch (30s default, configurable via ACTION_TIMEOUT_MS)
- Validate cloud output with action.outputSchema.parse before returning
- Fix input falsy check: use === undefined instead of !input (allows 0, false, '', null)
- Add mode validation: only accept 'local' or 'cloud', reject invalid values
- Fix process.exit(1) in import.meta.main catch block
- Fix workerUrl construction: use subdomain from env with .workers.dev suffix

Build/Config Fixes:
- package.json: Add zod ^3.25.42 dependency
- BuildOpenCode.ts: Add safeJsonParse with try-catch, fix HOME resolution with os.homedir()
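The safeJsonParse helper mentioned in this commit message could plausibly look like the following; this is a hypothetical shape inferred from the description (fallback instead of throwing on missing or invalid JSON), not the actual implementation:

```typescript
import { existsSync, readFileSync } from "node:fs";

// Hypothetical sketch: return the fallback for a missing file or a file
// whose contents fail JSON.parse, warning instead of throwing.
function safeJsonParse<T>(path: string, fallback: T): T {
  try {
    if (!existsSync(path)) return fallback;
    return JSON.parse(readFileSync(path, "utf-8")) as T;
  } catch {
    console.warn(`safeJsonParse: failed to parse ${path}, using fallback`);
    return fallback;
  }
}
```

This keeps build tooling resilient to a half-written or absent settings.json rather than crashing the build.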
Actionable comments posted: 7
🧹 Nitpick comments (4)
.opencode/PAI/Tools/BuildOpenCode.ts (2)
45-53: Use console.warn instead of console.error for a warning.
Line 47 uses console.error for a non-critical warning. Since the program continues with a default value, console.warn would be semantically more appropriate and consistent with safeJsonParse.
♻️ Suggested change
function getAlgorithmVersion(): string {
  if (!existsSync(LATEST_PATH)) {
-   console.error("⚠ PAI/Algorithm/LATEST not found, defaulting to v3.7.0");
+   console.warn("⚠ PAI/Algorithm/LATEST not found, defaulting to v3.7.0");
    return "v3.7.0";
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/PAI/Tools/BuildOpenCode.ts around lines 45 - 53, In getAlgorithmVersion(), replace the non-fatal console.error call that logs a missing LATEST_PATH with console.warn so the message is semantically a warning (since the function falls back to the default "v3.7.0"); update the call in the getAlgorithmVersion function that references LATEST_PATH to use console.warn and keep the same message text and behavior (returning "v3.7.0") to remain consistent with safeJsonParse-style warnings.
73-99: Redundant loading of the settings.
safeJsonParse(SETTINGS_PATH, {}) is called both in line 80 (via loadVariables()) and directly in line 94. For a build tool this is not critical, but with more frequent calls it could be consolidated.
♻️ Suggested optimization
export function needsRebuild(): boolean {
  if (!existsSync(OUTPUT_PATH)) return true;
  if (!existsSync(TEMPLATE_PATH)) return false; // no template = nothing to build
  const outputContent = readFileSync(OUTPUT_PATH, "utf-8");
+ const settings = safeJsonParse<Record<string, any>>(SETTINGS_PATH, {});
  const variables = loadVariables();
  // Check if any template variable appears unresolved in output
  for (const key of Object.keys(variables)) {
    if (outputContent.includes(key)) return true;
  }
  // Check if algorithm version in output matches LATEST
  const algoVersion = getAlgorithmVersion();
  const algoPathPattern = /PAI\/Algorithm\/(.+?)\.md/;
  const match = outputContent.match(algoPathPattern);
  if (match && match[1] !== algoVersion) return true;
  // Check if DA name matches settings
- const settings = safeJsonParse<Record<string, any>>(SETTINGS_PATH, {});
  const daName = settings.daidentity?.name || "Assistant";
  if (!outputContent.includes(`🗣️ ${daName}:`)) return true;
  return false;
}
Alternatively, loadVariables could be extended to return the settings as well.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/PAI/Tools/BuildOpenCode.ts around lines 73 - 99, The function needsRebuild redundantly parses SETTINGS_PATH twice (inside loadVariables() and again via safeJsonParse in needsRebuild); update the code so settings are loaded only once: either have loadVariables() return both variables and the parsed settings (so needsRebuild can destructure { variables, settings }) or add a small helper getCachedSettings() that calls safeJsonParse once and is reused; update needsRebuild to use the single parsed settings (referencing needsRebuild, loadVariables, safeJsonParse, and SETTINGS_PATH) and remove the second safeJsonParse call.
.opencode/PAI/ACTIONS/lib/runner.ts (2)
90-97: The type cast for process.env is imprecise.
process.env has the type Record<string, string | undefined>, not Record<string, string>. The cast suppresses type warnings, but accessing a non-existent environment variable still yields undefined.
♻️ Type-safe alternative
const ctx: ActionContext = {
  mode,
- env: options.env || process.env as Record<string, string>,
+ env: options.env || (process.env as Record<string, string | undefined>),
  trace: options.traceId ? {
Alternatively, the ActionContext.env interface in types.ts can be adjusted to explicitly allow undefined values.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/PAI/ACTIONS/lib/runner.ts around lines 90 - 97, the type cast of process.env to Record<string, string> is unsafe; update the initialization of the ActionContext (variable ctx) so that env either uses options.env (when set) or keeps process.env with the correct type Record<string, string | undefined>, or adjust the ActionContext interface in types.ts so that env accepts values of type Record<string, string | undefined>; concretely: change the assignment of env in the ctx object (and/or adjust ActionContext.env) instead of type-casting process.env to Record<string, string>, to correctly allow optional undefined values.
228-231: Path handling is not cross-platform.
f.replace(ACTIONS_DIR + "/", "") assumes Unix path separators. On Windows, join() may use backslashes, so the prefix would not be stripped correctly.
♻️ Platform-independent solution
+import { resolve, dirname, join, relative } from "path";
// ...
return files.map(f => {
- const relative = f.replace(ACTIONS_DIR + "/", "").replace(".action.ts", "");
- return relative;
+ const relPath = relative(ACTIONS_DIR, f).replace(".action.ts", "");
+ // Normalize to forward slashes for consistent action names
+ return relPath.replace(/\\/g, "/");
});
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/PAI/ACTIONS/lib/runner.ts around lines 228 - 231, The code in files.map uses f.replace(ACTIONS_DIR + "/", "") which assumes "/" separators and breaks on Windows; change to compute the path relative to ACTIONS_DIR using Node's path utilities (e.g., path.relative) and then strip the ".action.ts" suffix (e.g., with a /\.action\.ts$/ replace) so that the logic in files.map (and the variable relative) is platform-independent; update imports/usages where files.map, ACTIONS_DIR, and the ".action.ts" trimming occur.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.opencode/PAI/ACTIONS/lib/runner.ts:
- Around line 147-151: The timeout calculation using parseInt on
ctx.env.ACTION_TIMEOUT_MS can produce NaN and cause setTimeout to behave
incorrectly; update the logic around timeoutMs (where controller, timeoutId, and
ACTION_TIMEOUT_MS are used) to validate the parsed value and fall back to the
default 30000 when parseInt returns NaN or a non-positive value (use
Number.isFinite or isNaN checks), then use that validated timeoutMs when calling
setTimeout to abort the AbortController; keep the existing controller and
timeoutId usage but ensure the fallback prevents immediate/unpredictable aborts.
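The validated-fallback logic requested here can be sketched as a small helper; resolveTimeoutMs is a hypothetical name, with 30 000 ms assumed as the default mentioned in the review:

```typescript
// Sketch: fall back to the default when ACTION_TIMEOUT_MS is unset,
// non-numeric (parseInt -> NaN), or non-positive.
function resolveTimeoutMs(raw: string | undefined, fallback = 30_000): number {
  const parsed = raw === undefined ? NaN : parseInt(raw, 10);
  return Number.isFinite(parsed) && parsed > 0 ? parsed : fallback;
}
```

Passing the validated value to setTimeout avoids the undefined behavior of `setTimeout(fn, NaN)`, which fires immediately and would abort every request.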
In @.opencode/skills/Utilities/AudioEditor/Tools/Analyze.ts:
- Around line 300-305: The merge logic for overlapping edits stores types as a
concatenated string on prev.type, and because it merely checks
prev.type.includes("+") it stops appending new edit types after the first merge;
instead, treat prev.type as a set: split prev.type on "+" (and trim), add
edit.type if not present, then rejoin with "+" so each unique type is preserved;
update the merge block around the merged array/prev variables (where prev.type
and edit.type are used) to build the deduplicated +-joined type string.
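The set-based deduplication described above can be sketched as follows; mergeTypes is an illustrative helper, not the actual merge block in Analyze.ts:

```typescript
// Sketch: treat the "+"-joined type string as a set so repeated merges
// keep each unique edit type exactly once, preserving insertion order.
function mergeTypes(prevType: string, editType: string): string {
  const types = new Set(prevType.split("+").map((t) => t.trim()));
  types.add(editType);
  return [...types].join("+");
}
```

Unlike the includes("+") check, this keeps appending new unique types after the first merge while never duplicating an existing one.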
- Around line 264-287: Validate parsed model responses before deduplication:
after extracting edits (variable edits, type EditDecision) add strict shape and
value checks — ensure each edit has a known type (e.g., "cut", "fade" or your
allowed set), numeric start and end where end > start, confidence is a number in
[0,1], and start/end fall within the transcript duration (use the transcript
length variable or function available in this module). Reject or log any edit
that fails validation (including malformed types or out-of-range timestamps) and
skip adding it to allEdits; keep the JSON.parse try/catch but expand the catch
to log the raw text and parsing error for debugging. Ensure the validation
happens before the duplicate check and before pushing into allEdits so
edits.json cannot be polluted by invalid entries.
- Around line 220-293: The loop over windows currently logs API/parse errors but
continues so downstream code still reports "Saved..." even if some windows
failed; add a boolean flag (e.g., hadWindowError) outside the for-loop and set
it to true inside the response.ok failure branch, inside the JSON parse catch,
and inside the outer try/catch where errors are logged (refer to the response.ok
branch, the JSON parse try/catch around edits, and the outer try/catch that
surrounds the fetch); after the for-loop, check hadWindowError and if true,
abort the save flow (do not write allEdits) and exit non-zero or throw to signal
failure so incomplete analysis isn't treated as a successful run (ensure
references to allEdits, buildWindow, WINDOW_SIZE, OVERLAP remain unchanged).
- Around line 96-97: The computed outFile uses two unconditional .replace calls
on inputFile which causes "episode.transcript.json" to become
"episode.edits.edits.json"; change the logic so only one replacement is applied:
either replace /\.transcript\.json$/ with ".edits.json" and if that didn't match
then replace /\.json$/ with ".edits.json", or use a single regex that matches
either /\.transcript\.json$|\.json$/ and replaces with ".edits.json". Update the
assignment to outFile (the expression that currently chains two replace calls on
inputFile) to use this conditional or combined-regex approach so ".edits" is
appended only once.
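The combined-regex variant can be sketched as a one-liner; deriveEditsPath is a hypothetical name:

```typescript
// Sketch: a single replace with an alternation, so ".edits" is appended
// exactly once whether or not the input already carries ".transcript".
function deriveEditsPath(inputFile: string): string {
  return inputFile.replace(/\.transcript\.json$|\.json$/, ".edits.json");
}
```

Anchoring both alternatives to the end of the string prevents accidental replacements mid-path.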
In @.opencode/skills/Utilities/AudioEditor/Tools/Polish.ts:
- Around line 103-109: The upload request (uploadResponse = await
fetch(`${API_BASE}/upload`, ...)) and the other fetch calls must be given an
AbortController and a hard timeout: create an AbortController before the
fetch, pass controller.signal in the fetch options, start a setTimeout that
calls controller.abort() after e.g. X ms, and clear the timeout with
clearTimeout once the request completes; catch aborts/errors explicitly and
throw or handle them appropriately so the pipeline does not block. Make sure
to apply the same pattern to the other fetch calls in this module (the
edit/status/download requests) and not to leave any timers pending.
- Around line 86-90: The current assignment of outFile (variables: ext, base,
dir, outFile, outputPath, audioFile, join) can overwrite the input file if
outputPath points to the same path as audioFile; change the logic so that
before using outputPath you compare the paths (e.g., via resolve) and, if they
are identical, either generate an alternative name (e.g.,
join(dir, `${base}_polished${ext}`)) or throw an error and prompt the user to
supply a different outputPath; apply the same check everywhere outFile is set
(including the equivalents of the previously mentioned lines 200–201).
---
Nitpick comments:
In @.opencode/PAI/ACTIONS/lib/runner.ts:
- Around line 90-97: The type cast of process.env to Record<string, string> is
unsafe; update the initialization of the ActionContext (variable ctx) so that
env either uses options.env (when set) or keeps process.env with the correct
type Record<string, string | undefined>, or adjust the ActionContext interface
in types.ts so that env accepts values of type
Record<string, string | undefined>; concretely: change the assignment of env
in the ctx object (and/or adjust ActionContext.env) instead of type-casting
process.env to Record<string, string>, to correctly allow optional undefined
values.
- Around line 228-231: The code in files.map uses f.replace(ACTIONS_DIR + "/",
"") which assumes "/" separators and breaks on Windows; change to compute the
path relative to ACTIONS_DIR using Node's path utilities (e.g., path.relative)
and then strip the ".action.ts" suffix (e.g., with a /\.action\.ts$/ replace) so
that the logic in files.map (and the variable relative) is platform-independent;
update imports/usages where files.map, ACTIONS_DIR, and the ".action.ts"
trimming occur.
In @.opencode/PAI/Tools/BuildOpenCode.ts:
- Around line 45-53: In getAlgorithmVersion(), replace the non-fatal
console.error call that logs a missing LATEST_PATH with console.warn so the
message is semantically a warning (since the function falls back to the default
"v3.7.0"); update the call in the getAlgorithmVersion function that references
LATEST_PATH to use console.warn and keep the same message text and behavior
(returning "v3.7.0") to remain consistent with safeJsonParse-style warnings.
- Around line 73-99: The function needsRebuild redundantly parses SETTINGS_PATH
twice (inside loadVariables() and again via safeJsonParse in needsRebuild);
update the code so settings are loaded only once: either have loadVariables()
return both variables and the parsed settings (so needsRebuild can destructure {
variables, settings }) or add a small helper getCachedSettings() that calls
safeJsonParse once and is reused; update needsRebuild to use the single parsed
settings (referencing needsRebuild, loadVariables, safeJsonParse, and
SETTINGS_PATH) and remove the second safeJsonParse call.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 98cdf0b0-0dec-4246-8b79-cdb8d7fa7bae
📒 Files selected for processing (5)
.opencode/PAI/ACTIONS/lib/runner.ts
.opencode/PAI/Tools/BuildOpenCode.ts
.opencode/skills/Utilities/AudioEditor/Tools/Analyze.ts
.opencode/skills/Utilities/AudioEditor/Tools/Polish.ts
package.json
**Analyze.ts (AudioEditor skill):**
- Fix outFile path generation (remove double .replace bug)
- Add hadWindowError flag to abort save on window failures
- Add strict validation for parsed model responses (type, bounds, confidence)
- Fix merge logic for overlapping edits with type deduplication using Set

**Polish.ts (AudioEditor skill):**
- Add AbortController timeout to ALL ElevenLabs API calls
- Add check to prevent outputPath from overwriting input file

**runner.ts (ACTIONS lib):**
- Add NaN check for ACTION_TIMEOUT_MS parseInt
- Fix process.env type cast (ActionContext.env allows undefined)
- Use path.relative instead of string replace for Windows compatibility

**BuildOpenCode.ts:**
- Change console.error to console.warn for missing LATEST file
- Remove redundant settings.json parsing in needsRebuild()

**types.ts (ACTIONS lib):**
- Change ActionContext.env type to Record<string, string | undefined>
Actionable comments posted: 1
🧹 Nitpick comments (2)
.opencode/PAI/ACTIONS/lib/runner.ts (1)
142-145: Worker URL construction does not match the .workers.dev pattern.
The URL is constructed as https://pai-${workerName}.${subdomain}.workers.dev. If CF_ACCOUNT_SUBDOMAIN is set to a value like "myaccount", this yields https://pai-foo.myaccount.workers.dev. That is technically correct for Cloudflare Workers subdomains. However, there is no documentation clarifying which value is expected for CF_ACCOUNT_SUBDOMAIN (e.g., the account-subdomain part before .workers.dev).
📝 Documentation suggestion
  // Worker URL pattern: pai-{category}-{name}.{subdomain}.workers.dev
+ // CF_ACCOUNT_SUBDOMAIN should be your Cloudflare account subdomain
+ // (e.g., "myaccount" for myaccount.workers.dev, default: "workers")
  const workerName = name.replace("/", "-");
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/PAI/ACTIONS/lib/runner.ts around lines 142 - 145, The worker URL construction in runner.ts uses workerName, subdomain and workerUrl and assumes CF_ACCOUNT_SUBDOMAIN contains the subdomain-prefix before ".workers.dev" (e.g. "myaccount"), but this is undocumented and can lead to confusion; update the code and docs by validating and normalizing CF_ACCOUNT_SUBDOMAIN (strip any trailing ".workers.dev" or full hostnames), add a short inline comment above the subdomain calculation explaining the expected value for CF_ACCOUNT_SUBDOMAIN, and update any README or the module's docstring to state that CF_ACCOUNT_SUBDOMAIN should be the account subdomain (the part before ".workers.dev"); also consider fallback behavior (keep current 'workers' default) and log a warning when an unexpected format is detected to help debugging.
.opencode/skills/Utilities/AudioEditor/Tools/Polish.ts (1)
203-206: Status-check errors are processed further instead of aborting.
On failed status checks (lines 203-206) the error is only logged and the polling loop continues. That is sensible for transient network errors, but with repeated failures the loop could run for 30 minutes even though the job has actually failed.
💡 Optional: error counter for more robust error handling
+ let consecutiveStatusErrors = 0;
+ const MAX_CONSECUTIVE_ERRORS = 5;
+
  for (let i = 0; i < MAX_POLLS; i++) {
    await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL));
    const statusResponse = await fetchWithTimeout(`${API_BASE}/edit/${editId}`, {
      method: "GET",
      headers: { "X-API-Key": apiKey },
    }, STATUS_TIMEOUT);
    if (!statusResponse.ok) {
      console.error(`Status check failed: ${statusResponse.status}`);
+     consecutiveStatusErrors++;
+     if (consecutiveStatusErrors >= MAX_CONSECUTIVE_ERRORS) {
+       throw new Error(`Status check failed ${MAX_CONSECUTIVE_ERRORS} times consecutively`);
+     }
      continue;
    }
+   consecutiveStatusErrors = 0;
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.opencode/skills/Utilities/AudioEditor/Tools/Polish.ts around lines 203 - 206, The polling loop currently logs failed status checks (statusResponse.ok) and continues, which can let a permanently failed job run the full timeout; modify the loop around statusResponse.ok to track consecutive failures (e.g., a local failureCount variable), increment it on each non-ok response, reset it on a successful check, and when failureCount exceeds a small threshold (e.g., 3) abort the loop by throwing an error or returning a failure result instead of continue; also include the statusResponse.status and attempt count in the console.error for better diagnostics and ensure you still short-circuit immediately for client errors (4xx) if appropriate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.opencode/skills/Utilities/AudioEditor/Tools/Analyze.ts:
- Around line 248-266: The fetch to Anthropic in Analyze.ts has no timeout; wrap
the request with an AbortController: create a controller and a timeout (e.g.,
const timeoutMs), pass controller.signal into the fetch options used to produce
response, use setTimeout(() => controller.abort(), timeoutMs) before the fetch
and clearTimeout when the fetch resolves, and update the surrounding error
handling (where response is awaited) to catch aborts (DOMException/AbortError)
and surface a clear timeout error instead of leaving the pipeline hanging.
---
Nitpick comments:
In @.opencode/PAI/ACTIONS/lib/runner.ts:
- Around line 142-145: The worker URL construction in runner.ts uses workerName,
subdomain and workerUrl and assumes CF_ACCOUNT_SUBDOMAIN contains the
subdomain-prefix before ".workers.dev" (e.g. "myaccount"), but this is
undocumented and can lead to confusion; update the code and docs by validating
and normalizing CF_ACCOUNT_SUBDOMAIN (strip any trailing ".workers.dev" or full
hostnames), add a short inline comment above the subdomain calculation
explaining the expected value for CF_ACCOUNT_SUBDOMAIN, and update any README or
the module's docstring to state that CF_ACCOUNT_SUBDOMAIN should be the account
subdomain (the part before ".workers.dev"); also consider fallback behavior
(keep current 'workers' default) and log a warning when an unexpected format is
detected to help debugging.
In @.opencode/skills/Utilities/AudioEditor/Tools/Polish.ts:
- Around line 203-206: The polling loop currently logs failed status checks
(statusResponse.ok) and continues, which can let a permanently failed job run
the full timeout; modify the loop around statusResponse.ok to track consecutive
failures (e.g., a local failureCount variable), increment it on each non-ok
response, reset it on a successful check, and when failureCount exceeds a small
threshold (e.g., 3) abort the loop by throwing an error or returning a failure
result instead of continue; also include the statusResponse.status and attempt
count in the console.error for better diagnostics and ensure you still
short-circuit immediately for client errors (4xx) if appropriate.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 605a964f-ce29-4dd4-a161-4386ba31235d
📒 Files selected for processing (5)
.opencode/PAI/ACTIONS/lib/runner.ts
.opencode/PAI/ACTIONS/lib/types.ts
.opencode/PAI/Tools/BuildOpenCode.ts
.opencode/skills/Utilities/AudioEditor/Tools/Analyze.ts
.opencode/skills/Utilities/AudioEditor/Tools/Polish.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- .opencode/PAI/Tools/BuildOpenCode.ts
const response = await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-api-key": apiKey,
    "anthropic-version": "2023-06-01",
  },
  body: JSON.stringify({
    model: "claude-sonnet-4-20250514",
    max_tokens: 4096,
    system: systemPrompt,
    messages: [
      {
        role: "user",
        content: `Analyze this transcript section and return the JSON array of edits:\n\n${windowText}`,
      },
    ],
  }),
});
No timeout for Anthropic API calls.
The fetch call to the Anthropic API has no hard timeout. On network problems or API latency the call can block indefinitely and leave the pipeline hanging.
💡 Suggestion: add an AbortController with a timeout
+ const API_TIMEOUT_MS = 60000; // 60 seconds for LLM response
+ const controller = new AbortController();
+ const timeoutId = setTimeout(() => controller.abort(), API_TIMEOUT_MS);
+
try {
const response = await fetch("https://api.anthropic.com/v1/messages", {
method: "POST",
headers: {
"Content-Type": "application/json",
"x-api-key": apiKey,
"anthropic-version": "2023-06-01",
},
body: JSON.stringify({
model: "claude-sonnet-4-20250514",
max_tokens: 4096,
system: systemPrompt,
messages: [
{
role: "user",
content: `Analyze this transcript section and return the JSON array of edits:\n\n${windowText}`,
},
],
}),
+ signal: controller.signal,
});
+
+ clearTimeout(timeoutId);
if (!response.ok) {🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.opencode/skills/Utilities/AudioEditor/Tools/Analyze.ts around lines 248 -
266, The fetch to Anthropic in Analyze.ts has no timeout; wrap the request with
an AbortController: create a controller and a timeout (e.g., const timeoutMs),
pass controller.signal into the fetch options used to produce response, use
setTimeout(() => controller.abort(), timeoutMs) before the fetch and
clearTimeout when the fetch resolves, and update the surrounding error handling
(where response is awaited) to catch aborts (DOMException/AbortError) and
surface a clear timeout error instead of leaving the pipeline hanging.
Move AudioEditor from Utilities/AudioEditor (3 levels deep) to AudioEditor/ (2 levels) to fix skill validation error. The skill nesting limit is 2 levels (Category/Skill), and Utilities/AudioEditor/Tools was being detected as 3 levels. This fixes the CI validation failure for WP-C.
Move document processing skills from nested structure:
Utilities/Documents/Xlsx → Utilities/Xlsx
Utilities/Documents/Pdf → Utilities/Pdf
Utilities/Documents/Docx → Utilities/Docx
Utilities/Documents/Pptx → Utilities/Pptx

This fixes CI validation errors for 3-level nesting. Documents skill remains as router with Workflows/.

Fixes 4 CI validation errors:
- Utilities/Documents/Xlsx too deep nesting
- Utilities/Documents/Pdf too deep nesting
- Utilities/Documents/Docx too deep nesting
- Utilities/Documents/Pptx too deep nesting
Replace all JWT token and API key examples in documentation files with [EXAMPLE_*] placeholders to prevent false positives in CI secret scanning.

Files updated:
- VulnerabilityAnalysisGemini3.md: JWT examples
- FfufGuide.md: JWT example
- REQUEST_TEMPLATES.md: API key and JWT examples
- API-TOOLS-GUIDE.md: API key examples
- OsintTools/README.md: password example
- write_nuclei_template_rule/system.md: JWT example

This fixes CI failures caused by the secret scan detecting example tokens in documentation as potential hardcoded secrets.
Fix remaining CI validation errors:
1. Telos/Telos/SKILL.md: Rename skill to 'TelosCore' to avoid duplicate with parent category 'Telos'
2. USMetrics/USMetrics/SKILL.md: Rename skill to 'USMetricsCore' to avoid duplicate with parent category 'USMetrics'
3. PAI/SKILL.md: Move frontmatter (---) to top of file before HTML comment. Validation requires frontmatter at file start.
4. Regenerate skill-index.json with corrected structure.

Validation result: 0 errors, 3 warnings (non-blocking)

Fixes CI failures in PR #45.
The find command was detecting test files in node_modules (zod package), causing the test job to run bun test, which then failed because there are no actual project tests.

Fix: Add grep -v node_modules to filter out dependency test files.

Error was:
- find: 'standard output': Broken pipe
- bun test: No tests found! (exit code 1)
Summary
WP-C implementation — porting missing content from v4.0.3 upstream.
Changes
Notes
Ready for WP-D after merge.
Summary by CodeRabbit
New Features
Documentation
Other