diff --git a/.claude/prompts/nl-unity-claude-tests-mini.md b/.claude/prompts/nl-unity-claude-tests-mini.md deleted file mode 100644 index 35900b71..00000000 --- a/.claude/prompts/nl-unity-claude-tests-mini.md +++ /dev/null @@ -1,45 +0,0 @@ -# Unity NL Editing Suite — Natural Mode - -You are running inside CI for the **unity-mcp** repository. Your task is to demonstrate end‑to‑end **natural‑language code editing** on a representative Unity C# script using whatever capabilities and servers are already available in this session. Work autonomously. Do not ask the user for input. Do NOT spawn subagents, as they will not have access to the mcp server process on the top-level agent. - -## Mission -1) **Discover capabilities.** Quietly inspect the tools and any connected servers that are available to you at session start. If the server offers a primer or capabilities resource, read it before acting. -2) **Choose a target file.** Prefer `TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs` if it exists; otherwise choose a simple, safe C# script under `TestProjects/UnityMCPTests/Assets/`. -3) **Perform a small set of realistic edits** using minimal, precise changes (not full-file rewrites). Examples of small edits you may choose from (pick 3–6 total): - - Insert a new, small helper method (e.g., a logger or counter) in a sensible location. - - Add a short anchor comment near a key method (e.g., above `Update()`), then add or modify a few lines nearby. - - Append an end‑of‑class utility method (e.g., formatting or clamping helper). - - Make a safe, localized tweak to an existing method body (e.g., add a guard or a simple accumulator). - - Optionally include one idempotency/no‑op check (re‑apply an edit and confirm nothing breaks). -4) **Validate your edits.** Re‑read the modified regions and verify the changes exist, compile‑risk is low, and surrounding structure remains intact. -5) **Report results.** Produce both: - - A JUnit XML at `reports/junit-nl-suite.xml` containing a single suite named `UnityMCP.NL` with one test case per sub‑test you executed (mark pass/fail and include helpful failure text). - - A summary markdown at `reports/junit-nl-suite.md` that explains what you attempted, what succeeded/failed, and any follow‑ups you would try. -6) **Be gentle and reversible.** Prefer targeted, minimal edits; avoid wide refactors or non‑deterministic changes. - -## Assumptions & Hints (non‑prescriptive) -- A Unity‑oriented MCP server is expected to be connected. If a server‑provided **primer/capabilities** resource exists, read it first. If no primer is available, infer capabilities from your visible tools in the session. -- In CI/headless mode, when calling `mcp__unity__list_resources` or `mcp__unity__read_resource`, include: - - `ctx: {}` - - `project_root: "TestProjects/UnityMCPTests"` (the server will also accept the absolute path passed via env) - Example: `{ "ctx": {}, "under": "Assets/Scripts", "pattern": "*.cs", "project_root": "TestProjects/UnityMCPTests" }` -- If the preferred file isn’t present, locate a fallback C# file with simple, local methods you can edit safely. -- If a compile command is available in this environment, you may optionally trigger it; if not, rely on structural checks and localized validation. - -## Output Requirements (match NL suite conventions) -- JUnit XML at `$JUNIT_OUT` if set, otherwise `reports/junit-nl-suite.xml`. - - Single suite named `UnityMCP.NL`, one `` per sub‑test; include `` on errors. 
-- Markdown at `$MD_OUT` if set, otherwise `reports/junit-nl-suite.md`. - -Constraints (for fast publishing): -- Log allowed tools once as a single line: `AllowedTools: ...`. -- For every edit: Read → Write (with precondition hash) → Re‑read; on `{status:"stale_file"}` retry once after re‑read. -- Keep evidence to ±20–40 lines windows; cap unified diffs to 300 lines and note truncation. -- End `` with `VERDICT: PASS` or `VERDICT: FAIL`. - -## Guardrails -- No destructive operations. Keep changes minimal and well‑scoped. -- Don’t leak secrets or environment details beyond what’s needed in the reports. -- Work without user interaction; do not prompt for approval mid‑flow. - -> If capabilities discovery fails, still produce the two reports that clearly explain why you could not proceed and what evidence you gathered. diff --git a/.claude/prompts/nl-unity-suite-full.md b/.claude/prompts/nl-unity-suite-full.md deleted file mode 100644 index 1b46127a..00000000 --- a/.claude/prompts/nl-unity-suite-full.md +++ /dev/null @@ -1,234 +0,0 @@ -# Unity NL/T Editing Suite — CI Agent Contract - -You are running inside CI for the `unity-mcp` repo. Use only the tools allowed by the workflow. Work autonomously; do not prompt the user. Do NOT spawn subagents. - -**Print this once, verbatim, early in the run:** -AllowedTools: Write,Bash(printf:*),Bash(echo:*),Bash(scripts/nlt-revert.sh:*),mcp__unity__manage_editor,mcp__unity__list_resources,mcp__unity__read_resource,mcp__unity__apply_text_edits,mcp__unity__script_apply_edits,mcp__unity__validate_script,mcp__unity__find_in_file,mcp__unity__read_console,mcp__unity__get_sha - ---- - -## Mission -1) Pick target file (prefer): - - `unity://path/Assets/Scripts/LongUnityScriptClaudeTest.cs` -2) Execute **all** NL/T tests in order using minimal, precise edits. -3) Validate each edit with `mcp__unity__validate_script(level:"standard")`. -4) **Report**: write one `` XML fragment per test to `reports/_results.xml`. Do **not** read or edit `$JUNIT_OUT`. -5) **Restore** the file after each test using the OS‑level helper (fast), not a full‑file text write. - ---- - -## Environment & Paths (CI) -- Always pass: `project_root: "TestProjects/UnityMCPTests"` and `ctx: {}` on list/read/edit/validate. -- **Canonical URIs only**: - - Primary: `unity://path/Assets/...` (never embed `project_root` in the URI) - - Relative (when supported): `Assets/...` -- File paths for the helper script are workspace‑relative: - - `TestProjects/UnityMCPTests/Assets/...` - -CI provides: -- `$JUNIT_OUT=reports/junit-nl-suite.xml` (pre‑created; leave alone) -- `$MD_OUT=reports/junit-nl-suite.md` (synthesized from JUnit) -- Helper script: `scripts/nlt-revert.sh` (snapshot/restore) - ---- - -## Tool Mapping -- **Anchors/regex/structured**: `mcp__unity__script_apply_edits` - - Allowed ops: `anchor_insert`, `replace_range`, `regex_replace` (no overlapping ranges within a single call) -- **Precise ranges / atomic batch**: `mcp__unity__apply_text_edits` (non‑overlapping ranges) - - Multi‑span batches are computed from the same fresh read and sent atomically by default. - - Prefer `options.applyMode:"atomic"` when passing options for multiple spans; for single‑span, sequential is fine. -- **Hash-only**: `mcp__unity__get_sha` — returns `{sha256,lengthBytes,lastModifiedUtc}` without file body -- **Validation**: `mcp__unity__validate_script(level:"standard")` - - For edits, you may pass `options.validate`: - - `standard` (default): full‑file delimiter balance checks. 
- - `relaxed`: scoped checks for interior, non‑structural text edits; do not use for header/signature/brace‑touching changes. -- **Reporting**: `Write` small XML fragments to `reports/*_results.xml` -- **Editor state/flush**: `mcp__unity__manage_editor` (use sparingly; no project mutations) -- **Console readback**: `mcp__unity__read_console` (INFO capture only; do not assert in place of `validate_script`) -- **Snapshot/Restore**: `Bash(scripts/nlt-revert.sh:*)` - - For `script_apply_edits`: use `name` + workspace‑relative `path` only (e.g., `name="LongUnityScriptClaudeTest"`, `path="Assets/Scripts"`). Do not pass `unity://...` URIs as `path`. - - For `apply_text_edits` / `read_resource`: use the URI form only (e.g., `uri="unity://path/Assets/Scripts/LongUnityScriptClaudeTest.cs"`). Do not concatenate `Assets/` with a `unity://...` URI. - - Never call generic Bash like `mkdir`; the revert helper creates needed directories. Use only `scripts/nlt-revert.sh` for snapshot/restore. - - If you believe a directory is missing, you are mistaken: the workflow pre-creates it and the snapshot helper creates it if needed. Do not attempt any Bash other than scripts/nlt-revert.sh:*. - -### Structured edit ops (required usage) - -# Insert a helper RIGHT BEFORE the final class brace (NL‑3, T‑D) -1) Prefer `script_apply_edits` with a regex capture on the final closing brace: -```json -{"op":"regex_replace", - "pattern":"(?s)(\\r?\\n\\s*\\})\\s*$", - "replacement":"\\n // Tail test A\\n // Tail test B\\n // Tail test C\\1"} - -2) If the server returns `unsupported` (op not available) or `missing_field` (op‑specific), FALL BACK to - `apply_text_edits`: - - Find the last `}` in the file (class closing brace) by scanning from end. - - Insert the three comment lines immediately before that index with one non‑overlapping range. - -# Insert after GetCurrentTarget (T‑A/T‑E) -- Use `script_apply_edits` with: -```json -{"op":"anchor_insert","afterMethodName":"GetCurrentTarget","text":"private int __TempHelper(int a,int b)=>a+b;\\n"} -``` - -# Delete the temporary helper (T‑A/T‑E) -- Prefer structured delete: - - Use `script_apply_edits` with `{ "op":"delete_method", "className":"LongUnityScriptClaudeTest", "methodName":"PrintSeries" }` (or `__TempHelper` for T‑A). -- If structured delete is unavailable, fall back to `apply_text_edits` with a single `replace_range` spanning the exact method block (bounds computed from a fresh read); avoid whole‑file regex deletes. - -# T‑B (replace method body) -- Use `mcp__unity__apply_text_edits` with a single `replace_range` strictly inside the `HasTarget` braces. -- Compute start/end from a fresh `read_resource` at test start. Do not edit signature or header. -- On `{status:"stale_file"}` retry once with the server-provided hash; if absent, re-read once and retry. -- On `bad_request`: write the testcase with ``, restore, and continue to next test. -- On `missing_field`: FALL BACK per above; if the fallback also returns `unsupported` or `bad_request`, then fail as above. -> Don’t use `mcp__unity__create_script`. Avoid the header/`using` region entirely. - -Span formats for `apply_text_edits`: -- Prefer LSP ranges (0‑based): `{ "range": { "start": {"line": L, "character": C}, "end": {…} }, "newText": "…" }` -- Explicit fields are 1‑based: `{ "startLine": L1, "startCol": C1, "endLine": L2, "endCol": C2, "newText": "…" }` -- SDK preflights overlap after normalization; overlapping non‑zero spans → `{status:"overlap"}` with conflicts and no file mutation. 
-- Optional debug: pass `strict:true` to reject explicit 0‑based fields (else they are normalized and a warning is emitted). -- Apply mode guidance: router defaults to atomic for multi‑span; you can explicitly set `options.applyMode` if needed. - ---- - -## Output Rules (JUnit fragments only) -- For each test, create **one** file: `reports/_results.xml` containing exactly a single ` ... `. - Put human-readable lines (PLAN/PROGRESS/evidence) **inside** ``. - - If content contains `]]>`, split CDATA: replace `]]>` with `]]]]>`. -- Evidence windows only (±20–40 lines). If showing a unified diff, cap at 100 lines and note truncation. -- **Never** open/patch `$JUNIT_OUT` or `$MD_OUT`; CI merges fragments and synthesizes Markdown. - - Write destinations must match: `^reports/[A-Za-z0-9._-]+_results\.xml$` - - Snapshot files must live under `reports/_snapshots/` - - Reject absolute paths and any path containing `..` - - Reject control characters and line breaks in filenames; enforce UTF‑8 - - Cap basename length to ≤64 chars; cap any path segment to ≤100 and total path length to ≤255 - - Bash(printf|echo) must write to stdout only. Do not use shell redirection, here‑docs, or `tee` to create/modify files. The only allowed FS mutation is via `scripts/nlt-revert.sh`. - -**Example fragment** -```xml - - -... evidence windows ... -VERDICT: PASS -]]> - - -``` - -Note: Emit the PLAN line only in NL‑0 (do not repeat it for later tests). - - -### Fast Restore Strategy (OS‑level) - -- Snapshot once at NL‑0, then restore after each test via the helper. -- Snapshot (once after confirming the target): - ```bash - scripts/nlt-revert.sh snapshot "TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs" "reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline" - ``` -- Log `snapshot_sha=...` printed by the script. -- Restore (after each mutating test): - ```bash - scripts/nlt-revert.sh restore "TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs" "reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline" - ``` -- Then `read_resource` to confirm and (optionally) `validate_script(level:"standard")`. -- If the helper fails: fall back once to a guarded full‑file restore using the baseline bytes; then continue. - -### Guarded Write Pattern (for edits, not restores) - -- Before any mutation: `res = mcp__unity__read_resource(uri)`; `pre_sha = sha256(res.bytes)`. -- Write with `precondition_sha256 = pre_sha` on `apply_text_edits`/`script_apply_edits`. -- To compute `pre_sha` without reading file contents, you may instead call `mcp__unity__get_sha(uri).sha256`. -- On `{status:"stale_file"}`: - - Retry once using the server-provided hash (e.g., `data.current_sha256` or `data.expected_sha256`, per API schema). - - If absent, one re-read then a final retry. No loops. -- After success: immediately re-read via `res2 = mcp__unity__read_resource(uri)` and set `pre_sha = sha256(res2.bytes)` before any further edits in the same test. -- Prefer anchors (`script_apply_edits`) for end-of-class / above-method insertions. Keep edits inside method bodies. Avoid header/using. - -**On non‑JSON/transport errors (timeout, EOF, connection closed):** -- Write `reports/_results.xml` with a `` that includes a `` or `` node capturing the error text. -- Run the OS restore via `scripts/nlt-revert.sh restore …`. -- Continue to the next test (do not abort). 
- -**If any write returns `bad_request`, or `unsupported` after a fallback attempt:** -- Write `reports/_results.xml` with a `` that includes a `` node capturing the server error, include evidence, and end with `VERDICT: FAIL`. -- Run `scripts/nlt-revert.sh restore ...` and continue to the next test. -### Execution Order (fixed) - -- Run exactly: NL-0, NL-1, NL-2, NL-3, NL-4, T-A, T-B, T-C, T-D, T-E, T-F, T-G, T-H, T-I, T-J (15 total). -- Before NL-1..T-J: Bash(scripts/nlt-revert.sh:restore "" "reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline") IF the baseline exists; skip for NL-0. -- NL-0 must include the PLAN line (len=15). -- After each testcase, include `PROGRESS: /15 completed`. - - -### Test Specs (concise) - -- NL‑0. Sanity reads — Tail ~120; ±40 around `Update()`. Then snapshot via helper. -- NL‑1. Replace/insert/delete — `HasTarget → return currentTarget != null;`; insert `PrintSeries()` after `GetCurrentTarget` logging "1,2,3"; verify; delete `PrintSeries()`; restore. -- NL‑2. Anchor comment — Insert `// Build marker OK` above `public void Update(...)`; restore. -- NL‑3. End‑of‑class — Insert `// Tail test A/B/C` (3 lines) before final brace; restore. -- NL‑4. Compile trigger — Record INFO only. - -### T‑A. Anchor insert (text path) — Insert helper after `GetCurrentTarget`; verify; delete via `regex_replace`; restore. -### T‑B. Replace body — Single `replace_range` inside `HasTarget`; restore. -- Options: pass {"validate":"relaxed"} for interior one-line edits. -### T‑C. Header/region preservation — Edit interior of `ApplyBlend`; preserve signature/docs/regions; restore. -- Options: pass {"validate":"relaxed"} for interior one-line edits. -### T‑D. End‑of‑class (anchor) — Insert helper before final brace; remove; restore. -### T‑E. Lifecycle — Insert → update → delete via regex; restore. -### T‑F. Atomic batch — One `mcp__unity__apply_text_edits` call (text ranges only) - - Compute all three edits from the **same fresh read**: - 1) Two small interior `replace_range` tweaks. - 2) One **end‑of‑class insertion**: find the **index of the final `}`** for the class; create a zero‑width range `[idx, idx)` and set `replacement` to the 3‑line comment block. - - Send all three ranges in **one call**, sorted **descending by start index** to avoid offset drift. - - Expect all‑or‑nothing semantics; on `{status:"overlap"}` or `{status:"bad_request"}`, write the testcase fragment with ``, **restore**, and continue. - - Options: pass {"applyMode":"atomic"} to enforce all‑or‑nothing. -- T‑G. Path normalization — Make the same edit with `unity://path/Assets/...` then `Assets/...`. Without refreshing `precondition_sha256`, the second attempt returns `{stale_file}`; retry with the server-provided hash to confirm both forms resolve to the same file. - -### T-H. Validation (standard) -- Restore baseline (helper call above). -- Perform a harmless interior tweak (or none), then MUST call: - mcp__unity__validate_script(level:"standard") -- Write the validator output to system-out; VERDICT: PASS if standard is clean, else include with the validator message and continue. - -### T-I. Failure surfaces (expected) -- Restore baseline. -- (1) OVERLAP: - * Fresh read of file; compute two interior ranges that overlap inside HasTarget. - * Prefer LSP ranges (0‑based) or explicit 1‑based fields; ensure both spans come from the same snapshot. - * Single mcp__unity__apply_text_edits call with both ranges. - * Expect `{status:"overlap"}` (SDK preflight) → record as PASS; else FAIL. Restore. 
-- (2) STALE_FILE: - * Fresh read → pre_sha. - * Make a tiny legit edit with pre_sha; success. - * Attempt another edit reusing the OLD pre_sha. - * Expect {status:"stale_file"} → record as PASS; else FAIL. Re-read to refresh, restore. - -### Per‑test error handling and recovery -- For each test (NL‑0..T‑J), use a try/finally pattern: - - Always write a testcase fragment and perform restore in finally, even when tools return error payloads. - - try: run the test steps; always write `reports/_results.xml` with PASS/FAIL/ERROR - - finally: run Bash(scripts/nlt-revert.sh:restore …baseline) to restore the target file -- On any transport/JSON/tool exception: - - catch and write a `` fragment with an `` node (include the message), then proceed to the next test. -- After NL‑4 completes, proceed directly to T‑A regardless of any earlier validator warnings (do not abort the run). -- (3) USING_GUARD (optional): - * Attempt a 1-line insert above the first 'using'. - * Expect {status:"using_guard"} → record as PASS; else note 'not emitted'. Restore. - -### T-J. Idempotency -- Restore baseline. -- Repeat a replace_range twice (second call may be noop). Validate standard after each. -- Insert or ensure a tiny comment, then delete it twice (second delete may be noop). -- Restore and PASS unless an error/structural break occurred. - - -### Status & Reporting - -- Safeguard statuses are non‑fatal; record and continue. -- End each testcase `` with `VERDICT: PASS` or `VERDICT: FAIL`. \ No newline at end of file diff --git a/.claude/prompts/nl-unity-suite-full-additive.md b/.claude/prompts/nl-unity-suite-nl.md similarity index 57% rename from .claude/prompts/nl-unity-suite-full-additive.md rename to .claude/prompts/nl-unity-suite-nl.md index f4c65fe6..1064e4d3 100644 --- a/.claude/prompts/nl-unity-suite-full-additive.md +++ b/.claude/prompts/nl-unity-suite-nl.md @@ -1,4 +1,4 @@ -# Unity NL/T Editing Suite — Additive Test Design +# Unity NL Editing Suite — Additive Test Design You are running inside CI for the `unity-mcp` repo. Use only the tools allowed by the workflow. Work autonomously; do not prompt the user. Do NOT spawn subagents. @@ -10,10 +10,28 @@ AllowedTools: Write,mcp__unity__manage_editor,mcp__unity__list_resources,mcp__un ## Mission 1) Pick target file (prefer): - `unity://path/Assets/Scripts/LongUnityScriptClaudeTest.cs` -2) Execute **all** NL/T tests in order using minimal, precise edits that **build on each other**. +2) Execute NL tests NL-0..NL-4 in order using minimal, precise edits that build on each other. 3) Validate each edit with `mcp__unity__validate_script(level:"standard")`. 4) **Report**: write one `` XML fragment per test to `reports/_results.xml`. Do **not** read or edit `$JUNIT_OUT`. + +**CRITICAL XML FORMAT REQUIREMENTS:** +- Each file must contain EXACTLY one `` root element +- NO prologue, epilogue, code fences, or extra characters +- NO markdown formatting or explanations outside the XML +- Use this exact format: + +```xml + + + +``` + +- If test fails, include: `` +- TESTID must be one of: NL-0, NL-1, NL-2, NL-3, NL-4 5) **NO RESTORATION** - tests build additively on previous state. +6) **STRICT FRAGMENT EMISSION** - After each test, immediately emit a clean XML file under `reports/_results.xml` with exactly one `` whose `name` begins with the exact test id. No prologue/epilogue or fences. If the test fails, include a `` and still emit. --- @@ -29,10 +47,26 @@ CI provides: --- +## Transcript Minimization Rules +- Do not restate tool JSON; summarize in ≤ 2 short lines. 
+- Never paste full file contents. For matches, include only the matched line and ±1 line. +- Prefer `mcp__unity__find_in_file` for targeting; avoid `mcp__unity__read_resource` unless strictly necessary. If needed, limit to `head_bytes ≤ 256` or `tail_lines ≤ 10`. +- Per‑test `system-out` ≤ 400 chars: brief status only (no SHA). +- Console evidence: fetch the last 10 lines with `include_stacktrace:false` and include ≤ 3 lines in the fragment. +- Avoid quoting multi‑line diffs; reference markers instead. +— Console scans: perform two reads — last 10 `log/info` lines and up to 3 `error` entries (use `include_stacktrace:false`); include ≤ 3 lines total in the fragment; if no errors, state "no errors". + +--- + ## Tool Mapping - **Anchors/regex/structured**: `mcp__unity__script_apply_edits` - Allowed ops: `anchor_insert`, `replace_method`, `insert_method`, `delete_method`, `regex_replace` + - For `anchor_insert`, always set `"position": "before"` or `"after"`. - **Precise ranges / atomic batch**: `mcp__unity__apply_text_edits` (non‑overlapping ranges) +STRICT OP GUARDRAILS +- Do not use `anchor_replace`. Structured edits must be one of: `anchor_insert`, `replace_method`, `insert_method`, `delete_method`, `regex_replace`. +- For multi‑spot textual tweaks in one operation, compute non‑overlapping ranges with `mcp__unity__find_in_file` and use `mcp__unity__apply_text_edits`. + - **Hash-only**: `mcp__unity__get_sha` — returns `{sha256,lengthBytes,lastModifiedUtc}` without file body - **Validation**: `mcp__unity__validate_script(level:"standard")` - **Dynamic targeting**: Use `mcp__unity__find_in_file` to locate current positions of methods/markers @@ -49,7 +83,7 @@ CI provides: 5. **Composability**: Tests demonstrate how operations work together in real workflows **State Tracking:** -- Track file SHA after each test to ensure operations succeeded +- Track file SHA after each test (`mcp__unity__get_sha`) for potential preconditions in later passes. Do not include SHA values in report fragments. - Use content signatures (method names, comment markers) to verify expected state - Validate structural integrity after each major change @@ -85,7 +119,8 @@ CI provides: ### NL-3. End-of-Class Content (Additive State C) **Goal**: Demonstrate end-of-class insertions with smart brace matching **Actions**: -- Use anchor pattern to find the class-ending brace (accounts for previous additions) +- Match the final class-closing brace by scanning from EOF (e.g., last `^\s*}\s*$`) + or compute via `find_in_file` + ranges; insert immediately before it. - Insert three comment lines before final class brace: ``` // Tail test A @@ -97,95 +132,11 @@ CI provides: ### NL-4. Console State Verification (No State Change) **Goal**: Verify Unity console integration without file modification **Actions**: -- Read Unity console messages (INFO level) +- Read last 10 Unity console lines (log/info) +- Perform a targeted scan for errors/exceptions (type: errors), up to 3 entries - Validate no compilation errors from previous operations - **Expected final state**: State C (unchanged) - -### T-A. 
Temporary Helper Lifecycle (Returns to State C) -**Goal**: Test insert → verify → delete cycle for temporary code -**Actions**: -- Find current position of `GetCurrentTarget()` method (may have shifted from NL-2 comment) -- Insert temporary helper: `private int __TempHelper(int a, int b) => a + b;` -- Verify helper method exists and compiles -- Delete helper method via structured delete operation -- **Expected final state**: Return to State C (helper removed, other changes intact) - -### T-B. Method Body Interior Edit (Additive State D) -**Goal**: Edit method interior without affecting structure, on modified file -**Actions**: -- Use `find_in_file` to locate current `HasTarget()` method (modified in NL-1) -- Edit method body interior: change return statement to `return true; /* test modification */` -- Use `validate: "relaxed"` for interior-only edit -- Verify edit succeeded and file remains balanced -- **Expected final state**: State C + modified HasTarget() body - -### T-C. Different Method Interior Edit (Additive State E) -**Goal**: Edit a different method to show operations don't interfere -**Actions**: -- Locate `ApplyBlend()` method using content search -- Edit interior line to add null check: `if (animator == null) return; // safety check` -- Preserve method signature and structure -- **Expected final state**: State D + modified ApplyBlend() method - -### T-D. End-of-Class Helper (Additive State F) -**Goal**: Add permanent helper method at class end -**Actions**: -- Use smart anchor matching to find current class-ending brace (after NL-3 tail comments) -- Insert permanent helper before class brace: `private void TestHelper() { /* placeholder */ }` -- **Expected final state**: State E + TestHelper() method before class end - -### T-E. Method Evolution Lifecycle (Additive State G) -**Goal**: Insert → modify → finalize a method through multiple operations -**Actions**: -- Insert basic method: `private int Counter = 0;` -- Update it: find and replace with `private int Counter = 42; // initialized` -- Add companion method: `private void IncrementCounter() { Counter++; }` -- **Expected final state**: State F + Counter field + IncrementCounter() method - -### T-F. Atomic Multi-Edit (Additive State H) -**Goal**: Multiple coordinated edits in single atomic operation -**Actions**: -- Read current file state to compute precise ranges -- Atomic edit combining: - 1. Add comment in `HasTarget()`: `// validated access` - 2. Add comment in `ApplyBlend()`: `// safe animation` - 3. Add final class comment: `// end of test modifications` -- All edits computed from same file snapshot, applied atomically -- **Expected final state**: State G + three coordinated comments - -### T-G. Path Normalization Test (No State Change) -**Goal**: Verify URI forms work equivalently on modified file -**Actions**: -- Make identical edit using `unity://path/Assets/Scripts/LongUnityScriptClaudeTest.cs` -- Then using `Assets/Scripts/LongUnityScriptClaudeTest.cs` -- Second should return `stale_file`, retry with updated SHA -- Verify both URI forms target same file -- **Expected final state**: State H (no content change, just path testing) - -### T-H. Validation on Modified File (No State Change) -**Goal**: Ensure validation works correctly on heavily modified file -**Actions**: -- Run `validate_script(level:"standard")` on current state -- Verify no structural errors despite extensive modifications -- **Expected final state**: State H (validation only, no edits) - -### T-I. 
Failure Surface Testing (No State Change) -**Goal**: Test error handling on real modified file -**Actions**: -- Attempt overlapping edits (should fail cleanly) -- Attempt edit with stale SHA (should fail cleanly) -- Verify error responses are informative -- **Expected final state**: State H (failed operations don't modify file) - -### T-J. Idempotency on Modified File (Additive State I) -**Goal**: Verify operations behave predictably when repeated -**Actions**: -- Add unique marker comment: `// idempotency test marker` -- Attempt to add same comment again (should detect no-op) -- Remove marker, attempt removal again (should handle gracefully) -- **Expected final state**: State H + verified idempotent behavior - ---- +- **IMMEDIATELY** write clean XML fragment to `reports/NL-4_results.xml` (no extra text). The `` must start with `NL-4`. Include at most 3 lines total across both reads, or simply state "no errors; console OK" (≤ 400 chars). ## Dynamic Targeting Examples @@ -219,7 +170,8 @@ find_in_file(pattern: "public bool HasTarget\\(\\)") 1. Verify expected content exists: `find_in_file` for key markers 2. Check structural integrity: `validate_script(level:"standard")` 3. Update SHA tracking for next test's preconditions -4. Log cumulative changes in test evidence +4. Emit a per‑test fragment to `reports/_results.xml` immediately. If the test failed, still write a single `` with a `` and evidence in `system-out`. +5. Log cumulative changes in test evidence (keep concise per Transcript Minimization Rules; never paste raw tool JSON) **Error Recovery:** - If test fails, log current state but continue (don't restore) @@ -237,4 +189,12 @@ find_in_file(pattern: "public bool HasTarget\\(\\)") 5. **Better Failure Analysis**: Failures don't cascade - each test adapts to current reality 6. **State Evolution Testing**: Validates SDK handles cumulative file modifications correctly -This additive approach produces a more realistic and maintainable test suite that better represents actual SDK usage patterns. \ No newline at end of file +This additive approach produces a more realistic and maintainable test suite that better represents actual SDK usage patterns. + +--- + +BAN ON EXTRA TOOLS AND DIRS +- Do not use any tools outside `AllowedTools`. Do not create directories; assume `reports/` exists. + +--- + diff --git a/.claude/prompts/nl-unity-suite-t.md b/.claude/prompts/nl-unity-suite-t.md new file mode 100644 index 00000000..c7f78031 --- /dev/null +++ b/.claude/prompts/nl-unity-suite-t.md @@ -0,0 +1,305 @@ +# Unity T Editing Suite — Additive Test Design +You are running inside CI for the `unity-mcp` repo. Use only the tools allowed by the workflow. Work autonomously; do not prompt the user. Do NOT spawn subagents. + +**Print this once, verbatim, early in the run:** +AllowedTools: Write,mcp__unity__manage_editor,mcp__unity__list_resources,mcp__unity__read_resource,mcp__unity__apply_text_edits,mcp__unity__script_apply_edits,mcp__unity__validate_script,mcp__unity__find_in_file,mcp__unity__read_console,mcp__unity__get_sha + +--- + +## Mission +1) Pick target file (prefer): + - `unity://path/Assets/Scripts/LongUnityScriptClaudeTest.cs` +2) Execute T tests T-A..T-J in order using minimal, precise edits that build on the NL pass state. +3) Validate each edit with `mcp__unity__validate_script(level:"standard")`. +4) **Report**: write one `` XML fragment per test to `reports/_results.xml`. Do **not** read or edit `$JUNIT_OUT`. 
+
+**CRITICAL XML FORMAT REQUIREMENTS:**
+- Each file must contain EXACTLY one `<testcase>` root element
+- NO prologue, epilogue, code fences, or extra characters
+- NO markdown formatting or explanations outside the XML
+- Use this exact format:
+
+```xml
+<testcase name="TESTID brief description">
+  <system-out><![CDATA[brief evidence]]></system-out>
+</testcase>
+```
+
+- If test fails, include: `<failure message="...">...</failure>`
+- TESTID must be one of: T-A, T-B, T-C, T-D, T-E, T-F, T-G, T-H, T-I, T-J
+5) **NO RESTORATION** - tests build additively on previous state.
+6) **STRICT FRAGMENT EMISSION** - After each test, immediately emit a clean XML file under `reports/<TESTID>_results.xml` with exactly one `<testcase>` whose `name` begins with the exact test id. No prologue/epilogue or fences. If the test fails, include a `<failure>` and still emit.
+
+---
+
+## Environment & Paths (CI)
+- Always pass: `project_root: "TestProjects/UnityMCPTests"` and `ctx: {}` on list/read/edit/validate.
+- **Canonical URIs only**:
+  - Primary: `unity://path/Assets/...` (never embed `project_root` in the URI)
+  - Relative (when supported): `Assets/...`
+
+CI provides:
+- `$JUNIT_OUT=reports/junit-nl-suite.xml` (pre‑created; leave alone)
+- `$MD_OUT=reports/junit-nl-suite.md` (synthesized from JUnit)
+
+---
+
+## Transcript Minimization Rules
+- Do not restate tool JSON; summarize in ≤ 2 short lines.
+- Never paste full file contents. For matches, include only the matched line and ±1 line.
+- Prefer `mcp__unity__find_in_file` for targeting; avoid `mcp__unity__read_resource` unless strictly necessary. If needed, limit to `head_bytes ≤ 256` or `tail_lines ≤ 10`.
+- Per‑test `system-out` ≤ 400 chars: brief status only (no SHA).
+- Console evidence: fetch the last 10 lines with `include_stacktrace:false` and include ≤ 3 lines in the fragment.
+- Avoid quoting multi‑line diffs; reference markers instead.
+— Console scans: perform two reads — last 10 `log/info` lines and up to 3 `error` entries (use `include_stacktrace:false`); include ≤ 3 lines total in the fragment; if no errors, state "no errors".
+— Final check is folded into T‑J: perform an errors‑only scan (with `include_stacktrace:false`) and include a single "no errors" line or up to 3 error lines within the T‑J fragment.
+
+---
+
+## Tool Mapping
+- **Anchors/regex/structured**: `mcp__unity__script_apply_edits`
+  - Allowed ops: `anchor_insert`, `replace_method`, `insert_method`, `delete_method`, `regex_replace`
+  - For `anchor_insert`, always set `"position": "before"` or `"after"`.
+- **Precise ranges / atomic batch**: `mcp__unity__apply_text_edits` (non‑overlapping ranges)
+STRICT OP GUARDRAILS
+- Do not use `anchor_replace`. Structured edits must be one of: `anchor_insert`, `replace_method`, `insert_method`, `delete_method`, `regex_replace`.
+- For multi‑spot textual tweaks in one operation, compute non‑overlapping ranges with `mcp__unity__find_in_file` and use `mcp__unity__apply_text_edits`.
+
+- **Hash-only**: `mcp__unity__get_sha` — returns `{sha256,lengthBytes,lastModifiedUtc}` without file body
+- **Validation**: `mcp__unity__validate_script(level:"standard")`
+- **Dynamic targeting**: Use `mcp__unity__find_in_file` to locate current positions of methods/markers
+
+---
+
+## Additive Test Design Principles
+
+**Key Changes from Reset-Based:**
+1. **Dynamic Targeting**: Use `find_in_file` to locate methods/content, never hardcode line numbers
+2. **State Awareness**: Each test expects the file state left by the previous test
+3. **Content-Based Operations**: Target methods by signature, classes by name, not coordinates
+4.
**Cumulative Validation**: Ensure the file remains structurally sound throughout the sequence +5. **Composability**: Tests demonstrate how operations work together in real workflows + +**State Tracking:** +- Track file SHA after each test (`mcp__unity__get_sha`) and use it as a precondition + for `apply_text_edits` in T‑F/T‑G/T‑I to exercise `stale_file` semantics. Do not include SHA values in report fragments. +- Use content signatures (method names, comment markers) to verify expected state +- Validate structural integrity after each major change + +--- + +### T-A. Temporary Helper Lifecycle (Returns to State C) +**Goal**: Test insert → verify → delete cycle for temporary code +**Actions**: +- Find current position of `GetCurrentTarget()` method (may have shifted from NL-2 comment) +- Insert temporary helper: `private int __TempHelper(int a, int b) => a + b;` +- Verify helper method exists and compiles +- Delete helper method via structured delete operation +- **Expected final state**: Return to State C (helper removed, other changes intact) + +### Late-Test Editing Rule +- When modifying a method body, use `mcp__unity__script_apply_edits`. If the method is expression-bodied (`=>`), convert it to a block or replace the whole method definition. After the edit, run `mcp__unity__validate_script` and rollback on error. Use `//` comments in inserted code. + +### T-B. Method Body Interior Edit (Additive State D) +**Goal**: Edit method interior without affecting structure, on modified file +**Actions**: +- Use `find_in_file` to locate current `HasTarget()` method (modified in NL-1) +- Edit method body interior: change return statement to `return true; /* test modification */` +- Validate with `mcp__unity__validate_script(level:"standard")` for consistency +- Verify edit succeeded and file remains balanced +- **Expected final state**: State C + modified HasTarget() body + +### T-C. Different Method Interior Edit (Additive State E) +**Goal**: Edit a different method to show operations don't interfere +**Actions**: +- Locate `ApplyBlend()` method using content search +- Edit interior line to add null check: `if (animator == null) return; // safety check` +- Preserve method signature and structure +- **Expected final state**: State D + modified ApplyBlend() method + +### T-D. End-of-Class Helper (Additive State F) +**Goal**: Add permanent helper method at class end +**Actions**: +- Use smart anchor matching to find current class-ending brace (after NL-3 tail comments) +- Insert permanent helper before class brace: `private void TestHelper() { /* placeholder */ }` +- Validate with `mcp__unity__validate_script(level:"standard")` +- **IMMEDIATELY** write clean XML fragment to `reports/T-D_results.xml` (no extra text). The `` must start with `T-D`. Include brief evidence in `system-out`. +- **Expected final state**: State E + TestHelper() method before class end + +### T-E. Method Evolution Lifecycle (Additive State G) +**Goal**: Insert → modify → finalize a field + companion method +**Actions**: +- Insert field: `private int Counter = 0;` +- Update it: find and replace with `private int Counter = 42; // initialized` +- Add companion method: `private void IncrementCounter() { Counter++; }` +- **Expected final state**: State F + Counter field + IncrementCounter() method + +### T-F. Atomic Multi-Edit (Additive State H) +**Goal**: Multiple coordinated edits in single atomic operation +**Actions**: +- Read current file state to compute precise ranges +- Atomic edit combining: + 1. 
Add comment in `HasTarget()`: `// validated access` + 2. Add comment in `ApplyBlend()`: `// safe animation` + 3. Add final class comment: `// end of test modifications` +- All edits computed from same file snapshot, applied atomically +- **Expected final state**: State G + three coordinated comments +- After applying the atomic edits, run `validate_script(level:"standard")` and emit a clean fragment to `reports/T-F_results.xml` with a short summary. + +### T-G. Path Normalization Test (No State Change) +**Goal**: Verify URI forms work equivalently on modified file +**Actions**: +- Make identical edit using `unity://path/Assets/Scripts/LongUnityScriptClaudeTest.cs` +- Then using `Assets/Scripts/LongUnityScriptClaudeTest.cs` +- Second should return `stale_file`, retry with updated SHA +- Verify both URI forms target same file +- **Expected final state**: State H (no content change, just path testing) +- Emit `reports/T-G_results.xml` showing evidence of stale SHA handling. + +### T-H. Validation on Modified File (No State Change) +**Goal**: Ensure validation works correctly on heavily modified file +**Actions**: +- Run `validate_script(level:"standard")` on current state +- Verify no structural errors despite extensive modifications +- **Expected final state**: State H (validation only, no edits) +- Emit `reports/T-H_results.xml` confirming validation OK. + +### T-I. Failure Surface Testing (No State Change) +**Goal**: Test error handling on real modified file +**Actions**: +- Attempt overlapping edits (should fail cleanly) +- Attempt edit with stale SHA (should fail cleanly) +- Verify error responses are informative +- **Expected final state**: State H (failed operations don't modify file) +- Emit `reports/T-I_results.xml` capturing error evidence; file must contain one ``. + +### T-J. Idempotency on Modified File (Additive State I) +**Goal**: Verify operations behave predictably when repeated +**Actions**: +- **Insert (structured)**: `mcp__unity__script_apply_edits` with: + `{"op":"anchor_insert","anchor":"// Tail test C","position":"after","text":"\n // idempotency test marker"}` +- **Insert again** (same op) → expect `no_op: true`. +- **Remove (structured)**: `{"op":"regex_replace","pattern":"(?m)^\\s*// idempotency test marker\\r?\\n?","text":""}` +- **Remove again** (same `regex_replace`) → expect `no_op: true`. +- `mcp__unity__validate_script(level:"standard")` +- Perform a final console scan for errors/exceptions (errors only, up to 3); include "no errors" if none +- **IMMEDIATELY** write clean XML fragment to `reports/T-J_results.xml` with evidence of both `no_op: true` outcomes and the console result. The `` must start with `T-J`. +- **Expected final state**: State H + verified idempotent behavior + +--- + +## Dynamic Targeting Examples + +**Instead of hardcoded coordinates:** +```json +{"startLine": 31, "startCol": 26, "endLine": 31, "endCol": 58} +``` + +**Use content-aware targeting:** +```json +# Find current method location +find_in_file(pattern: "public bool HasTarget\\(\\)") +# Then compute edit ranges from found position +``` + +**Method targeting by signature:** +```json +{"op": "replace_method", "className": "LongUnityScriptClaudeTest", "methodName": "HasTarget"} +``` + +**Anchor-based insertions:** +```json +{"op": "anchor_insert", "anchor": "private void Update\\(\\)", "position": "before", "text": "// comment"} +``` + +--- + +## State Verification Patterns + +**After each test:** +1. Verify expected content exists: `find_in_file` for key markers +2. 
Check structural integrity: `validate_script(level:"standard")` +3. Update SHA tracking for next test's preconditions +4. Emit a per‑test fragment to `reports/_results.xml` immediately. If the test failed, still write a single `` with a `` and evidence in `system-out`. +5. Log cumulative changes in test evidence (keep concise per Transcript Minimization Rules; never paste raw tool JSON) + +**Error Recovery:** +- If test fails, log current state but continue (don't restore) +- Next test adapts to actual current state, not expected state +- Demonstrates resilience of operations on varied file conditions + +--- + +## Benefits of Additive Design + +1. **Realistic Workflows**: Tests mirror actual development patterns +2. **Robust Operations**: Proves edits work on evolving files, not just pristine baselines +3. **Composability Validation**: Shows operations coordinate well together +4. **Simplified Infrastructure**: No restore scripts or snapshots needed +5. **Better Failure Analysis**: Failures don't cascade - each test adapts to current reality +6. **State Evolution Testing**: Validates SDK handles cumulative file modifications correctly + +This additive approach produces a more realistic and maintainable test suite that better represents actual SDK usage patterns. + +--- + +BAN ON EXTRA TOOLS AND DIRS +- Do not use any tools outside `AllowedTools`. Do not create directories; assume `reports/` exists. + +--- + +## XML Fragment Templates (T-F .. T-J) + +Use these skeletons verbatim as a starting point. Replace the bracketed placeholders with your evidence. Ensure each file contains exactly one `` element and that the `name` begins with the exact test id. + +```xml + + + +``` + +```xml + + + +``` + +```xml + + + +``` + +```xml + + + +``` + +```xml + + + diff --git a/.claude/settings.json b/.claude/settings.json new file mode 100644 index 00000000..bd3d3363 --- /dev/null +++ b/.claude/settings.json @@ -0,0 +1,18 @@ +{ + "permissions": { + "allow": [ + "mcp__unity", + "Edit(reports/**)", + "MultiEdit(reports/**)" + ], + "deny": [ + "Bash", + "WebFetch", + "WebSearch", + "Task", + "TodoWrite", + "NotebookEdit", + "NotebookRead" + ] + } +} diff --git a/.github/workflows/claude-nl-suite-mini.yml b/.github/workflows/claude-nl-suite-mini.yml deleted file mode 100644 index 272e04d6..00000000 --- a/.github/workflows/claude-nl-suite-mini.yml +++ /dev/null @@ -1,356 +0,0 @@ -name: Claude Mini NL Test Suite (Unity live) - -on: - workflow_dispatch: {} - -permissions: - contents: read - checks: write - -concurrency: - group: ${{ github.workflow }}-${{ github.ref }} - cancel-in-progress: true - -env: - UNITY_VERSION: 2021.3.45f1 - UNITY_IMAGE: unityci/editor:ubuntu-2021.3.45f1-linux-il2cpp-3 - UNITY_CACHE_ROOT: /home/runner/work/_temp/_github_home - -jobs: - nl-suite: - if: github.event_name == 'workflow_dispatch' - runs-on: ubuntu-latest - timeout-minutes: 60 - env: - JUNIT_OUT: reports/junit-nl-suite.xml - MD_OUT: reports/junit-nl-suite.md - - steps: - # ---------- Detect secrets ---------- - - name: Detect secrets (outputs) - id: detect - env: - UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }} - UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} - UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} - UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} - ANTHROPIC_API_KEY: ${{ secrets.ANTHROPIC_API_KEY }} - run: | - set -e - if [ -n "$ANTHROPIC_API_KEY" ]; then echo "anthropic_ok=true" >> "$GITHUB_OUTPUT"; else echo "anthropic_ok=false" >> "$GITHUB_OUTPUT"; fi - if [ -n "$UNITY_LICENSE" ] || { [ -n "$UNITY_EMAIL" ] && [ -n 
"$UNITY_PASSWORD" ]; } || [ -n "$UNITY_SERIAL" ]; then - echo "unity_ok=true" >> "$GITHUB_OUTPUT" - else - echo "unity_ok=false" >> "$GITHUB_OUTPUT" - fi - - - uses: actions/checkout@v4 - with: - fetch-depth: 0 - - # ---------- Python env for MCP server (uv) ---------- - - uses: astral-sh/setup-uv@v4 - with: - python-version: '3.11' - - - name: Install MCP server - run: | - set -eux - uv venv - echo "VIRTUAL_ENV=$GITHUB_WORKSPACE/.venv" >> "$GITHUB_ENV" - echo "$GITHUB_WORKSPACE/.venv/bin" >> "$GITHUB_PATH" - if [ -f UnityMcpBridge/UnityMcpServer~/src/pyproject.toml ]; then - uv pip install -e UnityMcpBridge/UnityMcpServer~/src - elif [ -f UnityMcpBridge/UnityMcpServer~/src/requirements.txt ]; then - uv pip install -r UnityMcpBridge/UnityMcpServer~/src/requirements.txt - elif [ -f UnityMcpBridge/UnityMcpServer~/pyproject.toml ]; then - uv pip install -e UnityMcpBridge/UnityMcpServer~/ - elif [ -f UnityMcpBridge/UnityMcpServer~/requirements.txt ]; then - uv pip install -r UnityMcpBridge/UnityMcpServer~/requirements.txt - else - echo "No MCP Python deps found (skipping)" - fi - - # ---------- License prime on host (handles ULF or EBL) ---------- - - name: Prime Unity license on host (GameCI) - if: steps.detect.outputs.unity_ok == 'true' - uses: game-ci/unity-test-runner@v4 - env: - UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }} - UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} - UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} - UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} - with: - projectPath: TestProjects/UnityMCPTests - testMode: EditMode - customParameters: -runTests -testFilter __NoSuchTest__ -batchmode -nographics - unityVersion: ${{ env.UNITY_VERSION }} - - # (Optional) Show where the license actually got written - - name: Inspect GameCI license caches (host) - if: steps.detect.outputs.unity_ok == 'true' - run: | - set -eux - find "${{ env.UNITY_CACHE_ROOT }}" -maxdepth 4 \( -path "*/.cache" -prune -o -type f \( -name '*.ulf' -o -name 'user.json' \) -print \) 2>/dev/null || true - - # ---------- Clean any stale MCP status from previous runs ---------- - - name: Clean old MCP status - run: | - set -eux - mkdir -p "$HOME/.unity-mcp" - rm -f "$HOME/.unity-mcp"/unity-mcp-status-*.json || true - - # ---------- Start headless Unity that stays up (bridge enabled) ---------- - - name: Start Unity (persistent bridge) - if: steps.detect.outputs.unity_ok == 'true' - env: - UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} - UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} - UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} - run: | - set -eu - if [ ! -d "${{ github.workspace }}/TestProjects/UnityMCPTests/ProjectSettings" ]; then - echo "Unity project not found; failing fast." 
- exit 1 - fi - mkdir -p "$HOME/.unity-mcp" - MANUAL_ARG=() - if [ -f "${UNITY_CACHE_ROOT}/.local/share/unity3d/Unity_lic.ulf" ]; then - MANUAL_ARG=(-manualLicenseFile /root/.local/share/unity3d/Unity_lic.ulf) - fi - EBL_ARGS=() - [ -n "${UNITY_SERIAL:-}" ] && EBL_ARGS+=(-serial "$UNITY_SERIAL") - [ -n "${UNITY_EMAIL:-}" ] && EBL_ARGS+=(-username "$UNITY_EMAIL") - [ -n "${UNITY_PASSWORD:-}" ] && EBL_ARGS+=(-password "$UNITY_PASSWORD") - docker rm -f unity-mcp >/dev/null 2>&1 || true - docker run -d --name unity-mcp --network host \ - -e HOME=/root \ - -e UNITY_MCP_ALLOW_BATCH=1 -e UNITY_MCP_STATUS_DIR=/root/.unity-mcp \ - -e UNITY_MCP_BIND_HOST=127.0.0.1 \ - -v "${{ github.workspace }}:/workspace" -w /workspace \ - -v "${{ env.UNITY_CACHE_ROOT }}:/root" \ - -v "$HOME/.unity-mcp:/root/.unity-mcp" \ - ${{ env.UNITY_IMAGE }} /opt/unity/Editor/Unity -batchmode -nographics -logFile - \ - -stackTraceLogType Full \ - -projectPath /workspace/TestProjects/UnityMCPTests \ - "${MANUAL_ARG[@]}" \ - "${EBL_ARGS[@]}" \ - -executeMethod MCPForUnity.Editor.MCPForUnityBridge.StartAutoConnect - - # ---------- Wait for Unity bridge (fail fast if not running/ready) ---------- - - name: Wait for Unity bridge (robust) - if: steps.detect.outputs.unity_ok == 'true' - run: | - set -euo pipefail - if ! docker ps --format '{{.Names}}' | grep -qx 'unity-mcp'; then - echo "Unity container failed to start"; docker ps -a || true; exit 1 - fi - docker logs -f unity-mcp 2>&1 | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' & LOGPID=$! - deadline=$((SECONDS+420)); READY=0 - try_connect_host() { - P="$1" - timeout 1 bash -lc "exec 3<>/dev/tcp/127.0.0.1/$P; head -c 8 <&3 >/dev/null" && return 0 || true - if command -v nc >/dev/null 2>&1; then nc -6 -z ::1 "$P" && return 0 || true; fi - return 1 - } - - # in-container probe will try IPv4 then IPv6 via nc or /dev/tcp - - while [ $SECONDS -lt $deadline ]; do - if docker logs unity-mcp 2>&1 | grep -qE "MCP Bridge listening|Bridge ready|Server started"; then - READY=1; echo "Bridge ready (log markers)"; break - fi - PORT=$(python -c "import os,glob,json,sys,time; b=os.path.expanduser('~/.unity-mcp'); fs=sorted(glob.glob(os.path.join(b,'unity-mcp-status-*.json')), key=os.path.getmtime, reverse=True); print(next((json.load(open(f,'r',encoding='utf-8')).get('unity_port') for f in fs if time.time()-os.path.getmtime(f)<=300 and json.load(open(f,'r',encoding='utf-8')).get('unity_port')), '' ))" 2>/dev/null || true) - if [ -n "${PORT:-}" ] && { try_connect_host "$PORT" || docker exec unity-mcp bash -lc "timeout 1 bash -lc 'exec 3<>/dev/tcp/127.0.0.1/$PORT' || (command -v nc >/dev/null 2>&1 && nc -6 -z ::1 $PORT)"; }; then - READY=1; echo "Bridge ready on port $PORT"; break - fi - if docker logs unity-mcp 2>&1 | grep -qE "No valid Unity Editor license|Token not found in cache|com\.unity\.editor\.headless"; then - echo "Licensing error detected"; break - fi - sleep 2 - done - - kill $LOGPID || true - - if [ "$READY" != "1" ]; then - echo "Bridge not ready; diagnostics:" - echo "== status files =="; ls -la "$HOME/.unity-mcp" || true - echo "== status contents =="; for f in "$HOME"/.unity-mcp/unity-mcp-status-*.json; do [ -f "$f" ] && { echo "--- $f"; sed -n '1,120p' "$f"; }; done - echo "== sockets (inside container) =="; docker exec unity-mcp bash -lc 'ss -lntp || netstat -tulpen || true' - echo "== tail of Unity log ==" - docker logs --tail 200 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true - exit 1 - fi - - # 
---------- Make MCP config available to the action ---------- - - name: Write MCP config (.claude/mcp.json) - run: | - set -eux - mkdir -p .claude - cat > .claude/mcp.json < str: - return tag.rsplit('}', 1)[-1] if '}' in tag else tag - - src = Path(os.environ.get('JUNIT_OUT', 'reports/junit-nl-suite.xml')) - out = Path('reports/junit-for-actions.xml') - out.parent.mkdir(parents=True, exist_ok=True) - - if not src.exists(): - # Try to use any existing XML as a source (e.g., claude-nl-tests.xml) - candidates = sorted(Path('reports').glob('*.xml')) - if candidates: - src = candidates[0] - else: - print("WARN: no XML source found for normalization") - - if src.exists(): - try: - root = ET.parse(src).getroot() - rtag = localname(root.tag) - if rtag == 'testsuites' and len(root) == 1 and localname(root[0].tag) == 'testsuite': - ET.ElementTree(root[0]).write(out, encoding='utf-8', xml_declaration=True) - else: - out.write_bytes(src.read_bytes()) - except Exception as e: - print("Normalization error:", e) - out.write_bytes(src.read_bytes()) - - # Always create a second copy with a junit-* name so wildcard patterns match too - if out.exists(): - Path('reports/junit-nl-suite-copy.xml').write_bytes(out.read_bytes()) - PY - - - name: "Debug: list report files" - if: always() - shell: bash - run: | - set -eux - ls -la reports || true - shopt -s nullglob - for f in reports/*.xml; do - echo "===== $f =====" - head -n 40 "$f" || true - done - - - # sanitize only the markdown (does not touch JUnit xml) - - name: Sanitize markdown (all shards) - if: always() - run: | - set -eu - python - <<'PY' - from pathlib import Path - rp=Path('reports') - rp.mkdir(parents=True, exist_ok=True) - for p in rp.glob('*.md'): - b=p.read_bytes().replace(b'\x00', b'') - s=b.decode('utf-8','replace').replace('\r\n','\n') - p.write_text(s, encoding='utf-8', newline='\n') - PY - - - name: NL/T details → Job Summary - if: always() - run: | - echo "## Unity NL/T Editing Suite — Full Coverage" >> $GITHUB_STEP_SUMMARY - python - <<'PY' >> $GITHUB_STEP_SUMMARY - from pathlib import Path - p = Path('reports/junit-nl-suite.md') if Path('reports/junit-nl-suite.md').exists() else Path('reports/claude-nl-tests.md') - if p.exists(): - text = p.read_bytes().decode('utf-8', 'replace') - MAX = 65000 - print(text[:MAX]) - if len(text) > MAX: - print("\n\n_…truncated in summary; full report is in artifacts._") - else: - print("_No markdown report found._") - PY - - - name: Fallback JUnit if missing - if: always() - run: | - set -eu - mkdir -p reports - if [ ! 
-f reports/junit-for-actions.xml ]; then - printf '%s\n' \ - '' \ - '' \ - ' ' \ - ' ' \ - ' ' \ - '' \ - > reports/junit-for-actions.xml - fi - - - - name: Publish JUnit reports - if: always() - uses: mikepenz/action-junit-report@v5 - with: - report_paths: 'reports/junit-for-actions.xml' - include_passed: true - detailed_summary: true - annotate_notice: true - require_tests: false - fail_on_parse_error: true - - - name: Upload artifacts - if: always() - uses: actions/upload-artifact@v4 - with: - name: claude-nl-suite-artifacts - path: reports/** - - # ---------- Always stop Unity ---------- - - name: Stop Unity - if: always() - run: | - docker logs --tail 400 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true - docker rm -f unity-mcp || true diff --git a/.github/workflows/claude-nl-suite.yml b/.github/workflows/claude-nl-suite.yml index 5bdc573b..539263d6 100644 --- a/.github/workflows/claude-nl-suite.yml +++ b/.github/workflows/claude-nl-suite.yml @@ -1,7 +1,6 @@ name: Claude NL/T Full Suite (Unity live) -on: - workflow_dispatch: {} +on: [workflow_dispatch] permissions: contents: read @@ -12,13 +11,10 @@ concurrency: cancel-in-progress: true env: - UNITY_VERSION: 2021.3.45f1 UNITY_IMAGE: unityci/editor:ubuntu-2021.3.45f1-linux-il2cpp-3 - UNITY_CACHE_ROOT: /home/runner/work/_temp/_github_home jobs: nl-suite: - if: github.event_name == 'workflow_dispatch' runs-on: ubuntu-latest timeout-minutes: 60 env: @@ -38,7 +34,7 @@ jobs: run: | set -e if [ -n "$ANTHROPIC_API_KEY" ]; then echo "anthropic_ok=true" >> "$GITHUB_OUTPUT"; else echo "anthropic_ok=false" >> "$GITHUB_OUTPUT"; fi - if [ -n "$UNITY_LICENSE" ] || { [ -n "$UNITY_EMAIL" ] && [ -n "$UNITY_PASSWORD" ]; } || [ -n "$UNITY_SERIAL" ]; then + if [ -n "$UNITY_LICENSE" ] || { [ -n "$UNITY_EMAIL" ] && [ -n "$UNITY_PASSWORD" ]; }; then echo "unity_ok=true" >> "$GITHUB_OUTPUT" else echo "unity_ok=false" >> "$GITHUB_OUTPUT" @@ -70,28 +66,120 @@ jobs: else echo "No MCP Python deps found (skipping)" fi - - # ---------- License prime on host (GameCI) ---------- - - name: Prime Unity license on host (GameCI) - if: steps.detect.outputs.unity_ok == 'true' - uses: game-ci/unity-test-runner@v4 + + # --- Licensing: allow both ULF and EBL when available --- + - name: Decide license sources + id: lic + shell: bash env: UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }} UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} - with: - projectPath: TestProjects/UnityMCPTests - testMode: EditMode - customParameters: -runTests -testFilter __NoSuchTest__ -batchmode -nographics - unityVersion: ${{ env.UNITY_VERSION }} - - # (Optional) Inspect license caches - - name: Inspect GameCI license caches (host) - if: steps.detect.outputs.unity_ok == 'true' run: | - set -eux - find "${{ env.UNITY_CACHE_ROOT }}" -maxdepth 4 \( -path "*/.cache" -prune -o -type f \( -name '*.ulf' -o -name 'user.json' \) -print \) 2>/dev/null || true + set -eu + use_ulf=false; use_ebl=false + [[ -n "${UNITY_LICENSE:-}" ]] && use_ulf=true + [[ -n "${UNITY_EMAIL:-}" && -n "${UNITY_PASSWORD:-}" ]] && use_ebl=true + echo "use_ulf=$use_ulf" >> "$GITHUB_OUTPUT" + echo "use_ebl=$use_ebl" >> "$GITHUB_OUTPUT" + echo "has_serial=$([[ -n "${UNITY_SERIAL:-}" ]] && echo true || echo false)" >> "$GITHUB_OUTPUT" + + - name: Stage Unity .ulf license (from secret) + if: steps.lic.outputs.use_ulf == 'true' + id: ulf + env: + UNITY_LICENSE: ${{ secrets.UNITY_LICENSE }} + shell: bash + run: | + set -eu + 
mkdir -p "$RUNNER_TEMP/unity-license-ulf" "$RUNNER_TEMP/unity-local/Unity" + f="$RUNNER_TEMP/unity-license-ulf/Unity_lic.ulf" + if printf "%s" "$UNITY_LICENSE" | base64 -d - >/dev/null 2>&1; then + printf "%s" "$UNITY_LICENSE" | base64 -d - > "$f" + else + printf "%s" "$UNITY_LICENSE" > "$f" + fi + chmod 600 "$f" || true + # If someone pasted an entitlement XML into UNITY_LICENSE by mistake, re-home it: + if head -c 100 "$f" | grep -qi '<\?xml'; then + mkdir -p "$RUNNER_TEMP/unity-config/Unity/licenses" + mv "$f" "$RUNNER_TEMP/unity-config/Unity/licenses/UnityEntitlementLicense.xml" + echo "ok=false" >> "$GITHUB_OUTPUT" + elif grep -qi '' "$f"; then + # provide it in the standard local-share path too + cp -f "$f" "$RUNNER_TEMP/unity-local/Unity/Unity_lic.ulf" + echo "ok=true" >> "$GITHUB_OUTPUT" + else + echo "ok=false" >> "$GITHUB_OUTPUT" + fi + + # --- Activate via EBL inside the same Unity image (writes host-side entitlement) --- + - name: Activate Unity (EBL via container - host-mount) + if: steps.lic.outputs.use_ebl == 'true' + shell: bash + env: + UNITY_IMAGE: ${{ env.UNITY_IMAGE }} + UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} + UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} + UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} + run: | + set -euxo pipefail + # host dirs to receive the full Unity config and local-share + mkdir -p "$RUNNER_TEMP/unity-config" "$RUNNER_TEMP/unity-local" + + # Try Pro first if serial is present, otherwise named-user EBL. + docker run --rm --network host \ + -e HOME=/root \ + -e UNITY_EMAIL -e UNITY_PASSWORD -e UNITY_SERIAL \ + -v "$RUNNER_TEMP/unity-config:/root/.config/unity3d" \ + -v "$RUNNER_TEMP/unity-local:/root/.local/share/unity3d" \ + "$UNITY_IMAGE" bash -lc ' + set -euxo pipefail + if [[ -n "${UNITY_SERIAL:-}" ]]; then + /opt/unity/Editor/Unity -batchmode -nographics -logFile - \ + -username "$UNITY_EMAIL" -password "$UNITY_PASSWORD" -serial "$UNITY_SERIAL" -quit || true + else + /opt/unity/Editor/Unity -batchmode -nographics -logFile - \ + -username "$UNITY_EMAIL" -password "$UNITY_PASSWORD" -quit || true + fi + ls -la /root/.config/unity3d/Unity/licenses || true + ' + + # Verify entitlement written to host mount; allow ULF-only runs to proceed + if ! find "$RUNNER_TEMP/unity-config" -type f -iname "*.xml" | grep -q .; then + if [[ "${{ steps.ulf.outputs.ok }}" == "true" ]]; then + echo "EBL entitlement not found; proceeding with ULF-only (ok=true)." + else + echo "No entitlement produced and no valid ULF; cannot continue." 
>&2 + exit 1 + fi + fi + + # EBL entitlement is already written directly to $RUNNER_TEMP/unity-config by the activation step + + # ---------- Warm up project (import Library once) ---------- + - name: Warm up project (import Library once) + if: steps.lic.outputs.use_ulf == 'true' || steps.lic.outputs.use_ebl == 'true' + shell: bash + env: + UNITY_IMAGE: ${{ env.UNITY_IMAGE }} + ULF_OK: ${{ steps.ulf.outputs.ok }} + run: | + set -euxo pipefail + manual_args=() + if [[ "${ULF_OK:-false}" == "true" ]]; then + manual_args=(-manualLicenseFile "/root/.local/share/unity3d/Unity/Unity_lic.ulf") + fi + docker run --rm --network host \ + -e HOME=/root \ + -v "${{ github.workspace }}:/workspace" -w /workspace \ + -v "$RUNNER_TEMP/unity-config:/root/.config/unity3d" \ + -v "$RUNNER_TEMP/unity-local:/root/.local/share/unity3d" \ + "$UNITY_IMAGE" /opt/unity/Editor/Unity -batchmode -nographics -logFile - \ + -projectPath /workspace/TestProjects/UnityMCPTests \ + "${manual_args[@]}" \ + -quit # ---------- Clean old MCP status ---------- - name: Clean old MCP status @@ -102,80 +190,90 @@ jobs: # ---------- Start headless Unity (persistent bridge) ---------- - name: Start Unity (persistent bridge) - if: steps.detect.outputs.unity_ok == 'true' + if: steps.lic.outputs.use_ulf == 'true' || steps.lic.outputs.use_ebl == 'true' + shell: bash env: - UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} - UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} - UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} + UNITY_IMAGE: ${{ env.UNITY_IMAGE }} + ULF_OK: ${{ steps.ulf.outputs.ok }} run: | - set -eu - if [ ! -d "${{ github.workspace }}/TestProjects/UnityMCPTests/ProjectSettings" ]; then - echo "Unity project not found; failing fast." - exit 1 + set -euxo pipefail + manual_args=() + if [[ "${ULF_OK:-false}" == "true" ]]; then + manual_args=(-manualLicenseFile "/root/.local/share/unity3d/Unity/Unity_lic.ulf") fi - mkdir -p "$HOME/.unity-mcp" - MANUAL_ARG=() - if [ -f "${UNITY_CACHE_ROOT}/.local/share/unity3d/Unity_lic.ulf" ]; then - MANUAL_ARG=(-manualLicenseFile /root/.local/share/unity3d/Unity_lic.ulf) - fi - EBL_ARGS=() - [ -n "${UNITY_SERIAL:-}" ] && EBL_ARGS+=(-serial "$UNITY_SERIAL") - [ -n "${UNITY_EMAIL:-}" ] && EBL_ARGS+=(-username "$UNITY_EMAIL") - [ -n "${UNITY_PASSWORD:-}" ] && EBL_ARGS+=(-password "$UNITY_PASSWORD") + + mkdir -p "$RUNNER_TEMP/unity-status" docker rm -f unity-mcp >/dev/null 2>&1 || true docker run -d --name unity-mcp --network host \ -e HOME=/root \ - -e UNITY_MCP_ALLOW_BATCH=1 -e UNITY_MCP_STATUS_DIR=/root/.unity-mcp \ + -e UNITY_MCP_ALLOW_BATCH=1 \ + -e UNITY_MCP_STATUS_DIR=/root/.unity-mcp \ -e UNITY_MCP_BIND_HOST=127.0.0.1 \ -v "${{ github.workspace }}:/workspace" -w /workspace \ - -v "${{ env.UNITY_CACHE_ROOT }}:/root" \ - -v "$HOME/.unity-mcp:/root/.unity-mcp" \ - ${{ env.UNITY_IMAGE }} /opt/unity/Editor/Unity -batchmode -nographics -logFile - \ + -v "$RUNNER_TEMP/unity-status:/root/.unity-mcp" \ + -v "$RUNNER_TEMP/unity-config:/root/.config/unity3d:ro" \ + -v "$RUNNER_TEMP/unity-local:/root/.local/share/unity3d:ro" \ + "$UNITY_IMAGE" /opt/unity/Editor/Unity -batchmode -nographics -logFile - \ -stackTraceLogType Full \ -projectPath /workspace/TestProjects/UnityMCPTests \ - "${MANUAL_ARG[@]}" \ - "${EBL_ARGS[@]}" \ + "${manual_args[@]}" \ -executeMethod MCPForUnity.Editor.MCPForUnityBridge.StartAutoConnect # ---------- Wait for Unity bridge ---------- - name: Wait for Unity bridge (robust) - if: steps.detect.outputs.unity_ok == 'true' + shell: bash run: | set -euo pipefail - if ! 
docker ps --format '{{.Names}}' | grep -qx 'unity-mcp'; then - echo "Unity container failed to start"; docker ps -a || true; exit 1 - fi - docker logs -f unity-mcp 2>&1 | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' & LOGPID=$! - deadline=$((SECONDS+420)); READY=0 - try_connect_host() { - P="$1" - timeout 1 bash -lc "exec 3<>/dev/tcp/127.0.0.1/$P; head -c 8 <&3 >/dev/null" && return 0 || true - if command -v nc >/dev/null 2>&1; then nc -6 -z ::1 "$P" && return 0 || true; fi - return 1 - } + deadline=$((SECONDS+900)) # 15 min max + fatal_after=$((SECONDS+120)) # give licensing 2 min to settle + + # Fail fast only if container actually died + st="$(docker inspect -f '{{.State.Status}} {{.State.ExitCode}}' unity-mcp 2>/dev/null || true)" + case "$st" in exited*|dead*) docker logs unity-mcp --tail 200 | sed -E 's/((email|serial|license|password|token)[^[:space:]]*)/[REDACTED]/Ig'; exit 1;; esac + + # Patterns + ok_pat='(Bridge|MCP(For)?Unity|AutoConnect).*(listening|ready|started|port|bound)' + # Only truly fatal signals; allow transient "Licensing::..." chatter + license_fatal='No valid Unity|License is not active|cannot load ULF|Signature element not found|Token not found|0 entitlement|Entitlement.*(failed|denied)|License (activation|return|renewal).*(failed|expired|denied)' + while [ $SECONDS -lt $deadline ]; do - if docker logs unity-mcp 2>&1 | grep -qE "MCP Bridge listening|Bridge ready|Server started"; then - READY=1; echo "Bridge ready (log markers)"; break + logs="$(docker logs unity-mcp 2>&1 || true)" + + # 1) Primary: status JSON exposes TCP port + port="$(jq -r '.unity_port // empty' "$RUNNER_TEMP"/unity-status/unity-mcp-status-*.json 2>/dev/null | head -n1 || true)" + if [[ -n "${port:-}" ]] && timeout 1 bash -lc "exec 3<>/dev/tcp/127.0.0.1/$port"; then + echo "Bridge ready on port $port" + exit 0 fi - PORT=$(python3 -c "import os,glob,json,sys,time; b=os.path.expanduser('~/.unity-mcp'); fs=sorted(glob.glob(os.path.join(b,'unity-mcp-status-*.json')), key=os.path.getmtime, reverse=True); print(next((json.load(open(f,'r',encoding='utf-8')).get('unity_port') for f in fs if time.time()-os.path.getmtime(f)<=300 and json.load(open(f,'r',encoding='utf-8')).get('unity_port')), '' ))" 2>/dev/null || true) - if [ -n "${PORT:-}" ] && { try_connect_host "$PORT" || docker exec unity-mcp bash -lc "timeout 1 bash -lc 'exec 3<>/dev/tcp/127.0.0.1/$PORT' || (command -v nc >/dev/null 2>&1 && nc -6 -z ::1 $PORT)"; }; then - READY=1; echo "Bridge ready on port $PORT"; break + + # 2) Secondary: log markers + if echo "$logs" | grep -qiE "$ok_pat"; then + echo "Bridge ready (log markers)" + exit 0 + fi + + # Only treat license failures as fatal *after* warm-up + if [ $SECONDS -ge $fatal_after ] && echo "$logs" | grep -qiE "$license_fatal"; then + echo "::error::Fatal licensing signal detected after warm-up" + echo "$logs" | tail -n 200 | sed -E 's/((email|serial|license|password|token)[^[:space:]]*)/[REDACTED]/Ig' + exit 1 fi - if docker logs unity-mcp 2>&1 | grep -qE "No valid Unity Editor license|Token not found in cache|com\.unity\.editor\.headless"; then - echo "Licensing error detected"; break + + # If the container dies mid-wait, bail + st="$(docker inspect -f '{{.State.Status}}' unity-mcp 2>/dev/null || true)" + if [[ "$st" != "running" ]]; then + echo "::error::Unity container exited during wait"; docker logs unity-mcp --tail 200 | sed -E 's/((email|serial|license|password|token)[^[:space:]]*)/[REDACTED]/Ig' + exit 1 fi + sleep 2 done - kill $LOGPID || true - if [ 
"$READY" != "1" ]; then - echo "Bridge not ready; diagnostics:" - echo "== status files =="; ls -la "$HOME/.unity-mcp" || true - echo "== status contents =="; for f in "$HOME"/.unity-mcp/unity-mcp-status-*.json; do [ -f "$f" ] && { echo "--- $f"; sed -n '1,120p' "$f"; }; done - echo "== sockets (inside container) =="; docker exec unity-mcp bash -lc 'ss -lntp || netstat -tulpen || true' - echo "== tail of Unity log ==" - docker logs --tail 200 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true - exit 1 - fi + + echo "::error::Bridge not ready before deadline" + docker logs unity-mcp --tail 200 | sed -E 's/((email|serial|license|password|token)[^[:space:]]*)/[REDACTED]/Ig' + exit 1 + + # (moved) — return license after Unity is stopped # ---------- MCP client config ---------- - name: Write MCP config (.claude/mcp.json) @@ -192,19 +290,46 @@ jobs: "env": { "PYTHONUNBUFFERED": "1", "MCP_LOG_LEVEL": "debug", - "UNITY_PROJECT_ROOT": "$GITHUB_WORKSPACE/TestProjects/UnityMCPTests" + "UNITY_PROJECT_ROOT": "$GITHUB_WORKSPACE/TestProjects/UnityMCPTests", + "UNITY_MCP_STATUS_DIR": "$RUNNER_TEMP/unity-status", + "UNITY_MCP_HOST": "127.0.0.1" } } } } JSON + + - name: Pin Claude tool permissions (.claude/settings.json) + run: | + set -eux + mkdir -p .claude + cat > .claude/settings.json <<'JSON' + { + "permissions": { + "allow": [ + "mcp__unity", + "Edit(reports/**)" + ], + "deny": [ + "Bash", + "MultiEdit", + "WebFetch", + "WebSearch", + "Task", + "TodoWrite", + "NotebookEdit", + "NotebookRead" + ] + } + } + JSON # ---------- Reports & helper ---------- - name: Prepare reports and dirs run: | set -eux rm -f reports/*.xml reports/*.md || true - mkdir -p reports reports/_snapshots scripts + mkdir -p reports reports/_snapshots reports/_staging - name: Create report skeletons run: | @@ -218,80 +343,300 @@ jobs: XML printf '# Unity NL/T Editing Suite Test Results\n\n' > "$MD_OUT" - - - name: Write safe revert helper (scripts/nlt-revert.sh) - shell: bash + + - name: Verify Unity bridge status/port run: | - set -eux - cat > scripts/nlt-revert.sh <<'BASH' - #!/usr/bin/env bash - set -euo pipefail - sub="${1:-}"; target_rel="${2:-}"; snap="${3:-}" - WS="${GITHUB_WORKSPACE:-$PWD}" - ROOT="$WS/TestProjects/UnityMCPTests" - t_abs="$(realpath -m "$WS/$target_rel")" - s_abs="$(realpath -m "$WS/$snap")" - if [[ "$t_abs" != "$ROOT/Assets/"* ]]; then - echo "refuse: target outside allowed scope: $t_abs" >&2; exit 2 + set -euxo pipefail + ls -la "$RUNNER_TEMP/unity-status" || true + jq -r . "$RUNNER_TEMP"/unity-status/unity-mcp-status-*.json | sed -n '1,80p' || true + + shopt -s nullglob + status_files=("$RUNNER_TEMP"/unity-status/unity-mcp-status-*.json) + if ((${#status_files[@]})); then + port="$(grep -hEo '"unity_port"[[:space:]]*:[[:space:]]*[0-9]+' "${status_files[@]}" \ + | sed -E 's/.*: *([0-9]+).*/\1/' | head -n1 || true)" + else + port="" fi - mkdir -p "$(dirname "$s_abs")" - case "$sub" in - snapshot) - cp -f "$t_abs" "$s_abs" - sha=$(sha256sum "$s_abs" | awk '{print $1}') - echo "snapshot_sha=$sha" - ;; - restore) - if [[ ! 
-f "$s_abs" ]]; then echo "snapshot missing: $s_abs" >&2; exit 3; fi - cp -f "$s_abs" "$t_abs" - touch "$t_abs" - sha=$(sha256sum "$t_abs" | awk '{print $1}') - echo "restored_sha=$sha" - ;; - *) - echo "usage: $0 snapshot|restore " >&2; exit 1 - ;; - esac - BASH - chmod +x scripts/nlt-revert.sh - - # ---------- Snapshot baseline (pre-agent) ---------- - - name: Snapshot baseline (pre-agent) - if: steps.detect.outputs.anthropic_ok == 'true' && steps.detect.outputs.unity_ok == 'true' - shell: bash - run: | - set -euo pipefail - TARGET="TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs" - SNAP="reports/_snapshots/LongUnityScriptClaudeTest.cs.baseline" - scripts/nlt-revert.sh snapshot "$TARGET" "$SNAP" + + echo "unity_port=$port" + if [[ -n "$port" ]]; then + timeout 1 bash -lc "exec 3<>/dev/tcp/127.0.0.1/$port" && echo "TCP OK" + fi + + # (removed) Revert helper and baseline snapshot are no longer used - # ---------- Run suite ---------- - - name: Run Claude NL suite (single pass) + # ---------- Run suite in two passes ---------- + - name: Run Claude NL pass uses: anthropics/claude-code-base-action@beta if: steps.detect.outputs.anthropic_ok == 'true' && steps.detect.outputs.unity_ok == 'true' continue-on-error: true with: use_node_cache: false - prompt_file: .claude/prompts/nl-unity-suite-full-additive.md + prompt_file: .claude/prompts/nl-unity-suite-nl.md mcp_config: .claude/mcp.json - allowed_tools: >- - Write, - Bash(scripts/nlt-revert.sh:*), - mcp__unity__manage_editor, - mcp__unity__list_resources, - mcp__unity__read_resource, - mcp__unity__apply_text_edits, - mcp__unity__script_apply_edits, - mcp__unity__validate_script, - mcp__unity__find_in_file, - mcp__unity__read_console, - mcp__unity__get_sha - disallowed_tools: TodoWrite,Task - model: claude-3-7-sonnet-latest + settings: .claude/settings.json + allowed_tools: "mcp__unity,Edit(reports/**),MultiEdit(reports/**)" + disallowed_tools: "Bash,WebFetch,WebSearch,Task,TodoWrite,NotebookEdit,NotebookRead" + model: claude-3-7-sonnet-20250219 + append_system_prompt: | + You are running the NL pass only. + - Emit exactly NL-0, NL-1, NL-2, NL-3, NL-4. + - Write each to reports/${ID}_results.xml. + - Prefer a single MultiEdit(reports/**) batch. Do not emit any T-* tests. + - Stop after NL-4_results.xml is written. timeout_minutes: "30" anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }} - + + + - name: Run Claude T pass A-J + uses: anthropics/claude-code-base-action@beta + if: steps.detect.outputs.anthropic_ok == 'true' && steps.detect.outputs.unity_ok == 'true' + continue-on-error: true + with: + use_node_cache: false + prompt_file: .claude/prompts/nl-unity-suite-t.md + mcp_config: .claude/mcp.json + settings: .claude/settings.json + allowed_tools: "mcp__unity,Edit(reports/**),MultiEdit(reports/**)" + disallowed_tools: "Bash,WebFetch,WebSearch,Task,TodoWrite,NotebookEdit,NotebookRead" + model: claude-3-5-haiku-20241022 + append_system_prompt: | + You are running the T pass (A–J) only. + Output requirements: + - Emit exactly 10 test fragments: T-A, T-B, T-C, T-D, T-E, T-F, T-G, T-H, T-I, T-J. + - Write each fragment to reports/${ID}_results.xml (e.g., T-A_results.xml). + - Prefer a single MultiEdit(reports/**) call that writes all ten files in one batch. + - If MultiEdit is not used, emit individual writes for any missing IDs until all ten exist. + - Do not emit any NL-* fragments. + Stop condition: + - After T-J_results.xml is written, stop. 
+ timeout_minutes: "30" + anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }} + + # (moved) Assert T coverage after staged fragments are promoted + + - name: Check T coverage incomplete (pre-retry) + id: t_cov + if: always() + shell: bash + run: | + set -euo pipefail + missing=() + for id in T-A T-B T-C T-D T-E T-F T-G T-H T-I T-J; do + if [[ ! -s "reports/${id}_results.xml" && ! -s "reports/_staging/${id}_results.xml" ]]; then + missing+=("$id") + fi + done + echo "missing=${#missing[@]}" >> "$GITHUB_OUTPUT" + if (( ${#missing[@]} )); then + echo "list=${missing[*]}" >> "$GITHUB_OUTPUT" + fi + + - name: Retry T pass (Sonnet) if incomplete + if: steps.t_cov.outputs.missing != '0' + uses: anthropics/claude-code-base-action@beta + with: + use_node_cache: false + prompt_file: .claude/prompts/nl-unity-suite-t.md + mcp_config: .claude/mcp.json + settings: .claude/settings.json + allowed_tools: "mcp__unity,Edit(reports/**),MultiEdit(reports/**)" + disallowed_tools: "Bash,MultiEdit(/!(reports/**)),WebFetch,WebSearch,Task,TodoWrite,NotebookEdit,NotebookRead" + model: claude-3-7-sonnet-20250219 + fallback_model: claude-3-5-haiku-20241022 + append_system_prompt: | + You are running the T pass only. + Output requirements: + - Emit exactly 10 test fragments: T-A, T-B, T-C, T-D, T-E, T-F, T-G, T-H, T-I, T-J. + - Write each fragment to reports/${ID}_results.xml (e.g., T-A_results.xml). + - Prefer a single MultiEdit(reports/**) call that writes all ten files in one batch. + - If MultiEdit is not used, emit individual writes for any missing IDs until all ten exist. + - Do not emit any NL-* fragments. + Stop condition: + - After T-J_results.xml is written, stop. + timeout_minutes: "30" + anthropic_api_key: ${{ secrets.ANTHROPIC_API_KEY }} + + - name: Re-assert T coverage (post-retry) + if: always() + shell: bash + run: | + set -euo pipefail + missing=() + for id in T-A T-B T-C T-D T-E T-F T-G T-H T-I T-J; do + [[ -s "reports/${id}_results.xml" ]] || missing+=("$id") + done + if (( ${#missing[@]} )); then + echo "::error::Still missing T fragments: ${missing[*]}" + exit 1 + fi + + # (kept) Finalize staged report fragments (promote to reports/) + + # (removed duplicate) Finalize staged report fragments + + - name: Assert T coverage (after promotion) + if: always() + shell: bash + run: | + set -euo pipefail + missing=() + for id in T-A T-B T-C T-D T-E T-F T-G T-H T-I T-J; do + if [[ ! 
-s "reports/${id}_results.xml" ]]; then + # Accept staged fragment as present + [[ -s "reports/_staging/${id}_results.xml" ]] || missing+=("$id") + fi + done + if (( ${#missing[@]} )); then + echo "::error::Missing T fragments: ${missing[*]}" + exit 1 + fi + + - name: Canonicalize testcase names (NL/T prefixes) + if: always() + shell: bash + run: | + python3 - <<'PY' + from pathlib import Path + import xml.etree.ElementTree as ET, re, os + + RULES = [ + ("NL-0", r"\b(NL-0|Baseline|State\s*Capture)\b"), + ("NL-1", r"\b(NL-1|Core\s*Method)\b"), + ("NL-2", r"\b(NL-2|Anchor|Build\s*marker)\b"), + ("NL-3", r"\b(NL-3|End[-\s]*of[-\s]*Class\s*Content|Tail\s*test\s*[ABC])\b"), + ("NL-4", r"\b(NL-4|Console|Unity\s*console)\b"), + ("T-A", r"\b(T-?A|Temporary\s*Helper)\b"), + ("T-B", r"\b(T-?B|Method\s*Body\s*Interior)\b"), + ("T-C", r"\b(T-?C|Different\s*Method\s*Interior|ApplyBlend)\b"), + ("T-D", r"\b(T-?D|End[-\s]*of[-\s]*Class\s*Helper|TestHelper)\b"), + ("T-E", r"\b(T-?E|Method\s*Evolution|Counter|IncrementCounter)\b"), + ("T-F", r"\b(T-?F|Atomic\s*Multi[-\s]*Edit)\b"), + ("T-G", r"\b(T-?G|Path\s*Normalization)\b"), + ("T-H", r"\b(T-?H|Validation\s*on\s*Modified)\b"), + ("T-I", r"\b(T-?I|Failure\s*Surface)\b"), + ("T-J", r"\b(T-?J|Idempotenc(y|e))\b"), + ] + + def canon_name(name: str) -> str: + n = name or "" + for tid, pat in RULES: + if re.search(pat, n, flags=re.I): + # If it already starts with the correct format, leave it alone + if re.match(rf'^\s*{re.escape(tid)}\s*[—–-]', n, flags=re.I): + return n.strip() + # If it has a different separator, extract title and reformat + title_match = re.search(rf'{re.escape(tid)}\s*[:.\-–—]\s*(.+)', n, flags=re.I) + if title_match: + title = title_match.group(1).strip() + return f"{tid} — {title}" + # Otherwise, just return the canonical ID + return tid + return n + + def id_from_filename(p: Path): + n = p.name + m = re.match(r'NL(\d+)_results\.xml$', n, re.I) + if m: + return f"NL-{int(m.group(1))}" + m = re.match(r'T([A-J])_results\.xml$', n, re.I) + if m: + return f"T-{m.group(1).upper()}" + return None + + frags = list(sorted(Path("reports").glob("*_results.xml"))) + for frag in frags: + try: + tree = ET.parse(frag); root = tree.getroot() + except Exception: + continue + if root.tag != "testcase": + continue + file_id = id_from_filename(frag) + old = root.get("name") or "" + # Prefer filename-derived ID; if name doesn't start with it, override + if file_id: + # Respect file's ID (prevents T-D being renamed to NL-3 by loose patterns) + title = re.sub(r'^\s*(NL-\d+|T-[A-Z])\s*[—–:\-]\s*', '', old).strip() + new = f"{file_id} — {title}" if title else file_id + else: + new = canon_name(old) + if new != old and new: + root.set("name", new) + tree.write(frag, encoding="utf-8", xml_declaration=False) + print(f'canon: {frag.name}: "{old}" -> "{new}"') + + # Note: Do not auto-relable fragments. We rely on per-test strict emission + # and the backfill step to surface missing tests explicitly. 
+ PY + + - name: Backfill missing NL/T tests (fail placeholders) + if: always() + shell: bash + run: | + python3 - <<'PY' + from pathlib import Path + import xml.etree.ElementTree as ET + import re + + DESIRED = ["NL-0","NL-1","NL-2","NL-3","NL-4","T-A","T-B","T-C","T-D","T-E","T-F","T-G","T-H","T-I","T-J"] + seen = set() + def id_from_filename(p: Path): + n = p.name + m = re.match(r'NL(\d+)_results\.xml$', n, re.I) + if m: + return f"NL-{int(m.group(1))}" + m = re.match(r'T([A-J])_results\.xml$', n, re.I) + if m: + return f"T-{m.group(1).upper()}" + return None + + for p in Path("reports").glob("*_results.xml"): + try: + r = ET.parse(p).getroot() + except Exception: + continue + # Count by filename id primarily; fall back to testcase name if needed + fid = id_from_filename(p) + if fid in DESIRED: + seen.add(fid) + continue + if r.tag == "testcase": + name = (r.get("name") or "").strip() + for d in DESIRED: + if name.startswith(d): + seen.add(d) + break + + Path("reports").mkdir(parents=True, exist_ok=True) + for d in DESIRED: + if d in seen: + continue + frag = Path(f"reports/{d}_results.xml") + tc = ET.Element("testcase", {"classname":"UnityMCP.NL-T", "name": d}) + fail = ET.SubElement(tc, "failure", {"message":"not produced"}) + fail.text = "The agent did not emit a fragment for this test." + ET.ElementTree(tc).write(frag, encoding="utf-8", xml_declaration=False) + print(f"backfill: {d}") + PY + + - name: "Debug: list testcase names" + if: always() + run: | + python3 - <<'PY' + from pathlib import Path + import xml.etree.ElementTree as ET + for p in sorted(Path('reports').glob('*_results.xml')): + try: + r = ET.parse(p).getroot() + if r.tag == 'testcase': + print(f"{p.name}: {(r.get('name') or '').strip()}") + except Exception: + pass + PY + # ---------- Merge testcase fragments into JUnit ---------- - name: Normalize/assemble JUnit in-place (single file) if: always() @@ -301,44 +646,96 @@ jobs: from pathlib import Path import xml.etree.ElementTree as ET import re, os - def localname(tag: str) -> str: return tag.rsplit('}', 1)[-1] if '}' in tag else tag + + def localname(tag: str) -> str: + return tag.rsplit('}', 1)[-1] if '}' in tag else tag + src = Path(os.environ.get('JUNIT_OUT', 'reports/junit-nl-suite.xml')) - if not src.exists(): raise SystemExit(0) - tree = ET.parse(src); root = tree.getroot() + if not src.exists(): + raise SystemExit(0) + + tree = ET.parse(src) + root = tree.getroot() suite = root.find('./*') if localname(root.tag) == 'testsuites' else root - if suite is None: raise SystemExit(0) + if suite is None: + raise SystemExit(0) + + def id_from_filename(p: Path): + n = p.name + m = re.match(r'NL(\d+)_results\.xml$', n, re.I) + if m: + return f"NL-{int(m.group(1))}" + m = re.match(r'T([A-J])_results\.xml$', n, re.I) + if m: + return f"T-{m.group(1).upper()}" + return None + + def id_from_system_out(tc): + so = tc.find('system-out') + if so is not None and so.text: + m = re.search(r'\b(NL-\d+|T-[A-Z])\b', so.text) + if m: + return m.group(1) + return None + fragments = sorted(Path('reports').glob('*_results.xml')) added = 0 + renamed = 0 + for frag in fragments: + tcs = [] try: froot = ET.parse(frag).getroot() if localname(froot.tag) == 'testcase': - suite.append(froot); added += 1 + tcs = [froot] else: - for tc in froot.findall('.//testcase'): - suite.append(tc); added += 1 + tcs = list(froot.findall('.//testcase')) except Exception: txt = Path(frag).read_text(encoding='utf-8', errors='replace') - for m in re.findall(r'', txt, flags=re.DOTALL): - try: 
suite.append(ET.fromstring(m)); added += 1 - except Exception: pass + # Extract all testcase nodes from raw text + nodes = re.findall(r'', txt, flags=re.DOTALL) + for m in nodes: + try: + tcs.append(ET.fromstring(m)) + except Exception: + pass + + # Guard: keep only the first testcase from each fragment + if len(tcs) > 1: + tcs = tcs[:1] + + test_id = id_from_filename(frag) + + for tc in tcs: + current_name = tc.get('name') or '' + tid = test_id or id_from_system_out(tc) + # Enforce filename-derived ID as prefix; repair names if needed + if tid and not re.match(r'^\s*(NL-\d+|T-[A-Z])\b', current_name): + title = current_name.strip() + new_name = f'{tid} — {title}' if title else tid + tc.set('name', new_name) + elif tid and not re.match(rf'^\s*{re.escape(tid)}\b', current_name): + # Replace any wrong leading ID with the correct one + title = re.sub(r'^\s*(NL-\d+|T-[A-Z])\s*[—–:\-]\s*', '', current_name).strip() + new_name = f'{tid} — {title}' if title else tid + tc.set('name', new_name) + renamed += 1 + suite.append(tc) + added += 1 + if added: # Drop bootstrap placeholder and recompute counts - removed_bootstrap = 0 for tc in list(suite.findall('.//testcase')): - name = (tc.get('name') or '') - if name == 'NL-Suite.Bootstrap': + if (tc.get('name') or '') == 'NL-Suite.Bootstrap': suite.remove(tc) - removed_bootstrap += 1 testcases = suite.findall('.//testcase') - tests_cnt = len(testcases) failures_cnt = sum(1 for tc in testcases if (tc.find('failure') is not None or tc.find('error') is not None)) - suite.set('tests', str(tests_cnt)) + suite.set('tests', str(len(testcases))) suite.set('failures', str(failures_cnt)) - suite.set('errors', str(0)) - suite.set('skipped', str(0)) + suite.set('errors', '0') + suite.set('skipped', '0') tree.write(src, encoding='utf-8', xml_declaration=True) - print(f"Added {added} testcase fragments; removed bootstrap={removed_bootstrap}; tests={tests_cnt}; failures={failures_cnt}") + print(f"Appended {added} testcase(s); renamed {renamed} to canonical NL/T names.") PY # ---------- Markdown summary from JUnit ---------- @@ -349,14 +746,13 @@ jobs: python3 - <<'PY' import xml.etree.ElementTree as ET from pathlib import Path - import os, html + import os, html, re def localname(tag: str) -> str: return tag.rsplit('}', 1)[-1] if '}' in tag else tag src = Path(os.environ.get('JUNIT_OUT', 'reports/junit-nl-suite.xml')) md_out = Path(os.environ.get('MD_OUT', 'reports/junit-nl-suite.md')) - # Ensure destination directory exists even if earlier prep steps were skipped md_out.parent.mkdir(parents=True, exist_ok=True) if not src.exists(): @@ -368,18 +764,32 @@ jobs: suite = root.find('./*') if localname(root.tag) == 'testsuites' else root cases = [] if suite is None else list(suite.findall('.//testcase')) - total = len(cases) - failures = sum(1 for tc in cases if (tc.find('failure') is not None or tc.find('error') is not None)) - passed = total - failures + def id_from_case(tc): + n = (tc.get('name') or '') + m = re.match(r'\s*(NL-\d+|T-[A-Z])\b', n) + if m: + return m.group(1) + so = tc.find('system-out') + if so is not None and so.text: + m = re.search(r'\b(NL-\d+|T-[A-Z])\b', so.text) + if m: + return m.group(1) + return None + + id_status = {} + name_map = {} + for tc in cases: + tid = id_from_case(tc) + ok = (tc.find('failure') is None and tc.find('error') is None) + if tid and tid not in id_status: + id_status[tid] = ok + name_map[tid] = (tc.get('name') or tid) desired = ['NL-0','NL-1','NL-2','NL-3','NL-4','T-A','T-B','T-C','T-D','T-E','T-F','T-G','T-H','T-I','T-J'] - 
name_to_case = {(tc.get('name') or ''): tc for tc in cases} - def status_for(prefix: str): - for name, tc in name_to_case.items(): - if name.startswith(prefix): - return not ((tc.find('failure') is not None) or (tc.find('error') is not None)) - return None + total = len(cases) + failures = sum(1 for tc in cases if (tc.find('failure') is not None or tc.find('error') is not None)) + passed = total - failures lines = [] lines += [ @@ -390,52 +800,59 @@ jobs: '## Test Checklist' ] for p in desired: - st = status_for(p) + st = id_status.get(p, None) lines.append(f"- [x] {p}" if st is True else (f"- [ ] {p} (fail)" if st is False else f"- [ ] {p} (not run)")) lines.append('') - # Rich per-test system-out details lines.append('## Test Details') def order_key(n: str): - try: - if n.startswith('NL-') and n[3].isdigit(): - return (0, int(n.split('.')[0].split('-')[1])) - except Exception: - pass - if n.startswith('T-') and len(n) > 2 and n[2].isalpha(): + if n.startswith('NL-'): + try: + return (0, int(n.split('-')[1])) + except: + return (0, 999) + if n.startswith('T-') and len(n) > 2: return (1, ord(n[2])) return (2, n) MAX_CHARS = 2000 - for name in sorted(name_to_case.keys(), key=order_key): - tc = name_to_case[name] - status_badge = "PASS" if (tc.find('failure') is None and tc.find('error') is None) else "FAIL" - lines.append(f"### {name} — {status_badge}") + seen = set() + for tid in sorted(id_status.keys(), key=order_key): + seen.add(tid) + tc = next((c for c in cases if (id_from_case(c) == tid)), None) + if not tc: + continue + title = name_map.get(tid, tid) + status_badge = "PASS" if id_status[tid] else "FAIL" + lines.append(f"### {title} — {status_badge}") so = tc.find('system-out') - text = '' if so is None or so.text is None else so.text.replace('\r\n','\n') - # Unescape XML entities so code reads naturally (e.g., => instead of =>) - if text: - text = html.unescape(text) + text = '' if so is None or so.text is None else html.unescape(so.text.replace('\r\n','\n')) if text.strip(): t = text.strip() if len(t) > MAX_CHARS: t = t[:MAX_CHARS] + "\n…(truncated)" - # Use a safer fence if content contains triple backticks - fence = '```' - if '```' in t: - fence = '````' - lines.append(fence) - lines.append(t) - lines.append(fence) + fence = '```' if '```' not in t else '````' + lines += [fence, t, fence] else: lines.append('(no system-out)') node = tc.find('failure') or tc.find('error') if node is not None: msg = (node.get('message') or '').strip() body = (node.text or '').strip() - if msg: lines.append(f"- Message: {msg}") - if body: lines.append(f"- Detail: {body.splitlines()[0][:500]}") + if msg: + lines.append(f"- Message: {msg}") + if body: + lines.append(f"- Detail: {body.splitlines()[0][:500]}") + lines.append('') + + for tc in cases: + if id_from_case(tc) in seen: + continue + title = tc.get('name') or '(unnamed)' + status_badge = "PASS" if (tc.find('failure') is None and tc.find('error') is None) else "FAIL" + lines.append(f"### {title} — {status_badge}") + lines.append('(unmapped test id)') lines.append('') md_out.write_text('\n'.join(lines), encoding='utf-8') @@ -478,7 +895,7 @@ jobs: p.write_text(s, encoding='utf-8', newline='\n') PY - - name: NL/T details → Job Summary + - name: NL/T details -> Job Summary if: always() run: | echo "## Unity NL/T Editing Suite — Summary" >> $GITHUB_STEP_SUMMARY @@ -538,6 +955,15 @@ jobs: - name: Stop Unity if: always() run: | - docker logs --tail 400 unity-mcp | sed -E 's/((serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true + docker 
logs --tail 400 unity-mcp | sed -E 's/((email|serial|license|password|token)[^[:space:]]*)/[REDACTED]/ig' || true docker rm -f unity-mcp || true + + - name: Return Pro license (if used) + if: always() && steps.lic.outputs.use_ebl == 'true' && steps.lic.outputs.has_serial == 'true' + uses: game-ci/unity-return-license@v2 + continue-on-error: true + env: + UNITY_EMAIL: ${{ secrets.UNITY_EMAIL }} + UNITY_PASSWORD: ${{ secrets.UNITY_PASSWORD }} + UNITY_SERIAL: ${{ secrets.UNITY_SERIAL }} \ No newline at end of file diff --git a/README-DEV.md b/README-DEV.md index debcffc7..edf293aa 100644 --- a/README-DEV.md +++ b/README-DEV.md @@ -16,6 +16,22 @@ Quick deployment and testing tools for MCP for Unity core changes. --- +## Switching MCP package sources quickly + +Run this from the unity-mcp repo, not your game's root directory. Use `mcp_source.py` to quickly switch between different MCP for Unity package sources: + +**Usage:** +```bash +python mcp_source.py [--manifest /path/to/manifest.json] [--repo /path/to/unity-mcp] [--choice 1|2|3] +``` + +**Options:** +- **1** Upstream main (CoplayDev/unity-mcp) +- **2** Remote current branch (origin + branch) +- **3** Local workspace (file: UnityMcpBridge) + +After switching, open Package Manager and Refresh to re-resolve packages. + ## Development Deployment Scripts These deployment scripts help you quickly test changes to MCP for Unity core code. @@ -46,6 +62,18 @@ Restores original files from backup. 2. Allows you to select which backup to restore 3. Restores both Unity Bridge and Python Server files +### `prune_tool_results.py` +Compacts large `tool_result` blobs in conversation JSON into concise one-line summaries. + +**Usage:** +```bash +python3 prune_tool_results.py < reports/claude-execution-output.json > reports/claude-execution-output.pruned.json +``` + +The script reads a conversation from `stdin` and writes the pruned version to `stdout`, making logs much easier to inspect or archive. + +These compact summaries dramatically cut token usage without discarding essential information. + ## Finding Unity Package Cache Path Unity stores Git packages under a version-or-hash folder. Expect something like: @@ -68,22 +96,23 @@ Note: In recent builds, the Python server sources are also bundled inside the pa ## CI Test Workflow (GitHub Actions) -We provide a CI job to run a Natural Language Editing mini-suite against the Unity test project. It spins up a headless Unity container and connects via the MCP bridge. +We provide a CI job to run a Natural Language Editing suite against the Unity test project. It spins up a headless Unity container and connects via the MCP bridge. To run it from your fork, you need the following GitHub secrets: an `ANTHROPIC_API_KEY` and Unity credentials (usually `UNITY_EMAIL` + `UNITY_PASSWORD`, or `UNITY_LICENSE` / `UNITY_SERIAL`). These are redacted in logs, so they are never visible. -- Trigger: Workflow dispatch (`Claude NL suite (Unity live)`). -- Image: `UNITY_IMAGE` (UnityCI) pulled by tag; the job resolves a digest at runtime. Logs are sanitized. -- Reports: JUnit at `reports/junit-nl-suite.xml`, Markdown at `reports/junit-nl-suite.md`. -- Publishing: JUnit is normalized to `reports/junit-for-actions.xml` and published; artifacts upload all files under `reports/`. +***To run it*** + - Trigger: In GitHub "Actions" for the repo, trigger `workflow dispatch` (`Claude NL/T Full Suite (Unity live)`). + - Image: `UNITY_IMAGE` (UnityCI) pulled by tag; the job resolves a digest at runtime. Logs are sanitized.
+ - Execution: single pass with immediate per‑test fragment emissions (strict single `<testcase>` per file). A placeholder guard fails fast if any fragment is a bare ID. Staging (`reports/_staging`) is promoted to `reports/` to reduce partial writes. + - Reports: JUnit at `reports/junit-nl-suite.xml`, Markdown at `reports/junit-nl-suite.md`. + - Publishing: JUnit is normalized to `reports/junit-for-actions.xml` and published; artifacts upload all files under `reports/`. ### Test target script - The repo includes a long, standalone C# script used to exercise larger edits and windows: - `TestProjects/UnityMCPTests/Assets/Scripts/LongUnityScriptClaudeTest.cs` Use this file locally and in CI to validate multi-edit batches, anchor inserts, and windowed reads on a sizable script. -### Add a new NL test -- Edit `.claude/prompts/nl-unity-claude-tests-mini.md` (or `nl-unity-suite-full.md` for the larger suite). -- Follow the conventions: single `<testsuite>` root, one `<testcase>` per sub-test, end system-out with `VERDICT: PASS|FAIL`. -- Keep edits minimal and reversible; include evidence windows and compact diffs. +### Adjust tests / prompts +- Edit `.claude/prompts/nl-unity-suite-t.md` to modify the NL/T steps. Follow the conventions: emit one XML fragment per test under `reports/<ID>_results.xml`, each containing exactly one `<testcase>` with a `name` that begins with the test ID. No prologue/epilogue or code fences (a minimal sketch of one fragment follows below). +- Keep edits minimal and reversible; include concise evidence. ### Run the suite 1) Push your branch, then manually run the workflow from the Actions tab. @@ -95,7 +124,6 @@ We provide a CI job to run a Natural Language Editing mini-suite against the Uni - Check: “JUnit Test Report” on the PR/commit. - Artifacts: `claude-nl-suite-artifacts` includes XML and MD. - ### MCP Connection Debugging - *Enable debug logs* in the Unity MCP window (inside the Editor) to view connection status, auto-setup results, and MCP client paths. It shows: - bridge startup/port, client connections, strict framing negotiation, and parsed frames @@ -109,24 +137,6 @@ We provide a CI job to run a Natural Language Editing mini-suite against the Uni 4. **Iterate** - repeat steps 1-3 as needed 5. **Restore** original files when done using `restore-dev.bat` - -## Switching MCP package sources quickly - -Use `mcp_source.py` to quickly switch between different MCP for Unity package sources: - -**Usage:** -```bash -python mcp_source.py [--manifest /path/to/manifest.json] [--repo /path/to/unity-mcp] [--choice 1|2|3] -``` - -**Options:** -- **1** Upstream main (CoplayDev/unity-mcp) -- **2** Remote current branch (origin + branch) -- **3** Local workspace (file: UnityMcpBridge) - -After switching, open Package Manager and Refresh to re-resolve packages.
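For reference, here is a minimal sketch of what one such fragment could look like, written the same way the workflow's backfill step writes its placeholders (the test ID, title, and evidence text are illustrative, not taken from an actual run):

```python
# Sketch: emit a single-<testcase> fragment for one test (ID/title/evidence illustrative).
import xml.etree.ElementTree as ET
from pathlib import Path

tc = ET.Element("testcase", {"classname": "UnityMCP.NL-T", "name": "T-A — Temporary Helper"})
so = ET.SubElement(tc, "system-out")
so.text = "Inserted and removed a temporary helper; validate_script reported no errors."
Path("reports").mkdir(parents=True, exist_ok=True)
# Exactly one <testcase> root per file, no XML prologue and no code fences:
ET.ElementTree(tc).write("reports/T-A_results.xml", encoding="utf-8", xml_declaration=False)
```

Writing one file per test keeps partial runs recoverable: the canonicalize and merge steps take the ID from the filename, and the backfill step adds a failure placeholder for any ID that never produced a fragment.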
- - ## Troubleshooting ### "Path not found" errors running the .bat file diff --git a/UnityMcpBridge/Editor/Tools/ManageScript.cs b/UnityMcpBridge/Editor/Tools/ManageScript.cs index 7079d7a9..0ed65afa 100644 --- a/UnityMcpBridge/Editor/Tools/ManageScript.cs +++ b/UnityMcpBridge/Editor/Tools/ManageScript.cs @@ -1347,6 +1347,10 @@ private static object EditScript( appliedCount = replacements.Count; } + // Guard against structural imbalance before validation + if (!CheckBalancedDelimiters(working, out int lineBal, out char expectedBal)) + return Response.Error("unbalanced_braces", new { status = "unbalanced_braces", line = lineBal, expected = expectedBal.ToString() }); + // No-op guard for structured edits: if text unchanged, return explicit no-op if (string.Equals(working, original, StringComparison.Ordinal)) { diff --git a/UnityMcpBridge/UnityMcpServer~/src/tools/manage_script.py b/UnityMcpBridge/UnityMcpServer~/src/tools/manage_script.py index d4e9ad43..a77b6928 100644 --- a/UnityMcpBridge/UnityMcpServer~/src/tools/manage_script.py +++ b/UnityMcpBridge/UnityMcpServer~/src/tools/manage_script.py @@ -416,6 +416,11 @@ def validate_script( "level": level, } resp = send_command_with_retry("manage_script", params) + if isinstance(resp, dict) and resp.get("success"): + diags = resp.get("data", {}).get("diagnostics", []) or [] + warnings = sum(d.get("severity", "").lower() == "warning" for d in diags) + errors = sum(d.get("severity", "").lower() in ("error", "fatal") for d in diags) + return {"success": True, "data": {"warnings": warnings, "errors": errors}} return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)} @mcp.tool(description=( @@ -588,6 +593,15 @@ def get_sha(ctx: Context, uri: str) -> Dict[str, Any]: name, directory = _split_uri(uri) params = {"action": "get_sha", "name": name, "path": directory} resp = send_command_with_retry("manage_script", params) + if isinstance(resp, dict) and resp.get("success"): + data = resp.get("data", {}) + return { + "success": True, + "data": { + "sha256": data.get("sha256"), + "lengthBytes": data.get("lengthBytes"), + }, + } return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)} except Exception as e: return {"success": False, "message": f"get_sha error: {e}"} diff --git a/UnityMcpBridge/UnityMcpServer~/src/tools/read_console.py b/UnityMcpBridge/UnityMcpServer~/src/tools/read_console.py index 098951c6..aae0e49f 100644 --- a/UnityMcpBridge/UnityMcpServer~/src/tools/read_console.py +++ b/UnityMcpBridge/UnityMcpServer~/src/tools/read_console.py @@ -40,11 +40,16 @@ def read_console( # Get the connection instance bridge = get_unity_connection() - # Set defaults if values are None + # Set defaults if values are None (conservative but useful for CI) action = action if action is not None else 'get' - types = types if types is not None else ['error', 'warning', 'log'] - format = format if format is not None else 'detailed' + types = types if types is not None else ['error'] + # Normalize types if passed as a single string + if isinstance(types, str): + types = [types] + format = format if format is not None else 'json' include_stacktrace = include_stacktrace if include_stacktrace is not None else True + # Default count to a higher value unless explicitly provided + count = 50 if count is None else count # Normalize action if it's a string if isinstance(action, str): @@ -68,6 +73,25 @@ def read_console( if 'count' not in params_dict: params_dict['count'] = None - # Use centralized retry helper + # Use 
centralized retry helper (tolerate legacy list payloads from some agents) resp = send_command_with_retry("read_console", params_dict) - return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)} \ No newline at end of file + if isinstance(resp, dict) and resp.get("success") and not include_stacktrace: + data = resp.get("data", {}) or {} + lines = data.get("lines") + if lines is None: + # Some handlers return the raw list under data + lines = data if isinstance(data, list) else [] + + def _entry(x: Any) -> Dict[str, Any]: + if isinstance(x, dict): + return { + "level": x.get("level") or x.get("type"), + "message": x.get("message") or x.get("text"), + } + if isinstance(x, (list, tuple)) and len(x) >= 2: + return {"level": x[0], "message": x[1]} + return {"level": None, "message": str(x)} + + trimmed = [_entry(l) for l in (lines or [])] + return {"success": True, "data": {"lines": trimmed}} + return resp if isinstance(resp, dict) else {"success": False, "message": str(resp)} diff --git a/UnityMcpBridge/UnityMcpServer~/src/tools/resource_tools.py b/UnityMcpBridge/UnityMcpServer~/src/tools/resource_tools.py index 23f72ac3..909cb3c1 100644 --- a/UnityMcpBridge/UnityMcpServer~/src/tools/resource_tools.py +++ b/UnityMcpBridge/UnityMcpServer~/src/tools/resource_tools.py @@ -183,10 +183,12 @@ async def read_resource( tail_lines: int | None = None, project_root: str | None = None, request: str | None = None, + include_text: bool = False, ) -> Dict[str, Any]: """ Reads a resource by unity://path/... URI with optional slicing. - One of line window (start_line/line_count) or head_bytes can be used to limit size. + By default only the SHA-256 hash and byte length are returned; set + ``include_text`` or provide window arguments to receive text. 
""" try: # Serve the canonical spec directly when requested (allow bare or with scheme) @@ -291,25 +293,43 @@ async def read_resource( start_line = max(1, hit_line - half) line_count = window - # Mutually exclusive windowing options precedence: - # 1) head_bytes, 2) tail_lines, 3) start_line+line_count, else full text - if head_bytes and head_bytes > 0: - raw = p.read_bytes()[: head_bytes] - text = raw.decode("utf-8", errors="replace") - else: - text = p.read_text(encoding="utf-8") - if tail_lines is not None and tail_lines > 0: - lines = text.splitlines() - n = max(0, tail_lines) - text = "\n".join(lines[-n:]) - elif start_line is not None and line_count is not None and line_count >= 0: - lines = text.splitlines() - s = max(0, start_line - 1) - e = min(len(lines), s + line_count) - text = "\n".join(lines[s:e]) + raw = p.read_bytes() + sha = hashlib.sha256(raw).hexdigest() + length = len(raw) - sha = hashlib.sha256(text.encode("utf-8")).hexdigest() - return {"success": True, "data": {"text": text, "metadata": {"sha256": sha}}} + want_text = ( + bool(include_text) + or (head_bytes is not None and head_bytes >= 0) + or (tail_lines is not None and tail_lines > 0) + or (start_line is not None and line_count is not None) + ) + if want_text: + text: str + if head_bytes is not None and head_bytes >= 0: + text = raw[: head_bytes].decode("utf-8", errors="replace") + else: + text = raw.decode("utf-8", errors="replace") + if tail_lines is not None and tail_lines > 0: + lines = text.splitlines() + n = max(0, tail_lines) + text = "\n".join(lines[-n:]) + elif ( + start_line is not None + and line_count is not None + and line_count >= 0 + ): + lines = text.splitlines() + s = max(0, start_line - 1) + e = min(len(lines), s + line_count) + text = "\n".join(lines[s:e]) + return { + "success": True, + "data": {"text": text, "metadata": {"sha256": sha}}, + } + return { + "success": True, + "data": {"metadata": {"sha256": sha, "lengthBytes": length}}, + } except Exception as e: return {"success": False, "error": str(e)} @@ -320,10 +340,10 @@ async def find_in_file( ctx: Context | None = None, ignore_case: bool | None = True, project_root: str | None = None, - max_results: int | None = 200, + max_results: int | None = 1, ) -> Dict[str, Any]: """ - Searches a file with a regex pattern and returns line numbers and excerpts. + Searches a file with a regex pattern and returns match positions only. - uri: unity://path/Assets/... 
or file path form supported by read_resource - pattern: regular expression (Python re) - ignore_case: case-insensitive by default @@ -345,8 +365,17 @@ async def find_in_file( results = [] lines = text.splitlines() for i, line in enumerate(lines, start=1): - if rx.search(line): - results.append({"line": i, "text": line}) + m = rx.search(line) + if m: + start_col, end_col = m.span() + results.append( + { + "startLine": i, + "startCol": start_col + 1, + "endLine": i, + "endCol": end_col + 1, + } + ) if max_results and len(results) >= max_results: break diff --git a/prune_tool_results.py b/prune_tool_results.py new file mode 100755 index 00000000..b5a53d30 --- /dev/null +++ b/prune_tool_results.py @@ -0,0 +1,58 @@ +#!/usr/bin/env python3 +import sys, json, re + +def summarize(txt): + try: + obj = json.loads(txt) + except Exception: + return f"tool_result: {len(txt)} bytes" + data = obj.get("data", {}) or {} + msg = obj.get("message") or obj.get("status") or "" + # Common tool shapes + if "sha256" in str(data): + ln = data.get("lengthBytes") or data.get("length") or "" + return f"len={ln}".strip() + if "diagnostics" in data: + diags = data["diagnostics"] or [] + w = sum(d.get("severity","" ).lower()=="warning" for d in diags) + e = sum(d.get("severity","" ).lower() in ("error","fatal") for d in diags) + ok = "OK" if not e else "FAIL" + return f"validate: {ok} (warnings={w}, errors={e})" + if "matches" in data: + m = data["matches"] or [] + if m: + first = m[0] + return f"find_in_file: {len(m)} match(es) first@{first.get('line',0)}:{first.get('col',0)}" + return "find_in_file: 0 matches" + if "lines" in data: # console + lines = data["lines"] or [] + lvls = {"info":0,"warning":0,"error":0} + for L in lines: + lvls[L.get("level","" ).lower()] = lvls.get(L.get("level","" ).lower(),0)+1 + return f"console: {len(lines)} lines (info={lvls.get('info',0)},warn={lvls.get('warning',0)},err={lvls.get('error',0)})" + # Fallback: short status + return (msg or "tool_result")[:80] + +def prune_message(msg): + if "content" not in msg: return msg + newc=[] + for c in msg["content"]: + if c.get("type")=="tool_result" and c.get("content"): + out=[] + for chunk in c["content"]: + if chunk.get("type")=="text": + out.append({"type":"text","text":summarize(chunk.get("text","" ))}) + newc.append({"type":"tool_result","tool_use_id":c.get("tool_use_id"),"content":out}) + else: + newc.append(c) + msg["content"]=newc + return msg + +def main(): + convo=json.load(sys.stdin) + if isinstance(convo, dict) and "messages" in convo: + convo["messages"]=[prune_message(m) for m in convo["messages"]] + elif isinstance(convo, list): + convo=[prune_message(m) for m in convo] + json.dump(convo, sys.stdout, ensure_ascii=False) +main() diff --git a/scripts/validate-nlt-coverage.sh b/scripts/validate-nlt-coverage.sh new file mode 100755 index 00000000..814046dc --- /dev/null +++ b/scripts/validate-nlt-coverage.sh @@ -0,0 +1,12 @@ +#!/usr/bin/env bash +set -euo pipefail +cd "$(git rev-parse --show-toplevel)" +missing=() +for id in NL-0 NL-1 NL-2 NL-3 NL-4 T-A T-B T-C T-D T-E T-F T-G T-H T-I T-J; do + [[ -s "reports/${id}_results.xml" ]] || missing+=("$id") +done +if (( ${#missing[@]} )); then + echo "Missing fragments: ${missing[*]}" + exit 2 +fi +echo "All NL/T fragments present." 
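To make the summarization concrete, here is a minimal sketch (run from the repo root; the conversation shape and `tool_use_id` are illustrative) that pipes a tiny conversation through `prune_tool_results.py` and prints the one-line summary it produces for a validation `tool_result`:

```python
# Sketch: feed a minimal conversation through prune_tool_results.py (illustrative shape).
import json
import subprocess

convo = {
    "messages": [{
        "role": "assistant",
        "content": [{
            "type": "tool_result",
            "tool_use_id": "toolu_01",  # hypothetical id
            "content": [{
                "type": "text",
                # A typical validate_script payload, reduced to its essentials here.
                "text": json.dumps({"success": True,
                                    "data": {"diagnostics": [{"severity": "warning"}]}}),
            }],
        }],
    }]
}

out = subprocess.run(
    ["python3", "prune_tool_results.py"],
    input=json.dumps(convo), capture_output=True, text=True, check=True,
).stdout
pruned = json.loads(out)
# The bulky tool_result text has been replaced by a one-line summary:
print(pruned["messages"][0]["content"][0]["content"][0]["text"])
# -> validate: OK (warnings=1, errors=0)
```

The same summarizer also collapses hash, find_in_file, and console payloads, as the other branches of `summarize()` show.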
diff --git a/tests/test_find_in_file_minimal.py b/tests/test_find_in_file_minimal.py new file mode 100644 index 00000000..91e61ad3 --- /dev/null +++ b/tests/test_find_in_file_minimal.py @@ -0,0 +1,45 @@ +import sys +import pathlib +import importlib.util +import types +import asyncio +import pytest + +ROOT = pathlib.Path(__file__).resolve().parents[1] +SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src" +sys.path.insert(0, str(SRC)) + +from tools.resource_tools import register_resource_tools # type: ignore + +class DummyMCP: + def __init__(self): + self.tools = {} + + def tool(self, *args, **kwargs): + def deco(fn): + self.tools[fn.__name__] = fn + return fn + return deco + +@pytest.fixture() +def resource_tools(): + mcp = DummyMCP() + register_resource_tools(mcp) + return mcp.tools + +def test_find_in_file_returns_positions(resource_tools, tmp_path): + proj = tmp_path + assets = proj / "Assets" + assets.mkdir() + f = assets / "A.txt" + f.write_text("hello world", encoding="utf-8") + find_in_file = resource_tools["find_in_file"] + loop = asyncio.new_event_loop() + try: + resp = loop.run_until_complete( + find_in_file(uri="unity://path/Assets/A.txt", pattern="world", ctx=None, project_root=str(proj)) + ) + finally: + loop.close() + assert resp["success"] is True + assert resp["data"]["matches"] == [{"startLine": 1, "startCol": 7, "endLine": 1, "endCol": 12}] diff --git a/tests/test_get_sha.py b/tests/test_get_sha.py index cb58ce29..42bebaba 100644 --- a/tests/test_get_sha.py +++ b/tests/test_get_sha.py @@ -71,4 +71,5 @@ def fake_send(cmd, params): assert captured["params"]["name"] == "A" assert captured["params"]["path"].endswith("Assets/Scripts") assert resp["success"] is True + assert resp["data"] == {"sha256": "abc", "lengthBytes": 1} diff --git a/tests/test_read_console_truncate.py b/tests/test_read_console_truncate.py new file mode 100644 index 00000000..b2eafd29 --- /dev/null +++ b/tests/test_read_console_truncate.py @@ -0,0 +1,92 @@ +import sys +import pathlib +import importlib.util +import types + +ROOT = pathlib.Path(__file__).resolve().parents[1] +SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src" +sys.path.insert(0, str(SRC)) + +# stub mcp.server.fastmcp +mcp_pkg = types.ModuleType("mcp") +server_pkg = types.ModuleType("mcp.server") +fastmcp_pkg = types.ModuleType("mcp.server.fastmcp") + +class _Dummy: + pass + +fastmcp_pkg.FastMCP = _Dummy +fastmcp_pkg.Context = _Dummy +server_pkg.fastmcp = fastmcp_pkg +mcp_pkg.server = server_pkg +sys.modules.setdefault("mcp", mcp_pkg) +sys.modules.setdefault("mcp.server", server_pkg) +sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg) + +def _load_module(path: pathlib.Path, name: str): + spec = importlib.util.spec_from_file_location(name, path) + mod = importlib.util.module_from_spec(spec) + spec.loader.exec_module(mod) + return mod + +read_console_mod = _load_module(SRC / "tools" / "read_console.py", "read_console_mod") + +class DummyMCP: + def __init__(self): + self.tools = {} + + def tool(self, *args, **kwargs): + def deco(fn): + self.tools[fn.__name__] = fn + return fn + return deco + +def setup_tools(): + mcp = DummyMCP() + read_console_mod.register_read_console_tools(mcp) + return mcp.tools + +def test_read_console_full_default(monkeypatch): + tools = setup_tools() + read_console = tools["read_console"] + + captured = {} + + def fake_send(cmd, params): + captured["params"] = params + return { + "success": True, + "data": {"lines": [{"level": "error", "message": "oops", "stacktrace": "trace", "time": "t"}]}, + } + + 
monkeypatch.setattr(read_console_mod, "send_command_with_retry", fake_send) + monkeypatch.setattr(read_console_mod, "get_unity_connection", lambda: object()) + + resp = read_console(ctx=None, count=10) + assert resp == { + "success": True, + "data": {"lines": [{"level": "error", "message": "oops", "stacktrace": "trace", "time": "t"}]}, + } + assert captured["params"]["count"] == 10 + assert captured["params"]["includeStacktrace"] is True + + +def test_read_console_truncated(monkeypatch): + tools = setup_tools() + read_console = tools["read_console"] + + captured = {} + + def fake_send(cmd, params): + captured["params"] = params + return { + "success": True, + "data": {"lines": [{"level": "error", "message": "oops", "stacktrace": "trace"}]}, + } + + monkeypatch.setattr(read_console_mod, "send_command_with_retry", fake_send) + monkeypatch.setattr(read_console_mod, "get_unity_connection", lambda: object()) + + resp = read_console(ctx=None, count=10, include_stacktrace=False) + assert resp == {"success": True, "data": {"lines": [{"level": "error", "message": "oops"}]}} + assert captured["params"]["includeStacktrace"] is False diff --git a/tests/test_read_resource_minimal.py b/tests/test_read_resource_minimal.py new file mode 100644 index 00000000..90d2a59b --- /dev/null +++ b/tests/test_read_resource_minimal.py @@ -0,0 +1,70 @@ +import sys +import pathlib +import asyncio +import types +import pytest + +ROOT = pathlib.Path(__file__).resolve().parents[1] +SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src" +sys.path.insert(0, str(SRC)) + +# Stub mcp.server.fastmcp to satisfy imports without full package +mcp_pkg = types.ModuleType("mcp") +server_pkg = types.ModuleType("mcp.server") +fastmcp_pkg = types.ModuleType("mcp.server.fastmcp") + +class _Dummy: + pass + +fastmcp_pkg.FastMCP = _Dummy +fastmcp_pkg.Context = _Dummy +server_pkg.fastmcp = fastmcp_pkg +mcp_pkg.server = server_pkg +sys.modules.setdefault("mcp", mcp_pkg) +sys.modules.setdefault("mcp.server", server_pkg) +sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg) + +from tools.resource_tools import register_resource_tools # type: ignore + + +class DummyMCP: + def __init__(self): + self.tools = {} + + def tool(self, *args, **kwargs): + def deco(fn): + self.tools[fn.__name__] = fn + return fn + return deco + + +@pytest.fixture() +def resource_tools(): + mcp = DummyMCP() + register_resource_tools(mcp) + return mcp.tools + + +def test_read_resource_minimal_metadata_only(resource_tools, tmp_path): + proj = tmp_path + assets = proj / "Assets" + assets.mkdir() + f = assets / "A.txt" + content = "hello world" + f.write_text(content, encoding="utf-8") + + read_resource = resource_tools["read_resource"] + loop = asyncio.new_event_loop() + try: + resp = loop.run_until_complete( + read_resource(uri="unity://path/Assets/A.txt", ctx=None, project_root=str(proj)) + ) + finally: + loop.close() + + assert resp["success"] is True + data = resp["data"] + assert "text" not in data + meta = data["metadata"] + assert "sha256" in meta and len(meta["sha256"]) == 64 + assert meta["lengthBytes"] == len(content.encode("utf-8")) diff --git a/tests/test_validate_script_summary.py b/tests/test_validate_script_summary.py new file mode 100644 index 00000000..86a8c057 --- /dev/null +++ b/tests/test_validate_script_summary.py @@ -0,0 +1,68 @@ +import sys +import pathlib +import importlib.util +import types + +ROOT = pathlib.Path(__file__).resolve().parents[1] +SRC = ROOT / "UnityMcpBridge" / "UnityMcpServer~" / "src" +sys.path.insert(0, str(SRC)) + +# stub 
mcp.server.fastmcp similar to test_get_sha +mcp_pkg = types.ModuleType("mcp") +server_pkg = types.ModuleType("mcp.server") +fastmcp_pkg = types.ModuleType("mcp.server.fastmcp") + +class _Dummy: + pass + +fastmcp_pkg.FastMCP = _Dummy +fastmcp_pkg.Context = _Dummy +server_pkg.fastmcp = fastmcp_pkg +mcp_pkg.server = server_pkg +sys.modules.setdefault("mcp", mcp_pkg) +sys.modules.setdefault("mcp.server", server_pkg) +sys.modules.setdefault("mcp.server.fastmcp", fastmcp_pkg) + +def _load_module(path: pathlib.Path, name: str): + spec = importlib.util.spec_from_file_location(name, path) + mod = importlib.util.module_from_spec(spec) + spec.loader.exec_module(mod) + return mod + +manage_script = _load_module(SRC / "tools" / "manage_script.py", "manage_script_mod") + +class DummyMCP: + def __init__(self): + self.tools = {} + + def tool(self, *args, **kwargs): + def deco(fn): + self.tools[fn.__name__] = fn + return fn + return deco + +def setup_tools(): + mcp = DummyMCP() + manage_script.register_manage_script_tools(mcp) + return mcp.tools + +def test_validate_script_returns_counts(monkeypatch): + tools = setup_tools() + validate_script = tools["validate_script"] + + def fake_send(cmd, params): + return { + "success": True, + "data": { + "diagnostics": [ + {"severity": "warning"}, + {"severity": "error"}, + {"severity": "fatal"}, + ] + }, + } + + monkeypatch.setattr(manage_script, "send_command_with_retry", fake_send) + + resp = validate_script(None, uri="unity://path/Assets/Scripts/A.cs") + assert resp == {"success": True, "data": {"warnings": 1, "errors": 2}}
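Taken together with the tests above, the resource-tool changes mean a caller can hash-check a file cheaply and only pull text when it asks for a window. A minimal sketch, reusing the same `DummyMCP` registration pattern and the `sys.path`/`fastmcp` stubbing shown in the tests; the temp project, file contents, and the `start_line`/`line_count` window parameters follow the docstring and tests, but treat the exact call shape as an assumption:

```python
# Sketch: metadata-only read by default, text only when a window is requested.
import asyncio
import pathlib
import tempfile
from tools.resource_tools import register_resource_tools  # needs the src/ sys.path setup from the tests

class DummyMCP:
    def __init__(self):
        self.tools = {}
    def tool(self, *args, **kwargs):
        def deco(fn):
            self.tools[fn.__name__] = fn
            return fn
        return deco

mcp = DummyMCP()
register_resource_tools(mcp)
read_resource = mcp.tools["read_resource"]

async def demo() -> None:
    proj = pathlib.Path(tempfile.mkdtemp())
    (proj / "Assets").mkdir()
    (proj / "Assets" / "A.txt").write_text("hello world", encoding="utf-8")

    # Default: sha256 + lengthBytes only; no file body is returned.
    meta = await read_resource(uri="unity://path/Assets/A.txt", ctx=None,
                               project_root=str(proj))
    print(meta["data"]["metadata"])      # {'sha256': '…', 'lengthBytes': 11}

    # Explicit window: supplying start_line/line_count opts back into text.
    window = await read_resource(uri="unity://path/Assets/A.txt", ctx=None,
                                 project_root=str(proj),
                                 start_line=1, line_count=1)
    print(window["data"]["text"])        # 'hello world'

asyncio.run(demo())
```

Defaulting to metadata keeps routine hash checks cheap, while windowed reads remain available when the text itself is actually needed.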