Context
Phase 2 of the platform rollout moves MAPROOM and TRAILHEAD out of `azurelocal-S2DCartographer` and into `platform/testing/` as the canonical testing framework for all AzureLocal repos.
Today both frameworks are implicitly S2D-shaped (cluster fabric, storage pools, IIC canon for hyperconverged layouts). Once they're centrally managed, they need intelligence to serve consumers whose concerns aren't S2D at all.
Why this is high priority
- Phase 2 locks the public surface of `AzureLocal.Maproom` and the TRAILHEAD template shape. If we ship v0.2.0 with S2D-specific primitives baked in, every consumer after the first two (S2DCartographer, ranger) bends the framework to their repo or skips it entirely — the same drift problem ADR-0002 just solved for standards.
- Phase 3 reusable-workflow rollout is downstream of this: if repos can't express their own test intents in MAPROOM vocabulary, the shared `run-tests.yml` can't actually run them.
- Resolving this now (during Phase 2 design) is O(think); resolving it after v0.2.0 ships is O(break consumers).
Questions to work through
- Classification — how do we classify a testing tool? Proposed axes; we need a decision on each:
- Scope: infra fabric / platform feature / workload / user-journey / compliance / performance.
- Target: cluster-level / node-level / module-level / repo-level / org-level.
- Authority: unit (code owns it) / contract (platform owns the contract, repo owns the fixture) / canonical (platform owns both).
- Lifecycle phase: pre-deploy / deploy / post-deploy / drift-audit / incident-response.
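To make the four proposed axes concrete, here is a minimal sketch of what the rubric could look like as a data structure. The enum members mirror the values listed above; the `maproom_today` classification at the end is an assumption about MAPROOM's current implicit shape, not a decision this issue has made.

```python
from dataclasses import dataclass
from enum import Enum

# Axis values copied from the proposed rubric above; names are illustrative.
class Scope(Enum):
    INFRA_FABRIC = "infra fabric"
    PLATFORM_FEATURE = "platform feature"
    WORKLOAD = "workload"
    USER_JOURNEY = "user-journey"
    COMPLIANCE = "compliance"
    PERFORMANCE = "performance"

class Target(Enum):
    CLUSTER = "cluster-level"
    NODE = "node-level"
    MODULE = "module-level"
    REPO = "repo-level"
    ORG = "org-level"

class Authority(Enum):
    UNIT = "unit"            # code owns it
    CONTRACT = "contract"    # platform owns the contract, repo owns the fixture
    CANONICAL = "canonical"  # platform owns both

class Lifecycle(Enum):
    PRE_DEPLOY = "pre-deploy"
    DEPLOY = "deploy"
    POST_DEPLOY = "post-deploy"
    DRIFT_AUDIT = "drift-audit"
    INCIDENT_RESPONSE = "incident-response"

@dataclass(frozen=True)
class ToolClassification:
    scope: Scope
    target: Target
    authority: Authority
    lifecycle: Lifecycle

# How MAPROOM probably classifies today (an assumption, not a ruling):
# canonical cluster-fabric assertions run after deployment.
maproom_today = ToolClassification(
    scope=Scope.INFRA_FABRIC,
    target=Target.CLUSTER,
    authority=Authority.CANONICAL,
    lifecycle=Lifecycle.POST_DEPLOY,
)
```

A rubric in this form would also let the Phase 3 `run-tests.yml` filter tools by axis (e.g. run only `DRIFT_AUDIT` tools on a schedule) rather than hard-coding tool names.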
- What can each tool legitimately test? Today's implicit answers that we need to make explicit:
- MAPROOM: fixtures + assertions about a cluster shape. What's the shape's vocabulary? Does it extend to non-S2D shapes (AVD host pools, FSLogix profile containers, VM-conversion inventories, Nutanix source manifests)?
- TRAILHEAD: scripted walkthroughs / scenario runs. What counts as a trailhead — only guided demos, or also migration rehearsals and drift remediation runbooks?
- IIC canon: today it's `iic-org`, `iic-cluster-01`, `iic-networks`. Is `iic-*` the right container for AVD tenancy, FSLogix profile maps, Nutanix source fleets, or do those need sibling canons?
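One way to make the sibling-canon question concrete: if every canonical fixture carried an explicit shape discriminator, `iic-*` and any non-S2D canon could coexist under one loader without MAPROOM knowing about S2D specifically. Everything below — the field names, the `kind` values, the `avd-host-pool` record — is a hypothetical sketch, not MAPROOM's actual fixture format.

```python
# Hypothetical fixture records; the real IIC canon schema may differ entirely.
FIXTURES = [
    {"name": "iic-cluster-01", "kind": "s2d-cluster", "nodes": 4},
    {"name": "iic-networks", "kind": "s2d-network-map", "vlans": [10, 20]},
    # A non-S2D consumer's canon slots in without touching the S2D records:
    {"name": "avd-pool-01", "kind": "avd-host-pool", "session_hosts": 8},
]

def fixtures_of_kind(kind: str) -> list[dict]:
    """Select canonical fixtures by their shape discriminator."""
    return [f for f in FIXTURES if f["kind"] == kind]
```

Under this sketch the "sibling canons" question reduces to namespace policy (one fixture store with `kind`, or one store per canon), which is a smaller decision than extending the `iic-*` vocabulary itself.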
- Brainstorm additional toolsets based on what the 13 sibling repos actually do. Seed list — none of these exist yet; the question is which deserve to become first-class platform testing tools vs. stay repo-local:
  | Candidate | Motivation | Likely consumer repos |
  | --- | --- | --- |
  | COMPASS — compliance/policy assertion harness | Azure Policy, CIS, STIG checks against a live cluster, reported against the IIC canon | platform, avd, sofs-fslogix, nutanix-migration |
  | LEDGER — migration-inventory differ | Before/after inventory diffs for VM conversion and Nutanix cutover; asserts no workloads/disks/NICs dropped | vm-conversion-toolkit, nutanix-migration |
  | PULSE — synthetic-workload load harness | Standardized load profiles (IOPS mix, RDP session counts, file-share patterns) emitted against a cluster, correlated to MAPROOM expected capacity | loadtools, sofs-fslogix, avd |
  | STORYBOARD — AVD/user-journey scenario runner | Session-broker-aware journey scripts (logon, app launch, profile mount, logoff) with pass/fail gates | avd, copilot, sofs-fslogix |
  | MUSTER — repo conformance checker | Asserts a repo conforms to platform standards (files present, CI wired, STANDARDS.md stub in place, CODEOWNERS synced) — the Phase 4 drift audit's actual engine | platform (as runner), all siblings (as subjects) |
  | OUTPOST — demo-env provisioner | Wraps MAPROOM fixtures into a reproducible lab deploy — lets training/demo repos stand up a canonical cluster | training, demo-repository, copilot |
  | SURVEYOR / RANGER / S2D integrations | These repos already exist as consumers — decide which ones contribute testing primitives back up into platform vs. only consume | surveyor, ranger, S2DCartographer |
For each: should it ship as part of `AzureLocal.Maproom` v1, a sibling module (`AzureLocal.Compass`, `AzureLocal.Pulse`, …), or stay deferred?
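Of the candidates, MUSTER is the most mechanically specifiable, so a sketch may help scope it. The required-file list below is assembled from the artifacts this issue names (STANDARDS.md stub, CODEOWNERS, CI wiring) and the workflow filename comes from the Phase 3 mention of `run-tests.yml`; none of it is a settled standard.

```python
from pathlib import Path

# Files a MUSTER-style check might assert on. Illustrative list only,
# drawn from the conformance items named in this issue.
REQUIRED_FILES = [
    "STANDARDS.md",
    "CODEOWNERS",
    ".github/workflows/run-tests.yml",
]

def conformance_gaps(repo_root: Path) -> list[str]:
    """Return the required files missing from a repo checkout."""
    return [f for f in REQUIRED_FILES if not (repo_root / f).is_file()]
```

Run against each sibling checkout, a non-empty return becomes a drift-audit finding — which is why MUSTER reads like the natural engine for Phase 4 regardless of whether it ships inside `AzureLocal.Maproom` or as a sibling module.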
- Repo survey — walk each sibling repo and catalogue what it tests today and what it wishes it could assert:
- azurelocal-avd, azurelocal-sofs-fslogix, azurelocal-loadtools, azurelocal-vm-conversion-toolkit, azurelocal-toolkit, azurelocal-copilot, azurelocal-training, azurelocal-nutanix-migration, azurelocal-ranger, azurelocal-S2DCartographer, azurelocal-surveyor.
- Output: a table of (repo → current test surface → unmet need → proposed toolset).
Deliverables
Blocks / blocked-by
- Blocks Phase 2 (`platform-epic-issue-body.md` §Phase 2) — MAPROOM framework extraction shouldn't start until the classification rubric is agreed, or we lock in S2D-shaped assumptions.
- Does not block Phase 1 (standards consolidation) — that work proceeds in parallel.
Out of scope for this issue
- Implementing any of the candidate toolsets (each gets its own issue/ADR once classified).
- Renaming existing MAPROOM/TRAILHEAD concepts — naming discussion belongs in the classification ADR, not here.