Merged
Commits
53 commits
0aa2dfc
Initial plan
Copilot Jul 4, 2025
4d3f4ff
Add BeforeAll and AfterAll job support for Test-ModuleLocal workflows
Copilot Jul 4, 2025
9eeb8c7
Add additional test BeforeAll/AfterAll scripts for comprehensive testing
Copilot Jul 4, 2025
2a30035
Add documentation for BeforeAll/AfterAll setup and teardown scripts
Copilot Jul 4, 2025
2027a0b
Rework BeforeAll/AfterAll to use PowerShell-based steps
Copilot Jul 6, 2025
92d0958
Update documentation for PowerShell-based BeforeAll/AfterAll implemen…
Copilot Jul 6, 2025
fdfea9e
Merge branch 'main' into copilot/fix-169
MariusStorhaug Sep 11, 2025
7b69fe7
Implement separate BeforeAll/AfterAll jobs with conditional execution
Copilot Sep 11, 2025
87ac2eb
Update documentation to reflect separate BeforeAll/AfterAll job archi…
Copilot Sep 11, 2025
f6fef54
Move BeforeAll/AfterAll jobs into Test-ModuleLocal workflow and remov…
Copilot Oct 1, 2025
3e25e4c
Move matrix strategy into Test-ModuleLocal workflow and update callin…
Copilot Oct 1, 2025
0ffc613
feat: Add PowerShell scripts for setup and agent context management
MariusStorhaug Oct 1, 2025
c9151a8
feat: Update constitution and implementation plan with expanded guida…
MariusStorhaug Oct 1, 2025
849ff31
chore: Update constitution version reference in plan template to v1.1.2
MariusStorhaug Oct 1, 2025
bfee666
chore: Update plan template to reference Constitution v1.3.0
MariusStorhaug Oct 1, 2025
e5077eb
feat: Update constitution to reflect new repository structure and aut…
MariusStorhaug Oct 1, 2025
205fb8b
feat: Add feature specification for local GitHub composite action to …
MariusStorhaug Oct 1, 2025
385c238
feat: Add quickstart guide for setup-test composite action integration
MariusStorhaug Oct 1, 2025
f02098b
feat: Enhance documentation for BeforeAll/AfterAll scripts to clarify…
MariusStorhaug Oct 1, 2025
7e67478
feat: Update documentation for BeforeAll/AfterAll scripts to clarify …
MariusStorhaug Oct 1, 2025
5bb6eee
feat: Enhance BeforeAll/AfterAll documentation and update task numbering
MariusStorhaug Oct 1, 2025
97d7a9f
feat: Add BeforeAll and AfterAll scripts for Environments and remove …
MariusStorhaug Oct 1, 2025
bbdf695
Dedub envs
MariusStorhaug Oct 1, 2025
1e6248b
Fix linter
MariusStorhaug Oct 1, 2025
d5fd598
Disable BIOME
MariusStorhaug Oct 1, 2025
7c35d66
Fix powershell files
MariusStorhaug Oct 1, 2025
520af2e
Linting
MariusStorhaug Oct 1, 2025
a8791ef
Fix linters
MariusStorhaug Oct 1, 2025
71d19b6
fix linting
MariusStorhaug Oct 1, 2025
af3b34c
fix: Correct terminology in documentation for clarity and consistency
MariusStorhaug Oct 1, 2025
9cf348a
Merge branch '001-building-on-this' of https://github.com/PSModule/Pr…
MariusStorhaug Oct 1, 2025
b2e6ebc
Fix MD
MariusStorhaug Oct 1, 2025
ec576d6
fix: Enhance analyze prompt with additional structure and clarity in …
MariusStorhaug Oct 1, 2025
17fa58d
fix: Improve formatting and clarity in prompt documentation
MariusStorhaug Oct 1, 2025
2f0508f
fix: Standardize behavior rules for clarification question handling
MariusStorhaug Oct 1, 2025
74dfcbc
fix: Add section headers to prompt documentation for improved clarity
MariusStorhaug Oct 1, 2025
e1a33f2
fix: Update links and section title for clarity in constitution docum…
MariusStorhaug Oct 1, 2025
c219559
fix: Improve formatting and clarity in template files by standardizin…
MariusStorhaug Oct 1, 2025
827d022
fix: Enhance documentation clarity and formatting across multiple fil…
MariusStorhaug Oct 1, 2025
9216aaf
fix: Improve documentation formatting by removing unnecessary code bl…
MariusStorhaug Oct 1, 2025
d8da82d
fix: Correct capitalization of 'GitHub' in documentation for consistency
MariusStorhaug Oct 1, 2025
4e957db
fix: Correct capitalization of 'API' in execution flow documentation …
MariusStorhaug Oct 1, 2025
879e315
Remove backup and original Draw.io files for the flow diagram to clea…
MariusStorhaug Oct 1, 2025
f938d45
Update tests/srcWithManifestTestRepo/tests/BeforeAll.ps1
MariusStorhaug Oct 1, 2025
4ef34b4
fix: Simplify BeforeAll and AfterAll script execution in workflows by…
MariusStorhaug Oct 1, 2025
b9ffbd7
feat: Add BeforeAll and AfterAll steps for module local testing in CI…
MariusStorhaug Oct 1, 2025
764667a
Merge branch '001-building-on-this' of https://github.com/PSModule/Pr…
MariusStorhaug Oct 1, 2025
552793f
fix: Restore TestName input in CI workflows for consistency
MariusStorhaug Oct 1, 2025
6e7132e
fix: Swap TestName and Name inputs in workflow for clarity and consis…
MariusStorhaug Oct 1, 2025
07457de
fix: Update TestPath and TestName inputs to be optional with default …
MariusStorhaug Oct 1, 2025
3c6e9b3
fix: Update job name in Test-ModuleLocal workflow to use RunsOn input…
MariusStorhaug Oct 1, 2025
deded99
Remove research, specification, and task documents for the local GitH…
MariusStorhaug Oct 1, 2025
48dd4d5
fix: Remove unnecessary Push-Location and Pop-Location calls in Befor…
MariusStorhaug Oct 1, 2025
29 changes: 29 additions & 0 deletions .github/copilot-instructions.md
@@ -0,0 +1,29 @@
# Process-PSModule Development Guidelines

Auto-generated from all feature plans. Last updated: 2025-10-01

## Active Technologies

- PowerShell 7.4+ (GitHub Actions composite actions) + PSModule/GitHub-Script@v1, PSModule/Install-PSModuleHelpers@v1 (001-building-on-this)

## Project Structure

```plaintext
src/
tests/
```

## Commands

Add commands for PowerShell 7.4+ (GitHub Actions composite actions)

## Code Style

PowerShell 7.4+ (GitHub Actions composite actions): Follow standard conventions

## Recent Changes

- 001-building-on-this: Added PowerShell 7.4+ (GitHub Actions composite actions) + PSModule/GitHub-Script@v1, PSModule/Install-PSModuleHelpers@v1

<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->
111 changes: 111 additions & 0 deletions .github/prompts/analyze.prompt.md
@@ -0,0 +1,111 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

# Analyze

The user input to you can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).

User input:

$ARGUMENTS

Goal: Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/tasks` has successfully produced a complete `tasks.md`.

STRICTLY READ-ONLY: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (user must explicitly approve before any follow-up editing commands would be invoked manually).

Constitution Authority: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks—not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/analyze`.

Execution steps:

1. Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -RequireTasks -IncludeTasks` once from repo root and parse JSON for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:
- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md
Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).
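
For illustration, a minimal PowerShell sketch of this step, assuming the script emits a single JSON object whose `FEATURE_DIR` and `AVAILABLE_DOCS` properties are populated (variable names are illustrative):

```powershell
# Assumed JSON shape: a single object exposing FEATURE_DIR (string) and AVAILABLE_DOCS (array)
$prereq = & '.specify/scripts/powershell/check-prerequisites.ps1' -Json -RequireTasks -IncludeTasks | ConvertFrom-Json
$featureDir = $prereq.FEATURE_DIR

$artifacts = [ordered]@{
    SPEC  = Join-Path $featureDir 'spec.md'
    PLAN  = Join-Path $featureDir 'plan.md'
    TASKS = Join-Path $featureDir 'tasks.md'
}

foreach ($artifact in $artifacts.GetEnumerator()) {
    if (-not (Test-Path $artifact.Value)) {
        # Abort and point the user at the prerequisite command that produces the missing file
        throw "Missing $($artifact.Key) at '$($artifact.Value)'. Run the prerequisite command that generates it before /analyze."
    }
}
```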

2. Load artifacts:
- Parse spec.md sections: Overview/Context, Functional Requirements, Non-Functional Requirements, User Stories, Edge Cases (if present).
- Parse plan.md: Architecture/stack choices, Data Model references, Phases, Technical constraints.
- Parse tasks.md: Task IDs, descriptions, phase grouping, parallel markers [P], referenced file paths.
- Load constitution `.specify/memory/constitution.md` for principle validation.

3. Build internal semantic models:
- Requirements inventory: Each functional + non-functional requirement with a stable key (derive slug based on imperative phrase; e.g., "User can upload file" -> `user-can-upload-file`).
- User story/action inventory.
- Task coverage mapping: Map each task to one or more requirements or stories (inference by keyword / explicit reference patterns like IDs or key phrases).
- Constitution rule set: Extract principle names and any MUST/SHOULD normative statements.
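
A hypothetical helper for the slug derivation above; the function name and normalization rules are illustrative, not part of the toolkit:

```powershell
function ConvertTo-RequirementKey {
    # Lower-case the imperative phrase and collapse non-alphanumeric runs into single hyphens
    param([Parameter(Mandatory)][string] $Phrase)
    ($Phrase.ToLowerInvariant() -replace '[^a-z0-9]+', '-').Trim('-')
}

ConvertTo-RequirementKey 'User can upload file'   # -> user-can-upload-file
```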

4. Detection passes:
A. Duplication detection:
- Identify near-duplicate requirements. Mark lower-quality phrasing for consolidation.
B. Ambiguity detection:
- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria.
- Flag unresolved placeholders (TODO, TKTK, ???, <placeholder>, etc.).
C. Underspecification:
- Requirements with verbs but missing object or measurable outcome.
- User stories missing acceptance criteria alignment.
- Tasks referencing files or components not defined in spec/plan.
D. Constitution alignment:
- Any requirement or plan element conflicting with a MUST principle.
- Missing mandated sections or quality gates from constitution.
E. Coverage gaps:
- Requirements with zero associated tasks.
- Tasks with no mapped requirement/story.
- Non-functional requirements not reflected in tasks (e.g., performance, security).
F. Inconsistency:
- Terminology drift (same concept named differently across files).
- Data entities referenced in plan but absent in spec (or vice versa).
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without dependency note).
- Conflicting requirements (e.g., one requires Next.js while another specifies Vue as the framework).
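
Passes B and C lend themselves to a simple lexical scan. A sketch, reusing `$artifacts` from the step 1 sketch; the patterns are examples only, and the remaining passes still need semantic judgment:

```powershell
$vaguePattern       = '\b(fast|scalable|secure|intuitive|robust)\b'
$placeholderPattern = 'TODO|TKTK|\?\?\?|<placeholder>'

$specLines = Get-Content $artifacts.SPEC   # $artifacts defined in the step 1 sketch
for ($i = 0; $i -lt $specLines.Count; $i++) {
    if ($specLines[$i] -match $vaguePattern) {
        "spec.md:L$($i + 1) ambiguity: vague term '$($Matches[1])' lacks measurable criteria"
    }
    if ($specLines[$i] -match $placeholderPattern) {
        "spec.md:L$($i + 1) underspecification: unresolved placeholder"
    }
}
```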

5. Severity assignment heuristic:
- CRITICAL: Violates constitution MUST, missing core spec artifact, or requirement with zero coverage that blocks baseline functionality.
- HIGH: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion.
- MEDIUM: Terminology drift, missing non-functional task coverage, underspecified edge case.
- LOW: Style/wording improvements, minor redundancy not affecting execution order.

6. Produce a Markdown report (no file writes) with sections:

```markdown
### Specification Analysis Report

| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |
(Add one row per finding; generate stable IDs prefixed by category initial.)
```

Additional subsections:
- Coverage Summary Table:

```markdown
| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|
```

- Constitution Alignment Issues (if any)
- Unmapped Tasks (if any)
- Metrics:
* Total Requirements
* Total Tasks
* Coverage % (requirements with >=1 task)
* Ambiguity Count
* Duplication Count
* Critical Issues Count

7. At end of report, output a concise Next Actions block:
- If CRITICAL issues exist: Recommend resolving before `/implement`.
- If only LOW/MEDIUM: User may proceed, but provide improvement suggestions.
- Provide explicit command suggestions: e.g., "Run /specify with refinement", "Run /plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'".

8. Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)

Behavior rules:
- NEVER modify files.
- NEVER hallucinate missing sections—if absent, report them.
- KEEP findings deterministic: if rerun without changes, produce consistent IDs and counts.
- LIMIT total findings in the main table to 50; aggregate remainder in a summarized overflow note.
- If zero issues found, emit a success report with coverage statistics and proceed recommendation.

Context: $ARGUMENTS
163 changes: 163 additions & 0 deletions .github/prompts/clarify.prompt.md
@@ -0,0 +1,163 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
---

# Clarify

The user input to you can be provided directly by the agent or as a command argument - you **MUST** consider it before proceeding with the prompt (if not empty).

User input:

$ARGUMENTS

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/plan`. If the user explicitly states they are skipping clarification (e.g., exploratory spike), you may proceed, but must warn that downstream rework risk increases.

Execution steps:

1. Run `.specify/scripts/powershell/check-prerequisites.ps1 -Json -PathsOnly` from repo root **once**, combining both switches in a single invocation. Parse minimal JSON payload fields:
- `FEATURE_DIR`
- `FEATURE_SPEC`
- (Optionally capture `IMPL_PLAN`, `TASKS` for future chained flows.)
- If JSON parsing fails, abort and instruct user to re-run `/specify` or verify feature branch environment.
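
For illustration, a minimal sketch of this step, assuming the `-PathsOnly` output is a single JSON object exposing `FEATURE_DIR` and `FEATURE_SPEC`:

```powershell
try {
    $paths = & '.specify/scripts/powershell/check-prerequisites.ps1' -Json -PathsOnly | ConvertFrom-Json -ErrorAction Stop
} catch {
    # JSON parsing failed: abort per the instruction above
    throw 'Could not parse prerequisite JSON. Re-run /specify or verify the feature branch environment.'
}

$featureSpec = $paths.FEATURE_SPEC
if (-not (Test-Path $featureSpec)) {
    throw "Spec file '$featureSpec' not found. Run /specify first."
}
```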

2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).

Functional Scope & Behavior:
- Core user goals & success criteria
- Explicit out-of-scope declarations
- User roles / personas differentiation

Domain & Data Model:
- Entities, attributes, relationships
- Identity & uniqueness rules
- Lifecycle/state transitions
- Data volume / scale assumptions

Interaction & UX Flow:
- Critical user journeys / sequences
- Error/empty/loading states
- Accessibility or localization notes

Non-Functional Quality Attributes:
- Performance (latency, throughput targets)
- Scalability (horizontal/vertical, limits)
- Reliability & availability (uptime, recovery expectations)
- Observability (logging, metrics, tracing signals)
- Security & privacy (authN/Z, data protection, threat assumptions)
- Compliance / regulatory constraints (if any)

Integration & External Dependencies:
- External services/APIs and failure modes
- Data import/export formats
- Protocol/versioning assumptions

Edge Cases & Failure Handling:
- Negative scenarios
- Rate limiting / throttling
- Conflict resolution (e.g., concurrent edits)

Constraints & Tradeoffs:
- Technical constraints (language, storage, hosting)
- Explicit tradeoffs or rejected alternatives

Terminology & Consistency:
- Canonical glossary terms
- Avoided synonyms / deprecated terms

Completion Signals:
- Acceptance criteria testability
- Measurable Definition of Done style indicators

Misc / Placeholders:
- TODO markers / unresolved decisions
- Ambiguous adjectives ("robust", "intuitive") lacking quantification

For each category with Partial or Missing status, add a candidate question opportunity unless:
- Clarification would not materially change implementation or validation strategy
- Information is better deferred to planning phase (note internally)

3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
- Maximum of 5 total questions across the whole session.
- Each question must be answerable with EITHER:
* A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
* A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
- Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
- Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
- Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
- Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
- If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
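
One way the (Impact * Uncertainty) selection could be expressed; the categories, questions, and scores below are invented placeholders:

```powershell
# Hypothetical candidates: Impact and Uncertainty each scored 1 (low) to 3 (high)
$candidates = @(
    [pscustomobject]@{ Category = 'Non-Functional Quality Attributes'; Question = 'What is the p95 latency target?'; Impact = 3; Uncertainty = 3 }
    [pscustomobject]@{ Category = 'Edge Cases & Failure Handling'; Question = 'How are concurrent edits resolved?'; Impact = 2; Uncertainty = 3 }
    [pscustomobject]@{ Category = 'Terminology & Consistency'; Question = 'Is "job" or "run" the canonical term?'; Impact = 1; Uncertainty = 2 }
)

$queue = $candidates |
    Sort-Object { $_.Impact * $_.Uncertainty } -Descending |
    Select-Object -First 5
```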

4. Sequential questioning loop (interactive):
- Present EXACTLY ONE question at a time.
- For multiple‑choice questions render options as a Markdown table:

```markdown
| Option | Description |
|--------|-------------|
| A | <Option A description> |
| B | <Option B description> |
| C | <Option C description> (add D/E as needed up to 5) |
| Short | Provide a different short answer (<=5 words) (Include only if free-form alternative is appropriate) |
```

- For short‑answer style (no meaningful discrete options), output a single line after the question: `Format: Short answer (<=5 words)`.
- After the user answers:
* Validate the answer maps to one option or fits the <=5 word constraint.
* If ambiguous, ask for a quick disambiguation (count still belongs to same question; do not advance).
* Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
- Stop asking further questions when:
* All critical ambiguities resolved early (remaining queued items become unnecessary), OR
* User signals completion ("done", "good", "no more"), OR
* You reach 5 asked questions.
- Never reveal future queued questions in advance.
- If no valid questions exist at start, immediately report no critical ambiguities.

5. Integration after EACH accepted answer (incremental update approach):
- Maintain in-memory representation of the spec (loaded once at start) plus the raw file contents.
- For the first integrated answer in this session:
* Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
* Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
- Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
- Then immediately apply the clarification to the most appropriate section(s):
* Functional ambiguity → Update or add a bullet in Functional Requirements.
* User interaction / actor distinction → Update User Stories or Actors subsection (if present) with clarified role, constraint, or scenario.
* Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
* Non-functional constraint → Add/modify measurable criteria in Non-Functional / Quality Attributes section (convert vague adjective to metric or explicit target).
* Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create such subsection if template provides placeholder for it).
* Terminology conflict → Normalize term across spec; retain original only if necessary by adding `(formerly referred to as "X")` once.
- If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
- Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
- Preserve formatting: do not reorder unrelated sections; keep heading hierarchy intact.
- Keep each inserted clarification minimal and testable (avoid narrative drift).
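
A simplified sketch of the session bookkeeping; it appends the `## Clarifications` section and session heading at the end of the file rather than after the overview section, and `$FEATURE_SPEC`, `$question`, and `$answer` are assumed to be set already:

```powershell
$spec    = Get-Content $FEATURE_SPEC -Raw
$session = "### Session $(Get-Date -Format 'yyyy-MM-dd')"
$bullet  = "- Q: $question → A: $answer"

if ($spec -notmatch '(?m)^## Clarifications') {
    # Simplification: the instructions place this just after the overview section; appending is the minimal fallback
    $spec = $spec.TrimEnd() + "`n`n## Clarifications`n"
}
if ($spec -notmatch [regex]::Escape($session)) {
    $spec = $spec.TrimEnd() + "`n`n$session`n"
}
$spec = $spec.TrimEnd() + "`n$bullet`n"

# Overwrite after each accepted answer to minimize context loss
Set-Content -Path $FEATURE_SPEC -Value $spec -NoNewline
```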

6. Validation (performed after EACH write plus final pass):
- Clarifications session contains exactly one bullet per accepted answer (no duplicates).
- Total asked (accepted) questions ≤ 5.
- Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
- No contradictory earlier statement remains (scan for now-invalid alternative choices removed).
- Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
- Terminology consistency: same canonical term used across all updated sections.
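
Two of these checks are easy to mechanize; a sketch reusing `$spec` from the previous snippet:

```powershell
$bullets = [regex]::Matches($spec, '(?m)^- Q: .+$') | ForEach-Object Value

if ($bullets.Count -gt 5) {
    throw 'More than 5 accepted clarification questions recorded.'
}
if ($bullets | Group-Object | Where-Object Count -gt 1) {
    throw 'Duplicate clarification bullet detected.'
}
```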

7. Write the updated spec back to `FEATURE_SPEC`.

8. Report completion (after questioning loop ends or early termination):
- Number of questions asked & answered.
- Path to updated spec.
- Sections touched (list names).
- Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
- If any Outstanding or Deferred remain, recommend whether to proceed to `/plan` or run `/clarify` again later post-plan.
- Suggested next command.

Behavior rules:

- If no meaningful ambiguities found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If spec file missing, instruct user to run `/specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
- Respect user early termination signals ("stop", "done", "proceed").
- If no questions asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
- If quota reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.

Context for prioritization: $ARGUMENTS