diff --git a/.gitattributes b/.gitattributes
index df18e443..762e5efe 100644
--- a/.gitattributes
+++ b/.gitattributes
@@ -1,4 +1,5 @@
/.github export-ignore
+/.task export-ignore
/bin export-ignore
/docs export-ignore
/example export-ignore
diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
new file mode 100644
index 00000000..9676adfd
--- /dev/null
+++ b/.github/copilot-instructions.md
@@ -0,0 +1,1126 @@
+# Bashunit - Copilot Instructions
+
+> **Prime directive**: We practice **Test-Driven Development (TDD) by default**. Write a failing test first, make it fail **for the right reason**, implement the **smallest** change to pass, then **refactor** while keeping all tests green.
+
+> **MANDATORY WORKFLOW RULE (NO EXCEPTIONS):** **STOP! BEFORE READING FURTHER** - Create a task file `./.tasks/YYYY-MM-DD-feature-title.md` for **EVERY SINGLE CHANGE** including documentation updates, instruction modifications, bug fixes, new features, refactoring - **EVERYTHING**. Work within this file throughout the entire task, documenting all progress and thought process.
+
+> **MANDATORY WORKFLOW RULE (NO EXCEPTIONS):** **STOP! BEFORE READING FURTHER** - To finish any task, the definition of done must be fully satisfied.
+
+> **Clarity Rule:** If acceptance criteria or intended outcomes are unclear or ambiguous, **ask clarifying questions before making any change** and record the answers in the active task file.
+
+> **External Developer Tools**: This repository includes `AGENTS.md` in the root directory with essential workflow information for external developer tools. When making significant changes to development workflow, TDD methodology, or core patterns, consider updating `AGENTS.md` to keep it synchronized with these comprehensive instructions.
+
+---
+
+## Cross-file synchronization with `AGENTS.md`
+
+To keep guidance coherent and up to date, we enforce a two-way sync policy:
+- When `copilot-instructions.md` changes, evaluate whether the change belongs in `AGENTS.md` and update `AGENTS.md` automatically if so.
+- When `AGENTS.md` changes, evaluate whether the change belongs in `copilot-instructions.md` and update this file automatically if so.
+- If a change is intentionally not mirrored, record the rationale in the active `./.tasks/YYYY-MM-DD-slug.md`.
+
+---
+
+## What this repository is
+
+An open-source **library** providing a fast, portable Bash testing framework: **bashunit**. It offers:
+
+* Minimal overhead, plain Bash test files.
+* Rich **assertions**, **test doubles (mock/spy)**, **data providers**, **snapshots**, **skip/todo**, **globals utilities**, **custom assertions**, **benchmarks**, and **standalone** runs.
+
+**Compatibility**: Bash 3.2+ (macOS, Linux, WSL). No external dependencies beyond standard Unix tools.
+
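+A quick runtime check along these lines can guard newer syntax (a hedged sketch, not part of the current codebase):
+
+```bash
+# Illustrative guard: BASH_VERSINFO is available on Bash 3.2+
+if ((BASH_VERSINFO[0] < 4)); then
+  echo "Bash 3.x detected - avoid associative arrays and readarray" >&2
+fi
+```
+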
+---
+
+## STEP 0: MANDATORY TASK FILE CREATION (READ THIS FIRST)
+
+**DO NOT PROCEED WITHOUT COMPLETING THIS STEP**
+
+### EVERY agent must do this BEFORE any work:
+
+1. **STOP and CREATE task file**: `.tasks/YYYY-MM-DD-feature-title.md` (in English)
+ - Example: `.tasks/2025-09-17-add-assert-json-functionality.md`
+ - Example: `.tasks/2025-09-17-fix-mock-cleanup-bug.md`
+ - Example: `.tasks/2025-09-17-update-documentation.md`
+ - Example: `.tasks/2025-09-17-enhance-copilot-instructions.md`
+
+2. **CHOOSE appropriate template**:
+ - **New user capability**: Use Template A (new assertions, CLI features, test doubles)
+ - **Internal modifications**: Use Template B (refactors, fixes, docs)
+
+3. **FILL task information immediately**: Complete all sections with specific acceptance criteria
+
+4. **WORK within this file throughout the task**: Update test inventory, track progress, document all decisions
+
+### ABSOLUTE RULES - NO RATIONALIZATION ALLOWED:
+- **"The task is simple"** β **STILL REQUIRES TASK FILE**
+- **"It's just a bug fix"** β **STILL REQUIRES TASK FILE**
+- **"It's just documentation"** β **STILL REQUIRES TASK FILE**
+- **"I'm updating instructions"** β **STILL REQUIRES TASK FILE**
+- **"It's a tiny change"** β **STILL REQUIRES TASK FILE**
+
+**If you're reading this and haven't created a task file yet - STOP NOW and create one.**
+
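+A minimal way to bootstrap one from the shell (illustrative only - the file name and headings are placeholders, not a prescribed template):
+
+```bash
+# Create today's task file before touching any code
+task_file=".tasks/$(date +%F)-add-assert-json-functionality.md"
+mkdir -p .tasks
+printf '# Add assert_json\n\n## Acceptance criteria\n\n- [ ] ...\n' > "$task_file"
+```
+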
+---
+
+## Learn from existing tests (essential reference)
+
+**Before writing any code, study the patterns in `./tests/`:**
+
+* `tests/unit/` - Unit tests for individual functions and modules (22+ test files)
+* `tests/functional/` - Integration tests for feature combinations (6 test files)
+* `tests/acceptance/` - End-to-end CLI and workflow tests (15+ test files)
+* `tests/benchmark/` - Performance and timing tests
+
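+To run any of these suites locally, invoke the bundled runner directly (the flags mirror those used in the acceptance tests; adjust paths as needed):
+
+```bash
+# Run a single test file with the repo's own runner
+./bashunit tests/unit/assert_test.sh
+
+# Run a suite sequentially, as the acceptance fixtures do
+./bashunit --no-parallel tests/unit/
+```
+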
+**Critical test files to study for patterns:**
+
+### Core assertion patterns (`tests/unit/assert_test.sh`)
+```bash
+# Data provider pattern with @data_provider comment
+# @data_provider provider_successful_assert_true
+function test_successful_assert_true() {
+ assert_empty "$(assert_true $1)"
+}
+
+function provider_successful_assert_true() {
+ data_set true
+ data_set "true"
+ data_set 0
+}
+
+# Testing assertion failures with expected console output
+function test_unsuccessful_assert_true() {
+ assert_same\
+ "$(console_results::print_failed_test\
+ "Unsuccessful assert true" \
+ "true or 0" \
+ "but got " "false")"\
+ "$(assert_true false)"
+}
+```
+
+### Setup/teardown patterns (`tests/unit/setup_teardown_test.sh`)
+```bash
+TEST_COUNTER=1
+
+function set_up_before_script() {
+ TEST_COUNTER=$(( TEST_COUNTER + 1 ))
+}
+
+function set_up() {
+ TEST_COUNTER=$(( TEST_COUNTER + 1 ))
+}
+
+function tear_down() {
+ TEST_COUNTER=$(( TEST_COUNTER - 1 ))
+}
+
+function tear_down_after_script() {
+ TEST_COUNTER=$(( TEST_COUNTER - 1 ))
+}
+
+function test_counter_is_incremented_after_setup_before_script_and_setup() {
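+    # 1 (initial) + 1 from set_up_before_script + 1 from set_up = 3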
+ assert_same "3" "$TEST_COUNTER"
+}
+```
+
+### Test doubles patterns (`tests/functional/doubles_test.sh`)
+```bash
+function test_mock_ps_when_executing_a_script() {
+ mock ps cat ./tests/functional/fixtures/doubles_ps_output
+
+ assert_match_snapshot "$(source ./tests/functional/fixtures/doubles_script.sh)"
+}
+
+function test_spy_commands_called_when_executing_a_sourced_function() {
+ source ./tests/functional/fixtures/doubles_function.sh
+ spy ps
+ spy awk
+ spy head
+
+ top_mem
+
+ assert_have_been_called ps
+ assert_have_been_called awk
+ assert_have_been_called head
+}
+
+function test_spy_commands_called_once_when_executing_a_script() {
+ spy ps
+ ./tests/functional/fixtures/doubles_script.sh
+ assert_have_been_called_times 1 ps
+}
+```
+
+### Data provider patterns (`tests/functional/provider_test.sh`)
+```bash
+function set_up() {
+ _GLOBAL="aa-bb"
+}
+
+# @data_provider provide_multiples_values
+function test_multiple_values_from_data_provider() {
+ local first=$1
+ local second=$2
+ assert_equals "${_GLOBAL}" "$first-$second"
+}
+
+function provide_multiples_values() {
+ echo "aa" "bb"
+}
+
+# @data_provider provide_single_values
+function test_single_values_from_data_provider() {
+ local data="$1"
+ assert_not_equals "zero" "$data"
+}
+
+function provide_single_values() {
+ echo "one"
+ echo "two"
+ echo "three"
+}
+```
+
+### CLI acceptance patterns (`tests/acceptance/bashunit_fail_test.sh`)
+```bash
+function set_up_before_script() {
+ TEST_ENV_FILE="tests/acceptance/fixtures/.env.default"
+ TEST_ENV_FILE_SIMPLE="tests/acceptance/fixtures/.env.simple"
+}
+
+function test_bashunit_when_a_test_fail_verbose_output_env() {
+ local test_file=./tests/acceptance/fixtures/test_bashunit_when_a_test_fail.sh
+
+ assert_match_snapshot "$(./bashunit --no-parallel --env "$TEST_ENV_FILE" "$test_file")"
+ assert_general_error "$(./bashunit --no-parallel --env "$TEST_ENV_FILE" "$test_file")"
+}
+```
+
+### Custom assertions (`tests/functional/custom_asserts.sh`)
+```bash
+function assert_foo() {
+ local actual="$1"
+ local expected="foo"
+
+ if [[ "$expected" != "$actual" ]]; then
+ bashunit::assertion_failed "$expected" "${actual}"
+ return
+ fi
+
+ bashunit::assertion_passed
+}
+
+function assert_positive_number() {
+ local actual="$1"
+
+ if [[ "$actual" -le 0 ]]; then
+ bashunit::assertion_failed "positive number" "${actual}" "got"
+ return
+ fi
+
+ bashunit::assertion_passed
+}
+```
+
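+Once defined, custom assertions are called like built-ins from any test; a hypothetical usage sketch:
+
+```bash
+function test_custom_assertions_in_action() {
+    assert_foo "foo"             # passes: value equals "foo"
+    assert_positive_number 42    # passes: 42 is greater than 0
+}
+```
+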
+---
+
+## Before you touch any code
+
+1. **Read ADRs first**
+ * Review existing ADRs in the `adrs/` folder to understand decisions, constraints, and paved-road patterns.
+ * Current ADRs: error detection, booleans, parallel testing, metadata prefix, copilot instructions
+ * If your change introduces a significant decision, **create a new ADR** using `adrs/TEMPLATE.md`.
+
+2. **Create a task file (required)**
+   * Path: `./.tasks/YYYY-MM-DD-slug.md` (date-prefixed slug, in English)
+ * This file is **versioned** and is the single source of truth for your current task.
+
+3. **Study existing test patterns extensively**
+ * **Unit tests**: Look at `tests/unit/assert_test.sh`, `tests/unit/globals_test.sh`, `tests/unit/test_doubles_test.sh`
+ * **Functional tests**: Check `tests/functional/doubles_test.sh`, `tests/functional/provider_test.sh`
+ * **Acceptance tests**: Study `tests/acceptance/bashunit_test.sh`, `tests/acceptance/mock_test.sh`
+ * Follow established naming, structure, and assertion patterns exactly
+
+---
+
+## Double-Loop TDD
+
+We practice two nested feedback loops to deliver behavior safely and quickly.
+
+### Outer loop: acceptance first
+
+- Start from user value. For any new user-visible capability, write a high-level acceptance test that exercises the system through its public entry point (CLI, function API).
+- Keep the acceptance test red. It defines the next slice of behavior we must implement.
+- When the acceptance test is too broad, split it into thinner vertical slices that still provide visible progress.
+
+### Inner loop: design-driving tests
+
+- Drive the implementation with smaller tests created only when needed:
+ - Unit tests for individual functions and modules
+ - Functional tests for integration between components
+- Follow the classic cycle (a minimal sketch follows this list):
+ 1) **Red**: write a failing test for the next micro-behavior
+ 2) **Green**: write the minimum production code to pass
+ 3) **Refactor**: improve design in both production and tests while keeping all tests green
+
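+A minimal sketch of one inner-loop cycle (the `slugify` helper and its test are hypothetical, named here purely for illustration):
+
+```bash
+# 1) Red: write the failing test first
+function test_slugify_replaces_spaces_with_dashes() {
+    assert_same "hello-world" "$(slugify "hello world")"
+}
+
+# 2) Green: the smallest implementation that passes (Bash 3.2-safe)
+function slugify() {
+    local input="${1:-}"
+    echo "${input// /-}"
+}
+
+# 3) Refactor: improve names and structure while the suite stays green
+```
+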
+### Test inventory and prioritization
+
+- Maintain a living test inventory in the active task file under `./.tasks/`
+- Track acceptance tests, unit tests, and functional tests that define the capability
+- After every refactor, review the inventory. Add missing cases, then re-prioritize
+- The top priority is the test that is currently red
+
+### Important rules
+
+- **Never stop at tests only**: Always add the production code that actually uses the new behavior in the application flow
+- **Avoid speculative tests**: Write the next test only when a failing acceptance path or design pressure calls for it
+- **Keep tests deterministic**: No hidden time, randomness, or cross-test coupling (see the sketch after this list)
+- **Prefer observable behavior over internal structure**: If refactoring breaks a test without changing behavior, fix the test, not the refactor
+
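+For the determinism rule, one option is to pin "now" with a test double rather than calling the real `date` (a hedged sketch using the `mock` helper shown in the test-doubles examples):
+
+```bash
+function test_report_uses_pinned_date() {
+    mock date echo "2025-01-01"
+
+    assert_same "2025-01-01" "$(date)"
+}
+```
+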
+---
+
+## Bash coding standards (bashunit-specific)
+
+### Compatibility & Portability
+```bash
+# ✅ GOOD - Works on Bash 3.2+
+[[ -n "${var:-}" ]] && echo "set"
+array=("item1" "item2")
+
+# ❌ BAD - Bash 4+ only
+declare -A assoc_array
+readarray -t lines < file
+```
+
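+When a Bash 4 idiom is tempting, there is usually a 3.2-safe equivalent; for example, a portable stand-in for `readarray` (illustrative sketch):
+
+```bash
+# Read a file into an indexed array, line by line (works on Bash 3.2+)
+lines=()
+while IFS= read -r line; do
+  lines+=("$line")
+done < file
+```
+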
+### Error handling & safety (observed patterns)
+```bash
+# ✅ GOOD - Safe parameter expansion (from tests)
+local param="${1:-}"
+[[ -z "${param}" ]] && return 1
+
+# ✅ GOOD - Function existence check (from globals_test.sh)
+function existing_fn(){
+ return 0
+}
+assert_successful_code "$(is_command_available existing_fn)"
+
+# ❌ BAD - Unsafe expansion
+local param=$1 # fails if $1 is unset with set -u
+```
+
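+These patterns combine naturally into small defensive helpers; a hypothetical example:
+
+```bash
+# Illustrative guard-clause helper built from the patterns above
+function read_fixture() {
+    local path="${1:-}"
+    [[ -z "${path}" ]] && return 1
+    [[ -f "${path}" ]] || return 1
+    cat "${path}"
+}
+```
+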
+### Function naming & organization (actual patterns)
+```bash
+# ✅ GOOD - Module namespacing (from actual codebase)
+function console_results::print_failed_test() { ... }
+function console_results::print_skipped_test() { ... }
+function console_results::print_incomplete_test() { ... }
+function state::add_assertions_failed() { ... }
+function helper::normalize_test_function_name() { ... }
+
+# ✅ GOOD - Test function naming (from actual tests)
+function test_successful_assert_true() { ... }
+function test_unsuccessful_assert_true_with_custom_message() { ... }
+function test_bashunit_when_a_test_fail_verbose_output_env() { ... }
+
+# Data provider naming (from functional/provider_test.sh)
+function provide_multiples_values() { ... }
+function provide_single_values() { ... }
+```
+
+### String handling & output (real examples)
+```bash
+# ✅ GOOD - Line continuation for readability (from assert_test.sh)
+assert_same\
+ "$(console_results::print_failed_test\
+ "Unsuccessful assert true" \
+ "true or 0" \
+ "but got " "false")"\
+ "$(assert_true false)"
+
+# ✅ GOOD - Proper quoting and color handling
+local colored=$(printf '\e[31mHello\e[0m World!')
+assert_empty "$(assert_match_snapshot_ignore_colors "$colored")"
+```
+
+---
+
+## Assertion patterns (real examples from tests/unit/assert_test.sh)
+
+### Complete assertion catalog (verified in codebase)
+```bash
+# Equality assertions
+assert_same "expected" "${actual}"
+assert_not_same "unexpected" "${actual}"
+assert_equals "expected" "${actual}"  # like assert_same, but tolerant of color/style escape codes
+assert_not_equals "unexpected" "${actual}" # negated counterpart of assert_equals
+
+# Truthiness assertions
+assert_true "command_or_function"
+assert_false "failing_command"
+assert_successful_code "command" # tests exit code 0
+assert_general_error "failing_command" # tests exit code != 0
+
+# String assertions
+assert_contains "needle" "${haystack}"
+assert_not_contains "needle" "${haystack}"
+assert_matches "^[0-9]+$" "${value}" # regex matching
+assert_string_starts_with "prefix" "${string}"
+assert_string_ends_with "suffix" "${string}"
+
+# Numeric assertions
+assert_greater_than 10 "${n}"
+assert_less_than 5 "${m}"
+assert_greater_or_equal_than 10 "${n}"
+assert_less_or_equal_than 5 "${m}"
+
+# Emptiness assertions
+assert_empty "${maybe_empty}"
+assert_not_empty "${something}"
+
+# File/directory assertions (from tests/unit/file_test.sh)
+assert_file_exists "${filepath}"
+assert_file_not_exists "${filepath}"
+assert_directory_exists "${dirpath}"
+assert_directory_not_exists "${dirpath}"
+
+# Array assertions (tests/unit/assert_arrays_test.sh, if present)
+assert_array_contains "element" "${array[@]}"
+assert_array_not_contains "element" "${array[@]}"
+
+# Snapshot assertions (from tests/unit/assert_snapshot_test.sh)
+assert_match_snapshot "${output}"
+assert_match_snapshot "${output}" "custom_snapshot_name"
+assert_match_snapshot_ignore_colors "${colored_output}"
+```
+
+### Advanced assertion patterns (from real tests)
+```bash
+# Output capture and assertion (common pattern)
+assert_empty "$(assert_true true)" # success case produces no output
+
+# Multiple assertions on same output
+local output
+output="$(complex_function)"
+assert_contains "expected_part" "${output}"
+assert_not_contains "unexpected_part" "${output}"
+
+# Testing assertion failures (critical pattern from assert_test.sh)
+assert_same\
+ "$(console_results::print_failed_test\
+ "Test name" \
+ "expected_value" \
+ "but got " "actual_value")"\
+ "$(failing_assertion)"
+```
+
+---
+
+## Test doubles patterns (from tests/functional/doubles_test.sh & tests/unit/test_doubles_test.sh)
+
+### Mock patterns (with file fixtures)
+```bash
+function test_mock_with_file_content() {
+ # Mock with file content
+ mock ps cat ./tests/functional/fixtures/doubles_ps_output
+ assert_match_snapshot "$(source ./tests/functional/fixtures/doubles_script.sh)"
+}
+
+function test_mock_with_inline_content() {
+    # Mock with heredoc (the ps output below is an illustrative sample)
+    mock ps<<EOF
+PID TTY          TIME CMD
+13525 pts/7  00:01:00 bash
+EOF
+}
+```

diff --git a/adrs/adr-005-copilot-instruction-or-spec-kit.md b/adrs/adr-005-copilot-instruction-or-spec-kit.md
new file mode 100644
index 00000000..1321ab89
--- /dev/null
+++ b/adrs/adr-005-copilot-instruction-or-spec-kit.md
@@ -0,0 +1,71 @@
+# Choose Copilot Custom Instructions over Spec Kit for bashunit
+
+* Status: proposed
+* Deciders: @khru
+* Date: 2025-09-17
+
+Technical Story: We need a lightweight, high-leverage AI assist that improves contribution quality and speed without adding process overhead to a small Bash library.
+
+## Context and Problem Statement
+
+bashunit is a compact open source Bash testing library. We want AI assistance that nudges contributors toward consistent style, portability, and test structure. Two candidates exist: GitHub Copilot Custom Instructions and GitHub Spec Kit. Which approach best fits bashunit's size and workflow?
+
+## Decision Drivers
+
+* Keep contributor workflow simple and fast
+* Enforce consistent Bash and test conventions with minimal tooling
+* Reduce review friction and style nitpicks
+* Avoid heavy bootstrapping or new runtime dependencies
+* Leave room to explore structured specs later if needed
+
+## Considered Options
+
+* Copilot Custom Instructions at repository scope
+* Spec Kit as the core workflow
+* Hybrid approach: Copilot now, Spec Kit only for large initiatives
+
+## Decision Outcome
+
+Chosen option: "Copilot Custom Instructions at repository scope", because it delivers immediate guidance in Chat, the coding agent, and code review with near-zero overhead, matches bashunit's scale, and supports path-specific rules for Bash and docs. Spec Kit is valuable for multi-phase feature work but introduces extra setup and process that bashunit does not currently need.
+
+### Positive Consequences
+
+* Faster, more consistent PRs with fewer style and portability fixes
+* Guidance lives in the repo, visible and versioned with the code
+* Path-specific rules help tailor guidance for `lib/`, `tests/`, and docs
+
+### Negative Consequences
+
+* Possible conflicts with personal or organization instructions; requires awareness of precedence rules
+* Preview features in Copilot instructions can change, so we must monitor the docs
+
+## Pros and Cons of the Options
+
+### Copilot Custom Instructions at repository scope
+
+* Good, because setup is trivial: just add `.github/copilot-instructions.md` and optional `.github/instructions/*.instructions.md`
+* Good, because guidance is applied in Chat, the coding agent, and code review, where contributors already work
+* Good, because path-based `applyTo` rules let us enforce Bash portability and test naming in specific folders
+* Bad, because it is not a full specification or planning framework if we ever need complex multi-step delivery
+
+### Spec Kit as the core workflow
+
+* Good, because it structures specs, plans, and tasks for complex features and parallel exploration
+* Good, because it can coordinate with multiple agents and make specifications executable
+* Bad, because it adds Python and `uv` dependencies plus a new CLI and a multi-step process
+* Bad, because that overhead is unnecessary for a small Bash library with simple APIs and docs
+
+### Hybrid approach
+
+* Good, because we keep the repo light while reserving Spec Kit for large, time-boxed initiatives
+* Good, because it lets us validate Spec Kit on a real feature without changing the whole workflow
+* Bad, because it introduces two patterns to maintain if used frequently
+* Bad, because contributors may be unsure which process to use without clear guidance
+
+## Links
+
+* Spec Kit repository: [https://github.com/github/spec-kit](https://github.com/github/spec-kit)
+* Spec Kit blog overview: [https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/](https://github.blog/ai-and-ml/generative-ai/spec-driven-development-with-ai-get-started-with-a-new-open-source-toolkit/)
+* Copilot repository instructions: [https://docs.github.com/es/copilot/how-tos/configure-custom-instructions/add-repository-instructions](https://docs.github.com/es/copilot/how-tos/configure-custom-instructions/add-repository-instructions)
+* Copilot personal instructions: [https://docs.github.com/es/copilot/how-tos/configure-custom-instructions/add-personal-instructions](https://docs.github.com/es/copilot/how-tos/configure-custom-instructions/add-personal-instructions)
+* Copilot organization instructions: [https://docs.github.com/es/copilot/how-tos/configure-custom-instructions/add-organization-instructions](https://docs.github.com/es/copilot/how-tos/configure-custom-instructions/add-organization-instructions)