@startuml highLevelDesignTestFlow
skinparam responseMessageBelowArrow true
title
DPMDP - Function testing flow
end title



actor ApplicationOwner as AO
actor TestEngineer as TE
actor Implementer as IM
actor "ContinuousTesting/CI" as CI

participant "Function Spec\n(spec/Functions/**)" as SPEC
participant "Scenario + Fixtures\n(testing/<Function>/...)" as SCEN
participant "Jest Test Modules\n(generated)" as JEST
participant "Function Implementation\n(src/...)" as FCT
participant "Dependencies\n(mocked modules)" as DEP

== Authoring ==
AO -> SPEC : Write/maintain spec\n(inputs, outputs, dependencies)
TE -> SPEC : Read spec
TE -> SCEN : Create scenarios + fixtures\n(valid/invalid, expected outputs, errors, mocks)
TE -> JEST : Generate Jest modules\n(runner + auto-mocking)

== Local verification ==
IM -> FCT : Implement function according to spec
IM -> JEST : Run tests locally (npm test)

loop For each scenario
JEST -> SCEN : Load scenario + fixtures
JEST -> JEST : Resolve moduleConfig + errorMapping
JEST -> DEP : Install Jest mocks for all dependencies
JEST -> FCT : Call function with input fixture
alt success expected
FCT --> JEST : Return output
JEST -> JEST : Assert deep-equals expected output fixture
else error expected
FCT --> JEST : Throw/return error enum
JEST -> JEST : Assert exact error enum string
end
end

== PR validation ==
CI -> JEST : Run generated tests on every PR
CI -> CI : Publish JUnit XML + logs\nand enforce pass-as-gate
@enduml
312 changes: 108 additions & 204 deletions doc/TestingApplications/FunctionTesting/highLevelDesign.md
Function Testing automatically checks that each Function implementation ...
- **Implementer**: Applies the package of test cases and mock servers on its individual Function implementation
- **ContinuousTesting/ContinuousIntegration**: Binding the test packages into an automation chain and executing it with every pull request or merge.

### End-to-End Workflow

The ApplicationOwner writes the Function spec, which defines inputs, outputs, and dependencies.

The TestEngineer
- creates a description of the testing scenarios (e.g., expected outputs, invalid inputs, error cases)
- creates jest.js modules that mock the Functions consumed by the Function under test.

The Implementer executes the jest.js modules for the Function under test, relying on the generated mocks for the consumed Functions.

The ContinuousTesting/ContinuousIntegration executes the jest.js modules automatically with every pull request or merge.

### Description of the Testing Scenarios

The scenarios are described in a YAML file.
The YAML file shall be named "/testing/p1FunctionName/scenarios.yaml".
The scenarios.yaml file shall contain the following information:
- the input for the function under test
- the expected output for a given input
- the expected error for a given input
- the mocks






### Concept

Function Testing is **spec-driven** + **scenario-based**.

### Inputs
1. **Function spec**: `spec/Functions/**/interface.yaml` and `spec/Functions/**/variable.yaml`
This is the definition of the function. It provides:
- **Input schema**
- required fields
- types / formats (when declared)
- **Output schema**
- success output shape
- error output enum strings
- **Dependencies**
- `processing` steps define what the function calls (external calls or sub-functions)
2. **Scenario matrix: `scenarios.yaml` (one per function version)**

This is the list of test cases.
Each scenario defines:
- which **input fixture** is used
- how each dependency behaves during this scenario:
- return a fixture, or
- throw an error string
- what the function is expected to produce:
- expected success output fixture, or
- expected function-level error enum string
3. **Fixtures: JSON files**

Fixtures are committed JSON files used by scenarios:
- input fixtures (valid and invalid)
- dependency return payload fixtures
- expected (“golden”) output fixtures
4. **Configuration: `test-config.yaml`**

This maps spec dependency names to actual module paths and exports:
- function under test -> module path + export name
- dependency step name -> module path + export name
It allows the tests to be independent of repository structure changes.
5. **Error mapping: `error-mapping.yaml`**

Defines deterministic mapping from:
- “dependency failure string” -> “function error enum string”
This prevents ambiguous or inconsistent error handling between implementations.
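For illustration, an `error-mapping.yaml` could look like the following sketch (the failure strings and error enum names are hypothetical, not taken from an actual spec):

```yaml
# Maps dependency failure strings to function-level error enum strings
DEP_FAILURE_TIMEOUT: FUNCTION_ERROR_DEPENDENCY_UNAVAILABLE
DEP_FAILURE_NOT_FOUND: FUNCTION_ERROR_RESOURCE_NOT_FOUND
```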
### Sandbox Environment
The sandbox environment is a controlled, reproducible setup where Function Tests run the same locally and in CI.
**It includes**:
- a fixed Node.js + npm dependency set (pinned by lockfile / CI image)
- Jest as the test runner (`npm test`)
- committed test assets (scenarios + JSON fixtures + mappings)
- the function under test (once implemented)

**Key properties**:
- No external infrastructure is required.
- All dependencies listed in the spec processing section are mocked at module level (Jest module mocking).
## Mocking
### What is mocked?
**All dependencies** listed under `processing` are mocked.
The function under test is the only “real” code executed (once implemented).
Dependencies typically include:
- external calls (e.g., reading from MWDI/ ES)
- sub-functions called as steps
### How mocks are defined (data, not code)
Mocks are defined in `scenarios.yaml`.
Each scenario provides, per dependency:
- **return fixture path** OR **error to throw**
So the “mock implementation” is just:
- “for this step, load this JSON file and return it”
- or “throw this failure string”
### How mocks are applied (scenario runner behavior)
For each scenario test, the scenario runner performs:
1. **Load scenario**
- Read the scenario entry from `scenarios.yaml`.
2. **Resolve module paths**
- Using `test-config.yaml`, determine which module path corresponds to each dependency step.
3. **Install mocks**
- Use Jest module mocking (`jest.mock(modulePath)`) to replace each dependency module.
- Each mocked module’s exported function is configured to return the referenced fixture JSON, or to throw the referenced error string.
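The mock behavior for a single dependency can be sketched in plain CommonJS as below. All names are illustrative assumptions; in the generated tests this implementation would be installed via `jest.mock(modulePath)`:

```javascript
// Sketch: build the mock implementation for one dependency step, as configured
// in scenarios.yaml. The mock either returns a committed fixture or throws the
// configured failure string.
function makeDependencyMock(mockSpec, loadFixture) {
  return function mockedDependency() {
    if (mockSpec.error !== undefined) {
      // Failure case: the dependency fails with the configured failure string
      throw new Error(mockSpec.error);
    }
    // Success case: return the committed JSON fixture
    return loadFixture(mockSpec.fixture);
  };
}

module.exports = { makeDependencyMock };
```

Because the mock is pure data-driven code, adding a new scenario never requires writing new mock logic.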
- **The ApplicationOwner**: writes the function specification, which defines inputs, outputs, and dependencies.
- **The TestEngineer**: based on this specification, creates the testing scenarios (including valid and invalid inputs, expected success outputs, and error cases) and generates the Jest test modules that automatically mock all dependent functions consumed by the function under test.
- **The Implementer**: implements the function according to the specification and runs these Jest modules locally against the function implementation to verify correctness.
- **The ContinuousTesting/ContinuousIntegration**: ensures that all generated tests are executed automatically for every pull request, providing reproducible and deterministic validation of the function behavior.

![Overview](./diagrams/highLevelDesignTestFlow.png)

### Test Package Structure (What exists per Function version)

For each Function version, the test package consists of:

- **Scenario definition**
- `testing/FunctionName/version/scenarios.yaml`: this file acts as the test configuration, allowing new scenarios to be added without modifying the test code
- **Fixtures**
- The request payload passed to the function under test: `testing/FunctionName/version/scenarioId/input.json`
- The expected response produced by the function: `testing/FunctionName/version/scenarioId/output.json`
- The mocked response for a consumed sub-function, allowing tests to run without calling external services: `testing/FunctionName/version/scenarioId/p1ConsumedFunctionName.json`
- **Generated Jest test module(s)**
- one generated Jest test file per Function version
- `testing/FunctionName/version/tests/FunctionName.test.js`
- **Runtime environment**
- Node.js + npm dependencies
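Put together, the test package for one Function version might be laid out as follows (scenario ID and consumed-function name are illustrative):

```text
testing/
  FunctionName/
    version/
      scenarios.yaml
      scenario01/
        input.json
        output.json
        p1ConsumedFunctionName.json
      tests/
        FunctionName.test.js
```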

### Test File Generator
The test file generator creates the Jest test file for each Function version

- Path : `testing/tools/generateFunctionTests.js`

The generator works as follows:
- read `testing/FunctionName/version/scenarios.yaml`
- generate: `testing/FunctionName/version/tests/FunctionName.test.js`
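A minimal sketch of this generation step is shown below. The template and runner path are illustrative assumptions; the actual `generateFunctionTests.js` may differ:

```javascript
// Sketch: turn the scenario IDs read from scenarios.yaml into one Jest test
// per scenario, inside one generated test file per Function version.
function generateTestFile(functionName, scenarioIds) {
  const tests = scenarioIds
    .map((id) => `  test('${id}', () => runScenario('${functionName}', '${id}'));`)
    .join('\n');
  return [
    "const { runScenario } = require('../../tools/functionTestRunner');",
    `describe('${functionName}', () => {`,
    tests,
    '});',
  ].join('\n');
}

module.exports = { generateTestFile };
```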


### Scenario Definition (`scenarios.yaml`)

Scenarios are described in a YAML file:

- Path: `testing/FunctionName/version/scenarios.yaml`

The file contains:

- **Module configuration**: module path + export name for
  - the Function under test
  - each dependency step
- **Error mapping**: dependency failure string → Function-level error enum string

And for each scenario entry:

- **Scenario ID + description** (unique, stable)
- **Input fixture reference**
JSON fixture located in `testing/FunctionName/version/scenarioId/input.json`
- **Expected outcome**
- expected success output fixture in `testing/FunctionName/version/scenarioId/output.json`, or
- expected error enum string (exactly as defined in the spec)
- **Mocks for dependencies**
For each dependency step from the spec `processing` section:
- return payload fixture in `testing/FunctionName/version/scenarioId/p1ConsumedFunctionName.json`, or
- error string to throw (mapped deterministically)
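A hypothetical `scenarios.yaml` following this structure could look as follows (all paths, failure strings, and enum names are illustrative):

```yaml
moduleConfig:
  functionUnderTest:
    modulePath: src/FunctionName.js
    exportName: functionName
  dependencies:
    p1ConsumedFunctionName:
      modulePath: src/p1ConsumedFunctionName.js
      exportName: p1ConsumedFunctionName

errorMapping:
  DEP_FAILURE_TIMEOUT: FUNCTION_ERROR_DEPENDENCY_UNAVAILABLE

scenarios:
  - id: scenario01
    description: happy path
    input: scenario01/input.json
    expected:
      output: scenario01/output.json
    mocks:
      p1ConsumedFunctionName:
        fixture: scenario01/p1ConsumedFunctionName.json
  - id: scenario02
    description: dependency failure mapped to function error
    input: scenario02/input.json
    expected:
      error: FUNCTION_ERROR_DEPENDENCY_UNAVAILABLE
    mocks:
      p1ConsumedFunctionName:
        error: DEP_FAILURE_TIMEOUT
```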


### Scenario Execution (Runner Logic)

**Requirement**: All processing steps to be mocked must be implemented as importable modules (adapters/helpers/sub-functions) so that they can be mocked by Jest.

The scenario execution logic is implemented once and reused by all generated Jest test files :

- Path: `testing/tools/functionTestRunner.js`

For each scenario ID, the runner performs:

- **Load test data**
- load `testing/FunctionName/version/scenarioId/input.json`
- load mock(s) fixtures referenced in `testing/FunctionName/version/scenarioId/p1ConsumedFunctionName.json`

- **Install dependency mocks**
- all dependencies listed under `processing` are mocked at module level (Jest module mocking)
- each mocked dependency is configured to:
- return the referenced fixture JSON, or
- throw the referenced error string.
- **Execute function**
- call the function under test with the input fixture

- **Assertions**
- success: compare actual output to the expected success fixture (deep equality)
- failure: ensure actual error equals the exact expected error enum string
### Keeping error handling consistent (error mapping)
When a dependency throws a failure string, the function is expected to map it to a function-level error enum.
The mapping is defined in `error-mapping.yaml`.
This produces:
- consistent results across implementations
- stable error messages for callers
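Inside the function implementation, this mapping step might look like the following sketch (failure strings and enum values are hypothetical):

```javascript
// Sketch: deterministic mapping of dependency failure strings to the
// function-level error enums defined in the spec (illustrative values).
const ERROR_MAPPING = {
  DEP_FAILURE_TIMEOUT: 'FUNCTION_ERROR_DEPENDENCY_UNAVAILABLE',
  DEP_FAILURE_NOT_FOUND: 'FUNCTION_ERROR_RESOURCE_NOT_FOUND',
};

function mapDependencyFailure(failureString) {
  const mapped = ERROR_MAPPING[failureString];
  if (mapped === undefined) {
    // Unmapped failures are surfaced explicitly instead of being swallowed
    throw new Error(`unmapped dependency failure: ${failureString}`);
  }
  return mapped;
}

module.exports = { mapDependencyFailure };
```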
### Mock maintenance rules
When spec changes add or remove a `processing` step:
- update `test-config.yaml` so the runner can mock the new step
- update `scenarios.yaml` so each scenario defines behavior for the new step
- add fixtures for the new step’s expected return payloads
### Specification team input to support maximum automation
To enable maximum automation in creating mocks and test cases, the specification team (ApplicationOwner + TestEngineer) shall provide, per function version:
1. **Scenario matrix (`scenarios.yaml`)**
- happy path scenario(s)
- negative scenarios covering:
- all function-level input validation error enums
- representative dependency failure cases
- each scenario references fixtures and expected outcomes
2. **Fixtures (JSON)**
- function input fixtures (valid + invalid)
- dependency output fixtures for each processing dependency:
- success payload examples
- error payload examples (if modeled as return values) or error triggers (if modeled as thrown errors)
- expected function output fixtures for success scenarios
3. **Error mapping rules (`error-mapping.yaml`)**
- explicit mapping from dependency failure messages to function-level error enums
- removes ambiguity and prevents inconsistent behavior across implementations
4. **Dependency configuration (`test-config.yaml` )**
- maps dependency (processing step names) to module path + export name
- maps function under test to its module path + export name
- makes test generation mechanical and repository-structure independent
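A hypothetical `test-config.yaml` with this shape (module paths and export names are illustrative):

```yaml
functionUnderTest:
  modulePath: src/FunctionName.js
  exportName: functionName
dependencies:
  p1ConsumedFunctionName:
    modulePath: src/adapters/p1ConsumedFunctionName.js
    exportName: p1ConsumedFunctionName
```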

![Overview](./diagrams/highLevelDesignDaigram.png)
### Test Cases
#### Design
- Test cases are coded and executed automatically.
- Language / framework:
- JavaScript (CommonJS) + Jest
- Structure:
- 1 generated Jest test file per function version
- 1 test per scenario ID
#### Automatic test case creation
Test cases are automatically generated from:
- `scenarios.yaml` (list of test cases)
- `test-config.yaml` (module paths/exports)

![Overview](./diagrams/highLevelDesignTestExecution.png)
#### Reproducibility
**Goal:** A Function Test run must produce the **same result** (pass/fail and outputs) whenever it is executed against the same Git commit—locally or in CI.
We achieve this by applying the following rules:
1) **All test inputs are versioned**
- The complete test definition is stored in git:
- `scenarios.yaml` (which scenarios/tests exist and which dependency behavior to use)
- JSON fixtures (function inputs, dependency outputs, expected outputs)
- Therefore, a test run always uses the exact same test data for a given commit.
2) **No live dependencies**
- Function Tests SHALL NOT call real external systems (e.g., ES/Kafka/DB/HTTP services).
- All dependencies listed in the function spec `processing` section are mocked.
3) **Control variable inputs** (e.g., timestamps, generated identifiers)
- This is done by:
- using fixed values in fixtures, and/or
- injecting or faking the variable source in tests.
**Outcome:** Tests are deterministic, reviewable, and reliable for PR gating because their behavior depends only on the committed code + committed test assets.
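Rule 3 can be sketched by injecting the variable source, here a clock (the function and its signature are hypothetical; the real code may use a different seam):

```javascript
// Sketch: make a time-dependent function deterministic by injecting the clock.
// The default argument preserves normal runtime behavior.
function buildRecord(payload, clock = () => new Date().toISOString()) {
  return { ...payload, createdAt: clock() };
}

// In tests, a fixed clock yields the same output on every run:
const fixedClock = () => '2024-01-01T00:00:00.000Z';

module.exports = { buildRecord, fixedClock };
```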
### Test Execution, Result Documentation and Acceptance Process
**Platform**
- Local execution for developers and TestEngineers
- CI execution for commit/PR creation (Jenkins)

**Execution**
- Command: `npm test`
- The test stage produces a binary result:
- pass: function behavior matches the specification for all scenarios
- fail: behavior deviates from specification, or test assets are incomplete

**Result documentation**

Artifacts produced per run:
- JUnit XML (CI-readable)
- logs including scenario ID and mock configuration (for traceability)

**Acceptance**

A function implementation is accepted when:
- all mandatory scenarios pass
- output fixtures match for success cases
- error enum strings match exactly for failure cases
- scenario coverage meets the agreed minimum

### Deterministic Error Handling

Dependencies may fail with their own failure strings.
To ensure consistent Function behavior, the Function must map dependency failure strings to Function-level error enums.

This mapping is defined explicitly in `scenarios.yaml` so that:

- different implementations behave the same
- error messages remain stable for callers


### Maintenance Rules (When the Spec Changes)

If the spec changes (especially the `processing` section):

- update `scenarios.yaml` to define behavior for the new/changed step
- add/update fixtures for the new/changed step
- regenerate and re-run the Jest test modules



