
AgenticTesting/OpenTestingAI


🧪 Open Testing Ecosystem

🌍 The Open-Source Software Testing Knowledge Platform

Knowledge Base · Techniques · AI Agents · Standards · Templates

A structured knowledge base of 1,105 testing terms, 106 technique patterns, 15 IEEE standards, 108 output templates, and 7 specialized AI skills with 51 agents — built for Claude Cowork.

🚀 Getting Started · 📚 Knowledge Base · 🤖 Skills & AI Agents · 🕸️ Knowledge Graph · 🏗️ Architecture



🔎 Overview

The Open Testing Ecosystem is a comprehensive, standards-based software testing knowledge platform designed to run as a set of Claude Cowork skills. It encodes decades of testing knowledge from international standards, maturity models, and training curricula into a structured, queryable, AI-powered system.

✨ What It Does

  • Answers any testing question using 1,105 cross-referenced terms sourced from ISO 29119, TMMi, TMap, IEEE standards, ISTQB, and 20 training courses
  • Generates test strategies by mapping ISO 25010 quality characteristics to 106 test design techniques at risk-appropriate intensity levels
  • Creates test cases by applying formal technique procedures (BVA, EP, decision tables, state transitions, pairwise, and 100+ more)
  • Analyzes test coverage using TMap coverage groups (Process/Condition/Data/Appearance) with gap detection and recommendations
  • Produces professional reports including RAG dashboards, release advice, final test reports aligned to IEEE 829
  • Assesses process maturity against TMMi's 5 levels, 16 process areas, and 113 specific practices
  • Generates 108 templates spanning technique worksheets, test plans, charters, ARCI matrices, and more

🚀 Key Differentiators

| Feature | Open Testing | Traditional Tools |
| --- | --- | --- |
| Knowledge scope | 1,105 terms, 4 standards bodies | Vendor-specific terminology |
| Technique library | 106 formal patterns with procedures | 5-10 common techniques |
| Standards alignment | ISO 29119, TMMi, TMap, 15 IEEE | Partial or none |
| Risk-based intensity | 3-level mapping per technique | Binary pass/fail |
| AI-powered generation | 51 specialized agents | Manual effort |
| Output formats | xlsx, docx, HTML, JSON, Gherkin | Fixed formats |
| Knowledge graph | JSON-LD, Cypher, RDF/Turtle, SPARQL | No semantic layer |

🚀 Getting Started

✅ Prerequisites

  • Claude Desktop with Cowork mode enabled
  • A workspace folder for output files

📦 Installation

  1. Clone this repository into your workspace:

     ```shell
     git clone https://github.com/agentictesting/open-testing.git
     ```

  2. Copy the skills into your Claude Cowork skills directory:

     ```shell
     cp -r open-testing/skills/* ~/.claude/skills/
     ```

  3. The knowledge base files stay in place — each skill references them via relative paths.

⚡ Quick Start

Open Claude Cowork, select the folder containing the Open Testing ecosystem, and try:

  • "Assess our testing maturity against TMMi" → Runs the MANAGES pipeline
  • "Create a test strategy for an e-commerce checkout" → Runs the STRATEGY pipeline
  • "Generate boundary value analysis test cases for age validation (0-150)" → Runs the CREATES pipeline
  • "Analyze our test coverage and find gaps" → Runs the COVERED pipeline
  • "Are we ready to release version 3.2?" → Runs the REPORTS pipeline
  • "Generate a decision table template" → Runs the TEMPLATE pipeline

📚 Knowledge Base

All knowledge base files are located in knowledge-base/.

🧠 Terminology

File: terminology.json — 1,105 terms

Every term includes:

| Field | Description |
| --- | --- |
| term | Canonical term name |
| primary_description | Definition with source attribution |
| categories | Classification tags |
| knowledge_level | Foundation / Advanced / Expert |
| sources | Origin standards and courses |
| iso_29119_references | Cross-references to ISO 29119 parts |
| tmmi_references | Cross-references to TMMi process areas |
| tmap_references | Cross-references to TMap topics |
| ieee_references | Cross-references to IEEE standards |
| course_references | Cross-references to training courses |
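The field layout lends itself to simple programmatic lookup. A minimal sketch, assuming the field names documented above; the sample records are illustrative stand-ins, not real entries from terminology.json:

```python
import json

# Illustrative records mirroring the documented terminology.json fields.
sample = json.loads("""[
  {"term": "Boundary Value Analysis",
   "primary_description": "A black-box design technique (illustrative text)",
   "categories": ["test design"],
   "knowledge_level": "Foundation",
   "sources": ["ISO 29119-4"]},
  {"term": "MC/DC",
   "primary_description": "Modified condition/decision coverage (illustrative text)",
   "categories": ["coverage"],
   "knowledge_level": "Expert",
   "sources": ["DO-178C"]}
]""")

def terms_at_level(terms, level):
    """Return term names filtered by knowledge_level."""
    return [t["term"] for t in terms if t["knowledge_level"] == level]

print(terms_at_level(sample, "Foundation"))  # ['Boundary Value Analysis']
```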

🛠️ Technique Patterns

File: technique-patterns.json — 106 patterns
Individual files: techniques/ — 106 JSON files

Each technique pattern contains:

  • Metadata: Name, type (specification-based / structure-based / experience-based), coverage criterion
  • Procedure: Step-by-step instructions for applying the technique
  • Coverage criteria: What constitutes adequate coverage
  • Applicable test levels: Component, integration, system, acceptance
  • Test basis required: What input documents are needed
  • Automation suitability: Rating for automated application
  • Generation hints: AI prompts for generating test cases using this technique
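One pattern record can be sketched as a plain dictionary (field names inferred from the bullets above; all values hypothetical), together with a filter over applicable test levels:

```python
# Hypothetical shape of a single technique-pattern record.
pattern = {
    "name": "Boundary Value Analysis",
    "type": "specification-based",
    "coverage_criterion": "each boundary and its neighbours exercised",
    "procedure": [
        "Identify input partitions and their boundaries",
        "Select values on, just below, and just above each boundary",
        "Derive one test case per selected value",
    ],
    "applicable_test_levels": ["component", "system"],
    "test_basis_required": ["requirements specification"],
    "automation_suitability": "high",
}

def applicable_at(patterns, level):
    """Name the technique patterns usable at a given test level."""
    return [p["name"] for p in patterns if level in p["applicable_test_levels"]]

print(applicable_at([pattern], "system"))  # ['Boundary Value Analysis']
```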

🌳 Technique Taxonomy

```mermaid
graph TD
    TDT[Test Design Techniques]
    TDT --> SB[Specification-Based / Black-Box]
    TDT --> STB[Structure-Based / White-Box]
    TDT --> EB[Experience-Based]

    SB --> EP[Equivalence Partitioning]
    SB --> BVA[Boundary Value Analysis]
    SB --> DT[Decision Table Testing]
    SB --> STT[State Transition Testing]
    SB --> UCT[Use Case Testing]
    SB --> CT[Classification Tree Method]
    SB --> PW[Pairwise Testing]
    SB --> CEG[Cause-Effect Graphing]
    SB --> MORE1[...and 90+ more]

    STB --> BT[Branch Testing]
    STB --> ST[Statement Testing]
    STB --> MCDC[MC/DC Testing]
    STB --> PT[Path Testing]

    EB --> ET[Exploratory Testing]
    EB --> EG[Error Guessing]
    EB --> CBT[Checklist-Based Testing]
```

📏 Standards Coverage

The knowledge base encodes the following standards in full structured JSON:

📘 ISO/IEC/IEEE 29119 — Software Testing

| Part | Title | Content (OSS) |
| --- | --- | --- |
| Part 1 | Concepts & Definitions | Testing vocabulary, concepts |
| Part 2 | Test Processes | 36 processes (organizational, management, fundamental) |
| Part 3 | Test Documentation | Test plan, design, case, procedure, log templates |
| Part 4 | Test Techniques | 23 technique definitions with procedures |
| Part 5 | Keyword-Driven Testing | Framework, action words, data tables |
| Part 6 | Body of Knowledge | Knowledge areas mapped to processes |
| Part 7 | Reviews | Review types, processes, roles |

📈 TMMi — Test Maturity Model integration

```mermaid
graph TD
    root["TMMi Maturity Model"]
    L1["Level 1: Initial"]
    L2["Level 2: Managed — 5 Process Areas"]
    L3["Level 3: Defined — 4 Process Areas"]
    L4["Level 4: Measured — 4 Process Areas"]
    L5["Level 5: Optimization — 3 Process Areas"]
    root --> L1 --> L2 --> L3 --> L4 --> L5
```

5 maturity levels, 16 process areas, and 113 specific practices, organized under specific goals and generic goals.

🧭 TMap

11 organizing topics + 9 performing topics covering the complete TMap body of knowledge.

🏛️ IEEE Standards (15)

| Standard | Title | Key Content |
| --- | --- | --- |
| IEEE 829-2008 | Test & SQA Documentation | 8 test document types, MTR/LTR structures |
| IEEE 1012-2016 | Verification & Validation | V&V processes, integrity levels |
| IEEE 1028-2008 | Reviews & Audits | 5 review types (management, technical, inspection, walkthrough, audit) |
| IEEE 730-2014 | Software Quality Assurance | 16 SQA activities in 3 groups |
| IEEE 1008-1987 | Unit Testing | 3-phase unit test process |
| IEEE 982.1-1988 | Reliability Measures | 39 reliability measures with formulas |
| IEEE 982.2-1988 | Reliability Measure Application | Application guide for 982.1 measures |
| IEEE 1044.1-1995 | Anomaly Classification | 4-step classification process |
| IEEE 24765-2017 | Vocabulary | 500+ terms (systems & software engineering) |
| IEEE 29148-2018 | Requirements Engineering | Requirements processes and information items |
| IEEE 16326-2019 | Project Management | PM processes for systems & software |
| IEEE 1016-2009 | Software Design | Design description information model |
| IEEE 1016.1-1993 | Design Description Guide | Practical design documentation guide |
| IEEE 729-1983 | SE Terminology (superseded) | Historical terminology baseline |
| IEEE 730.1-1995 | SQA Plans Guide (superseded) | Companion guide to IEEE 730 |

🎯 Quality & Technique Mappings

🔗 Quality-to-Technique Mapping

File: quality-to-technique.json

Maps all 8 ISO 25010 quality characteristics and their 30 sub-characteristics to recommended test techniques:

```mermaid
graph TD
    PQ[Product Quality - ISO 25010]
    PQ --> FS[Functional Suitability]
    PQ --> PE[Performance Efficiency]
    PQ --> CO[Compatibility]
    PQ --> US[Usability]
    PQ --> RE[Reliability]
    PQ --> SE[Security]
    PQ --> MA[Maintainability]
    PQ --> PO[Portability]

    FS --> |"EP, BVA, DT, UCT"| T1[78 technique mappings]
    PE --> |"Load, stress, endurance"| T1
    SE --> |"Pen test, fuzzing"| T1
```

Each sub-characteristic specifies:

  • Primary techniques: Most effective techniques for testing this quality
  • Secondary techniques: Supporting techniques
  • Coverage criteria: What constitutes adequate coverage

🎚️ Technique-to-Intensity Mapping

File: technique-to-intensity.json

Maps 20 core techniques to three intensity levels for risk-based test design:

| Intensity | Symbol | Risk Class | Coverage Target | Effort Multiplier |
| --- | --- | --- | --- | --- |
| Light | ● | C (Low) | 60%+ | 1.0x |
| Standard | ●● | B (Medium) | 80%+ | 1.5-2.0x |
| Thorough | ●●● | A (High) | 95%+ | 2.5-4.0x |

Each technique entry defines specific guidance for what "light", "standard", and "thorough" mean (e.g., for BVA: light = boundary values only; standard = boundary + invalid partitions; thorough = boundary + invalid + special values + combinatorial).
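The BVA guidance above can be sketched as a small generator. The exact value sets per intensity are an interpretation of that guidance, not the file's actual definitions, and using 0 as a "special value" is an illustrative choice:

```python
def bva_values(lo, hi, intensity="standard"):
    """Boundary values for an integer range [lo, hi] at three intensity
    levels. The light/standard/thorough interpretation is an assumption."""
    values = {lo, hi}                        # light: boundaries only
    if intensity in ("standard", "thorough"):
        values |= {lo - 1, hi + 1}           # add invalid neighbours
    if intensity == "thorough":
        values |= {lo + 1, hi - 1, 0}        # extra on-range + special value
    return sorted(values)

print(bva_values(18, 65, "standard"))  # [17, 18, 65, 66]
```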

🧾 Output Templates

File: template-analysis.json — 108 templates across 10 categories

| Category | Templates | Examples |
| --- | --- | --- |
| Technique | 5 | BVA worksheet, Decision table, State transition matrix |
| Strategy | 12 | Test strategy table, Quality engineering strategy, Intensity table |
| Test Execution | 11 | Test case log, Execution tracker, Defect summary |
| Exploratory | 4 | Session charter, Session report, Tour guide |
| Defect Management | 4 | Anomaly report, Triage form, Root cause analysis |
| Checklist | 5 | Testability checklist, Review checklist, Environment checklist |
| E2E Chain Testing | 7 | Chain card, Process cycle test, Integration map |
| Reporting | 6 | Progress report, Release advice, Final test report |
| Estimation | 4 | Test point analysis, Effort estimation, Resource plan |
| AI Prompt | 1 | AI test generation prompt template |

🎓 Courses & Supporting Materials

Knowledge extracted from 20 training courses and supporting materials comprising 10,284 markdown source files:

| Course Category | Courses | Content |
| --- | --- | --- |
| ISTQB Foundation | 1 | 6 modules, 26 learning objectives |
| ISTQB Advanced | 3 | Test Manager, Test Analyst, Technical Test Analyst |
| Specialist | 5 | Agile, Automation, Usability, Risk-Based, Performance |
| Technique-Focused | 11 | BVA, EP, DT, STT, CTM, Pairwise, Exploratory, and more |

8 Knowledge Domains identified from supporting materials:

  1. Context-Driven Testing — Session-based test management, charters, tours
  2. Safety-Critical Testing — IEC 61508, DO-178C, autonomous vehicle testing
  3. Usability Testing — Think-aloud protocol, heuristic evaluation, accessibility
  4. Automation Engineering — Framework architecture, keyword-driven, data-driven
  5. Agile Testing — Whole team approach, continuous testing, BDD
  6. Risk-Based Testing — Product risk, project risk, risk quantification
  7. Test Measurement — Metrics, defect density, DRE, test effectiveness
  8. Classification Trees — Classification tree method, combinatorial testing

🤖 Skills & AI Agents

The ecosystem provides 7 Claude Cowork skills containing 51 specialized AI agents. Each skill follows a pipeline architecture where agents process data sequentially, enriching it at each stage.

```mermaid
graph LR
    INPUT[Requirements / Context]
    INPUT --> STRATEGY
    STRATEGY --> PLANNED
    PLANNED --> TEMPLATE
    STRATEGY --> CREATES
    TEMPLATE --> CREATES
    CREATES --> COVERED
    COVERED --> REPORTS
    REPORTS --> MANAGES
```

1️⃣ STRATEGY — open-testing-design

8 Agents for test strategy, risk analysis, and technique selection.

| Agent | Role | Output |
| --- | --- | --- |
| S_scope | Scope Analyzer | Test scope boundaries |
| T_technique | Technique Selector | Recommended techniques per quality/risk |
| R_risk | Risk Analyzer | Quality Risk Analysis (QRA) |
| A_approach | Approach Designer | Test strategy document |
| T_testability | Testability Assessor | Testability review report |
| E_estimation | Effort Estimator | Test effort estimates |
| G_governance | Governance Designer | ARCI matrix, QAM/QPM |
| Y_yield | Yield Optimizer | Strategy optimization |

Example: "Create a test strategy for our payment gateway"
→ S_scope analyzes boundaries → R_risk performs QRA → T_technique selects techniques per risk class → A_approach produces the strategy document → E_estimation calculates effort

2️⃣ PLANNED — open-testing-plan

7 Agents for test planning, scheduling, and environment management.

| Agent | Role | Output |
| --- | --- | --- |
| P_plan | Master Test Planner | ISO 29119-3 compliant MTP |
| L_levels | Level Planner | Level Test Plans (unit, integration, system, acceptance) |
| A_allocation | Resource Allocator | Resource & schedule plan |
| N_nonfunctional | NFR Planner | Non-functional test matrix |
| N_network | E2E Chain Planner | End-to-end chain cards |
| E_environment | Environment Planner | Environment specifications |
| D_data | Data Strategist | Test data master sheet |

Example: "Create a master test plan for Release 4.0"
→ P_plan generates MTP → L_levels creates level plans → A_allocation assigns resources → E_environment specs environments → D_data plans test data

3️⃣ TEMPLATE — open-testing-templates

8 Agents generating any of the 108 output templates.

| Agent | Role | Template Categories |
| --- | --- | --- |
| T_technique | Technique Templates | BVA worksheets, decision tables, state matrices |
| E_execution | Execution Templates | Test case logs, execution trackers |
| M_management | Management Templates | Defect forms, triage sheets |
| P_planning | Planning Templates | Strategy tables, estimation sheets |
| L_landscape | Landscape Templates | E2E chains, integration maps |
| A_accountability | Accountability Templates | ARCI matrices, QAM/QPM |
| T_traceability | Traceability Templates | RTM, coverage matrices |
| E_exploratory | Exploratory Templates | Session charters, tour guides |

Example: "Generate a decision table template for login validation"
→ Routes to T_technique agent → Generates populated decision table with conditions, actions, and rules

4️⃣ CREATES — open-testing-create

7 Agents for generating concrete test cases from technique + context.

| Agent | Role | Output |
| --- | --- | --- |
| C_context | Context Analyzer | Parsed requirements, constraints |
| R_rules | Rule Extractor | Business rules, conditions, boundaries |
| E_enumerate | Test Enumerator | Enumerated test conditions |
| A_author | Test Author | Complete test cases with steps & expected results |
| T_trace | Traceability Linker | Requirement-to-test-case links |
| E_export | Exporter | xlsx, Gherkin (.feature), JSON output |
| S_score | Quality Scorer | Test case quality assessment |

Example: "Apply BVA to age field accepting 18-65"
→ C_context parses constraints → R_rules extracts boundaries [17,18,19,64,65,66] → E_enumerate generates test conditions → A_author writes test cases → T_trace links to requirements → E_export produces xlsx
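The final E_export step could render such boundary values as Gherkin. A sketch, using a hypothetical to_gherkin helper and illustrative feature wording rather than the skill's actual output format:

```python
def to_gherkin(field, values, lo, hi):
    """Render boundary test values as a Gherkin scenario outline
    (illustrative wording; lo..hi is the valid range)."""
    lines = [
        "Feature: %s validation" % field.capitalize(),
        "  Scenario Outline: boundary check",
        "    When the user enters <value> as %s" % field,
        "    Then the input is <outcome>",
        "    Examples:",
        "      | value | outcome  |",
    ]
    for v in values:
        outcome = "accepted" if lo <= v <= hi else "rejected"
        lines.append("      | %-5d | %-8s |" % (v, outcome))
    return "\n".join(lines)

feature = to_gherkin("age", [17, 18, 19, 64, 65, 66], 18, 65)
print(feature)
```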

5️⃣ COVERED — open-testing-coverage

7 Agents for coverage analysis, gap detection, and optimization.

| Agent | Role | Output |
| --- | --- | --- |
| C_classify | Coverage Classifier | Tests classified by TMap coverage group |
| O_overlay | Requirements Overlay | RTM with coverage percentages |
| V_visualize | Visualizer | Heat maps, sunburst diagrams, radar charts |
| E_evaluate | Evaluator | Coverage vs. risk-based intensity targets |
| R_recommend | Gap Recommender | Techniques to close coverage gaps |
| E_export_cov | Exporter | xlsx dashboard, HTML report, JSON metrics |
| D_delta | Delta Analyzer | Sprint-over-sprint coverage regression |

TMap Coverage Groups:

| Group | Focus | Key Techniques |
| --- | --- | --- |
| Process-oriented | Business flows | Process cycle test, use case, scenario testing |
| Condition-oriented | Logic & decisions | Decision table, cause-effect, MC/DC |
| Data-oriented | Values & combinations | EP, BVA, classification tree, pairwise |
| Appearance-oriented | UI & UX | Exploratory, usability, checklist-based |

Example: "Analyze coverage for our login module against risk targets"
→ C_classify categorizes tests → O_overlay maps to requirements → E_evaluate checks intensity targets → R_recommend suggests gap-closing techniques → E_export_cov produces dashboard
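The E_evaluate/R_recommend steps amount to comparing per-group test counts against targets derived from risk-based intensity. A sketch with hypothetical counts and targets:

```python
def coverage_gaps(actual, targets):
    """Return the TMap groups whose test count falls short of its
    target, mapped to the shortfall (targets here are hypothetical)."""
    return {g: targets[g] - actual.get(g, 0)
            for g in targets if actual.get(g, 0) < targets[g]}

actual  = {"process": 12, "condition": 4, "data": 20, "appearance": 0}
targets = {"process": 10, "condition": 8, "data": 15, "appearance": 5}
print(coverage_gaps(actual, targets))  # {'condition': 4, 'appearance': 5}
```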

6️⃣ REPORTS — open-testing-reporting

7 Agents for progress dashboards, release advice, and final reports.

| Agent | Role | Output |
| --- | --- | --- |
| R_realtime | Progress Reporter | RAG dashboard with sparkline trends |
| E_execution_status | Execution Reporter | Breakdown by level, type, priority, feature |
| P_product_quality | Quality Assessor | ISO 25010 quality assessment |
| O_outcome | Release Advisor | GO / CONDITIONAL GO / NO GO recommendation |
| R_risk_report | Risk Reporter | Risk mitigation progress report |
| T_trend | Trend Analyzer | Defect discovery/resolution convergence, velocity |
| S_summary | Final Report Generator | IEEE 829-aligned comprehensive test report |

RAG Thresholds:

| Metric | GREEN | AMBER | RED |
| --- | --- | --- | --- |
| Execution Progress | ≥90% | 70-89% | <70% |
| Pass Rate | ≥95% | 85-94% | <85% |
| Blocked Tests | ≤5% | 5-15% | >15% |
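An overall status falls out of the thresholds by taking the worst individual metric. A sketch; resolving the overlapping 5%/15% band edges for blocked tests optimistically is an assumption, since the table leaves them ambiguous:

```python
def rag(execution_pct, pass_pct, blocked_pct):
    """Overall RAG status: the worst of the three per-metric statuses."""
    def worst(*statuses):
        order = {"GREEN": 0, "AMBER": 1, "RED": 2}
        return max(statuses, key=order.get)

    exec_s = "GREEN" if execution_pct >= 90 else "AMBER" if execution_pct >= 70 else "RED"
    pass_s = "GREEN" if pass_pct >= 95 else "AMBER" if pass_pct >= 85 else "RED"
    blk_s  = "GREEN" if blocked_pct <= 5 else "AMBER" if blocked_pct <= 15 else "RED"
    return worst(exec_s, pass_s, blk_s)

print(rag(95, 96, 2))  # GREEN
print(rag(80, 96, 2))  # AMBER — execution progress drags the status down
```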

Example: "Are we ready to release version 3.2?"
→ P_product_quality assesses against ISO 25010 → O_outcome evaluates exit criteria → R_risk_report checks residual risks → S_summary generates release advice document

7️⃣ MANAGES — open-testing-management

7 Agents for process maturity, defect management, and improvement.

| Agent | Role | Output |
| --- | --- | --- |
| M_maturity | Maturity Assessor | TMMi assessment (5 levels, 16 PAs, 113 practices) |
| A_anomaly | Defect Lifecycle Manager | Anomaly administration per IEEE 1044.1 |
| N_norming | Compliance Evaluator | Process checklists vs. ISO/TMMi/TMap/IEEE |
| A_accountability_mgmt | Accountability Agent | ARCI matrices, QAM/QPM |
| G_governance_mgmt | Governance Agent | Quality policy, GTA, process descriptions |
| E_efficiency | Efficiency Analyzer | DDE, DRE, cost-per-defect, automation ROI |
| S_improvement | Improvement Planner | Prioritized roadmap with quick wins & strategic actions |

Efficiency Metrics:

| Metric | Formula | Target |
| --- | --- | --- |
| Defect Detection Effectiveness | Defects found in testing / Total defects × 100 | >85% |
| Defect Removal Efficiency | Defects removed before release / Total defects × 100 | >90% |
| Defect Leakage | Post-release defects / Total defects × 100 | <10% |
| Automation ROI | (Manual cost saved - Automation cost) / Automation cost | >200% |
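The formulas translate directly to code. A sketch with illustrative inputs; expressing the ROI formula as a percentage to match the >200% target is an assumption:

```python
def dde(found_in_testing, total_defects):
    """Defect Detection Effectiveness (%)."""
    return found_in_testing / total_defects * 100

def defect_leakage(post_release, total_defects):
    """Defect Leakage (%)."""
    return post_release / total_defects * 100

def automation_roi(manual_cost_saved, automation_cost):
    """Automation ROI (%), per the formula in the table above."""
    return (manual_cost_saved - automation_cost) / automation_cost * 100

print(dde(88, 100))                 # 88.0 -> meets the >85% target
print(automation_roi(30000, 8000))  # 275.0 -> meets the >200% target
```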

Example: "Assess our testing maturity and create an improvement plan"
→ M_maturity evaluates against TMMi → N_norming checks standards compliance → E_efficiency analyzes metrics → S_improvement generates prioritized roadmap


🕸️ Knowledge Graph

The entire knowledge base is available as a queryable knowledge graph in multiple formats.

Location: knowledge-graph/exports/

| Format | File | Size | Use Case |
| --- | --- | --- | --- |
| JSON-LD | testing-knowledge-graph.jsonld | 992 KB | Web applications, APIs |
| Neo4j Cypher | testing-knowledge-graph.cypher | 624 KB | Graph database import |
| RDF/Turtle (TBox) | testing-ontology.ttl | 12 KB | Ontology definition |
| RDF/Turtle (ABox) | testing-knowledge.ttl | 512 KB | Instance data |
| SPARQL | example-queries.sparql | 8 KB | Query examples |

🔍 Example SPARQL Queries

```sparql
# Find all techniques applicable to security testing
SELECT ?technique ?name WHERE {
  ?technique ot:applicableToQuality ot:Security ;
             ot:name ?name .
}

# Get TMMi Level 2 process areas and their practices
SELECT ?pa ?practice WHERE {
  ?pa ot:atMaturityLevel 2 ;
      ot:hasPractice ?practice .
}
```

📈 Mermaid Diagrams

Location: knowledge-graph/diagrams/

  • skill-pipeline.mermaid — Full skill interconnection diagram
  • technique-taxonomy.mermaid — Test technique classification tree
  • tmmi-maturity-model.mermaid — TMMi levels and process areas
  • iso25010-quality-model.mermaid — ISO 25010 quality characteristics
  • tmap-topics-model.mermaid — TMap organizing and performing topics
  • test-levels-vmodel.mermaid — V-model test levels
  • iso-29119-process-model.md — ISO 29119 process documentation

🏗️ Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                   Claude Cowork Interface                   │
├─────────────────────────────────────────────────────────────┤
│                    7 Skills (51 Agents)                     │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐        │
│  │ STRATEGY │→│ PLANNED  │→│ TEMPLATE │→│ CREATES  │        │
│  │ 8 agents │ │ 7 agents │ │ 8 agents │ │ 7 agents │        │
│  └──────────┘ └──────────┘ └──────────┘ └──────────┘        │
│  ┌──────────┐ ┌──────────┐ ┌──────────┐                     │
│  │ COVERED  │→│ REPORTS  │→│ MANAGES  │                     │
│  │ 7 agents │ │ 7 agents │ │ 7 agents │                     │
│  └──────────┘ └──────────┘ └──────────┘                     │
├─────────────────────────────────────────────────────────────┤
│                  Knowledge Base (3.6 MB)                    │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐         │
│  │ 1,105 Terms  │ │106 Techniques│ │108 Templates │         │
│  └──────────────┘ └──────────────┘ └──────────────┘         │
│  ┌──────────────┐ ┌──────────────┐ ┌──────────────┐         │
│  │  Standards   │ │Quality Maps  │ │   Courses    │         │
│  │ISO/IEEE/TMMi │ │ISO 25010→Tech│ │  20 courses  │         │
│  └──────────────┘ └──────────────┘ └──────────────┘         │
├─────────────────────────────────────────────────────────────┤
│                 Knowledge Graph (3.1 MB)                    │
│         JSON-LD · Cypher · RDF/Turtle · SPARQL              │
└─────────────────────────────────────────────────────────────┘
```

🧬 Ontology

File: knowledge-base/ontology.json

The formal ontology defines 19 classes and 33 relationships covering the entire ecosystem:

🏷️ Classes

| Class | Instances | Source |
| --- | --- | --- |
| TestingConcept | 1,105 | terminology.json |
| TestDesignTechnique | 106 | technique-patterns.json |
| QualityCharacteristic | 38 | quality-to-technique.json |
| OutputTemplate | 108 | template-analysis.json |
| IEEEStandard | 15 | ieee-standards.json |
| CoverageGroup | 4 | coverage groups |
| IntensityLevel | 3 | Light / Standard / Thorough |
| RiskClass | 3 | A (High) / B (Medium) / C (Low) |
| Course | 20 | Training curricula |
| KnowledgeDomain | 8 | Specialist knowledge areas |
| ExercisePattern | 10 | Reusable exercise types |
| SkillAgent | 51 | AI agents across 7 skills |
| Skill | 7 | Claude Cowork skills |

🔗 Relationship Groups

  • Technique relationships (8): hasTemplate, belongsToCoverageGroup, hasIntensityMapping, applicableAtLevel, requiresTestBasis, produces, definedIn, taughtIn
  • Quality relationships (3): testedBy, measuredBy, hasSubCharacteristic
  • Risk relationships (3): determinesIntensity, requiresTechniques, mitigatesRisk
  • Process relationships (5): produces, uses, definedIn, assessedAt, covers
  • Template relationships (4): usedByTechnique, belongsToCategory, producesArtifact, generatedBy
  • Skill relationships (5): belongsToSkill, generates, usesKnowledge, readsFrom, dependsOn
  • Standards relationships (4): defines, crossReferences, alignsWith, assessedBy
  • Learning relationships (5): teaches, hasExercise, practicesTechnique, contains, taughtIn

🗂️ Project Structure

```
Open-Testing/
├── README.md                          # This file
├── OPEN-TESTING-INDEX.md              # Detailed project index
│
├── knowledge-base/                    # 3.6 MB structured knowledge
│   ├── terminology.json               # 1,105 terms (master file)
│   ├── technique-patterns.json        # 106 technique patterns
│   ├── template-analysis.json         # 108 template definitions
│   ├── quality-to-technique.json      # ISO 25010 → technique mapping
│   ├── technique-to-intensity.json    # Technique → intensity levels
│   ├── ontology.json                  # Formal ontology (19 classes, 33 relationships)
│   ├── supporting-materials.json      # Course knowledge & domains
│   ├── taxonomy.json                  # Term taxonomy
│   ├── category-index.json            # Category index
│   ├── terms-list.json                # Quick-reference term list
│   │
│   ├── standards/
│   │   ├── iso-29119.json             # ISO 29119 (7 parts, 36 processes)
│   │   ├── iso-29119-parts/           # 7 individual part files
│   │   ├── tmmi.json                  # TMMi (5 levels, 16 PAs, 113 practices)
│   │   ├── tmmi-parts/                # 3 level-group files
│   │   ├── tmap.json                  # TMap HD (20 topics)
│   │   ├── ieee-standards.json        # 15 IEEE standards (master)
│   │   └── ieee-parts/                # 15 individual standard files
│   │
│   ├── techniques/                    # 106 individual technique JSON files
│   │   ├── equivalence-partitioning.json
│   │   ├── boundary-value-analysis.json
│   │   ├── decision-table-testing.json
│   │   └── ... (106 files)
│   │
│   └── courses/
│       ├── istqb-foundation.json      # 6 modules, 26 objectives
│       ├── test-techniques-courses.json
│       └── advanced-specialist-courses.json
│
├── knowledge-graph/                   # 3.1 MB graph exports
│   ├── exports/
│   │   ├── testing-knowledge-graph.jsonld   # JSON-LD (992 KB)
│   │   ├── testing-knowledge-graph.cypher   # Neo4j Cypher (624 KB)
│   │   ├── testing-ontology.ttl             # RDF/Turtle TBox (12 KB)
│   │   ├── testing-knowledge.ttl            # RDF/Turtle ABox (512 KB)
│   │   └── example-queries.sparql           # SPARQL examples (8 KB)
│   │
│   └── diagrams/
│       ├── skill-pipeline.mermaid
│       ├── technique-taxonomy.mermaid
│       ├── tmmi-maturity-model.mermaid
│       ├── iso25010-quality-model.mermaid
│       ├── tmap-topics-model.mermaid
│       ├── test-levels-vmodel.mermaid
│       └── iso-29119-process-model.md
│
├── skills/                            # 7 Claude Cowork skills
│   ├── open-testing-design/SKILL.md      # STRATEGY (8 agents)
│   ├── open-testing-plan/SKILL.md        # PLANNED (7 agents)
│   ├── open-testing-templates/SKILL.md   # TEMPLATE (8 agents)
│   ├── open-testing-create/SKILL.md      # CREATES (7 agents)
│   ├── open-testing-coverage/SKILL.md    # COVERED (7 agents)
│   ├── open-testing-reporting/SKILL.md   # REPORTS (7 agents)
│   └── open-testing-management/SKILL.md  # MANAGES (7 agents)
│
└── courses/                           # 42 source training course folders
```


🔌 Integration with Other Open Testing Skills

The Open Testing Ecosystem integrates with the broader Open-Test.AI skill family:

```mermaid
graph LR
    INPUT[Requirements / User Stories]
    INPUT --> DFS[OpenRequirements DeFOSPAM]
    DFS --> SBE[Specification by Example]
    DFS --> IEEE830[IEEE 830 SRS]
    DFS --> VIBE[Vibe Requirements]
    DFS --> PERF[Performance Engineering]
    DFS --> OTD[open-testing-design]
    OTD --> OTP[open-testing-plan]
    OTP --> OTT[open-testing-templates]
    OTD --> OTC[open-testing-create]
    OTT --> OTC
    SBE --> OTC
    OTC --> OTAI[OpenTestAI]
```

🤝 Contributing

Contributions are welcome. Areas where help is needed:

  • New technique patterns: Add techniques not yet in the 106-pattern library
  • Template improvements: Enhance or add output templates
  • Standards updates: As new editions of ISO/IEEE standards are published
  • Knowledge graph queries: Additional SPARQL/Cypher query examples
  • Translations: Terminology in additional languages
  • Integration: Connectors for test management tools (Jira, Azure DevOps, TestRail)

🛠️ How to Contribute

  1. Fork this repository
  2. Create a feature branch (git checkout -b feature/new-technique)
  3. Add or modify knowledge base files following the existing JSON schemas
  4. Run the validation script to ensure integrity
  5. Submit a pull request with a clear description

📄 License

This project is released under the OpenTest.AI license.

The knowledge base encodes concepts from published standards (ISO 29119, TMMi, IEEE), extracted with local OSS models as part of AI Foundry (local) and Context Understanding. The underlying standards remain the property of their respective standards bodies.

1,105 terms · 106 techniques · 108 templates · 15 IEEE standards · 51 AI agents

About

Open AI Agentic testing ecosystem combining standards-based knowledge, formal test techniques, reusable templates, and specialized agents for strategy, planning, test creation, coverage, reporting, and management.
