A system that builds systems.
ICDEV is an AI-powered meta-builder that generates complete, autonomous applications — each with its own agent architecture, compliance automation, testing pipeline, and CI/CD integration. Describe what you need in plain English. Get an ATO-ready system with 42 compliance framework mappings, 15 coordinating AI agents, and every artifact you need for Authority to Operate.
These aren't templates. They're living systems that can build their own features.
One developer built this. Imagine what your team could do with it.
DISCLAIMER: This repository does NOT contain classified or Controlled Unclassified Information (CUI). Terms like "CUI", "SECRET", "IL4", "IL5", "IL6" appear throughout as configuration values and template strings — not as indicators that this repository itself is classified. Classification terminology references publicly available U.S. government standards (EO 13526, 32 CFR Part 2002, NIST SP 800-53). File headers containing `[TEMPLATE: CUI // SP-CTI]` are template markers demonstrating the format ICDEV applies to generated artifacts.
Most developer tools help you write code faster. ICDEV does something fundamentally different: it generates entire applications — each with its own multi-agent architecture, compliance automation, testing pipeline, memory system, and CI/CD integration. The generated application isn't a starter kit. It's an autonomous engineering platform that can build its own features using the same methodology that built it.
GovProposal is the proof. ICDEV generated GovProposal — a complete government proposal lifecycle management platform with a 14-step section workflow, color team review cycle, compliance matrix, timeline tracking, and assignment management. Then ICDEV connected it to a GovCon Intelligence pipeline that automatically discovers government opportunities, extracts requirements, maps capabilities, and drafts proposal responses.
Together, they form a self-reinforcing flywheel:
SAM.gov RFPs → Mine requirement patterns → Map to ICDEV capabilities → Identify gaps →
Build enhancements → Draft proposals via GovProposal → Win → Deliver ICDEV on-prem → Repeat
ICDEV generated GovProposal the same way it generates any application — through the GOTCHA framework and ATLAS workflow. GovProposal inherited:
| What It Got | How It Works |
|---|---|
| 6-layer GOTCHA framework | Goals, Orchestration, Tools, Args, Context, Hard Prompts — separating deterministic logic from AI |
| Multi-agent architecture | 5 core agents (Orchestrator, Architect, Builder, Knowledge, Monitor) + 2 ATO agents |
| 229-table database | Append-only audit trail (NIST AU compliant), proposal lifecycle tables, compliance matrices |
| 42 compliance frameworks | Dual-hub crosswalk engine — implement a control once, map to FedRAMP, CMMC, CJIS, HIPAA, and 38 more |
| 9-step testing pipeline | Syntax → lint → unit → BDD → SAST → E2E → vision → acceptance → security gates |
| CI/CD integration | GitHub + GitLab dual-platform, webhook-triggered workflows |
| Memory system | Long-term facts, daily logs, semantic search — learns from every proposal cycle |
But GovProposal isn't just a child app. ICDEV then layered on the GovCon Intelligence pipeline — 11 specialized tools that automate the entire government contracting capture process:
┌─────────────────────────────────────────────────────────────────────────────┐
│ ICDEV — GovCon Intelligence │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────┐ ┌───────────┐ │
│ │ DISCOVER │───▶│ EXTRACT │───▶│ MAP │───▶│ DRAFT │ │
│ │ │ │ │ │ │ │ │ │
│ │ SAM.gov API │ │ "Shall/must/ │ │ Match reqs │ │ qwen3 │ │
│ │ scan opps + │ │ will" regex │ │ to ICDEV │ │ drafts → │ │
│ │ track awards │ │ extraction │ │ capability │ │ Claude │ │
│ │ │ │ + domain │ │ catalog │ │ reviews │ │
│ │ 8 NAICS │ │ classify │ │ (30 entries) │ │ │ │
│ │ codes │ │ + cluster │ │ L/M/N grade │ │ HITL gate │ │
│ └──────────────┘ └──────────────┘ └──────────────┘ └───────────┘ │
│ │ │ │ │ │
│ │ │ │ │ │
│ ▼ ▼ ▼ ▼ │
│ ┌─────────────────────────────────────────────────────────────────────┐ │
│ │ GovCon API Bridge (20+ endpoints) │ │
│ │ /sam/import → /auto-compliance → /auto-draft → /drafts/approve │ │
│ └─────────────────────────────────────────────────────────────────────┘ │
│ │ │
└────────────────────────────────────┼───────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ GovProposal — Proposal Lifecycle Platform │
│ │
│ ┌─────────────────┐ ┌──────────────────┐ ┌──────────────────────────┐ │
│ │ OPPORTUNITIES │ │ SECTIONS │ │ COMPLIANCE MATRIX │ │
│ │ │ │ │ │ │ │
│ │ proposal_ │ │ 14-step pipeline:│ │ L → compliant │ │
│ │ opportunities │ │ not_started → │ │ M → partial │ │
│ │ (imported from │ │ outlining → │ │ N → non_compliant │ │
│ │ SAM.gov scan) │ │ drafting → │ │ │ │
│ │ │ │ reviewing → │ │ Auto-populated from │ │
│ │ licensing_model: │ │ final → │ │ capability mapping │ │
│ │ on_prem_free | │ │ submitted │ │ scores │ │
│ │ saas_paid | │ │ │ │ │ │
│ │ negotiated │ │ AI drafts → │ │ Covers all "shall" │ │
│ │ │ │ human approves → │ │ statements extracted │ │
│ │ │ │ section content │ │ from RFP │ │
│ └─────────────────┘ └──────────────────┘ └──────────────────────────┘ │
│ │
│ ┌─────────────────┐ ┌──────────────────┐ ┌──────────────────────────┐ │
│ │ COLOR TEAM │ │ TIMELINE │ │ ASSIGNMENT MATRIX │ │
│ │ REVIEWS │ │ │ │ │ │
│ │ │ │ Gantt chart │ │ Who writes what │ │
│ │ Pink → Red → │ │ milestones, │ │ per-section role │ │
│ │ Gold → White → │ │ deadlines, │ │ tracking, workload │ │
│ │ Final │ │ countdown │ │ balancing │ │
│ └─────────────────┘ └──────────────────┘ └──────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────────────────────────┐
│ CROSS-ENGINE INTELLIGENCE │
│ │
│ ┌────────────────────────┐ ┌────────────────────────────────┐ │
│ │ Innovation Engine │ │ Creative Engine │ │
│ │ │ │ │ │
│ │ SAM.gov requirement │ │ Award leaderboard data → │ │
│ │ patterns registered │ │ competitive gap analysis │ │
│ │ as innovation signals │ │ against government │ │
│ │ │ │ contractors │ │
│ │ Enables: "Is cATO │ │ │ │
│ │ appearing more in │ │ Enables: identify where │ │
│ │ RFPs this quarter?" │ │ competitors are winning │ │
│ └────────────────────────┘ └────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────────────────┘
Step by step:

- DISCOVER — ICDEV scans SAM.gov's Opportunities API for solicitations, pre-solicitations, and RFIs across 8 NAICS codes. Award notices feed competitive intelligence.
- EXTRACT — Deterministic regex extracts every "shall", "must", and "will" statement from RFP descriptions. Each is domain-classified (DevSecOps, AI/ML, ATO/RMF, Cloud, Security, Compliance, Agile, Data, Management) and clustered into patterns using keyword fingerprinting.
- MAP — Extracted requirements are matched against ICDEV's declarative capability catalog (~30 entries covering 42 compliance frameworks, 15 agents, and 500+ tools). Each requirement gets an L/M/N grade:
  - L (≥ 80% coverage) — ICDEV fully meets this requirement
  - M (40–79%) — partial capability, enhancement recommended
  - N (< 40%) — gap identified, cross-registered to the Innovation Engine for prioritized development
- DRAFT — Two-tier LLM pipeline: qwen3 generates a compact draft incorporating capability evidence, tool references, and compliance controls. Claude reviews and polishes. The draft is stored with `status='draft'` — a human must approve before it enters the proposal.
- BRIDGE — The GovCon API (20+ REST endpoints) moves data from ICDEV's intelligence tools into GovProposal's lifecycle tables:
  - SAM.gov opportunities → `proposal_opportunities` (with licensing model tracking)
  - "Shall" statements → `proposal_compliance_matrix` (L/M/N auto-populated)
  - AI drafts → `proposal_section_drafts` → human approves → `proposal_sections`
- LIFECYCLE — GovProposal manages the rest: 14-step section workflow, color team reviews (Pink → Red → Gold → White → Final), timeline tracking with countdown to submission, assignment matrix, and compliance matrix with donut/bar charts.
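The EXTRACT and MAP steps above are deterministic by design. A minimal sketch of the "shall/must/will" extraction, assuming an illustrative pattern and keyword subset (ICDEV's production regex and nine-domain classifier are not reproduced here):

```python
import re

# Illustrative requirement extractor: the pattern and domain keywords below
# are assumptions for this sketch, not ICDEV's actual regex or catalog.
REQ_PATTERN = re.compile(r"[^.]*\b(?:shall|must|will)\b[^.]*\.", re.IGNORECASE)

DOMAIN_KEYWORDS = {  # small subset of the nine documented domains
    "DevSecOps": {"pipeline", "ci/cd", "devsecops"},
    "ATO/RMF": {"ato", "rmf", "authorization"},
    "Cloud": {"cloud", "aws", "azure"},
}

def extract_requirements(rfp_text: str) -> list[dict]:
    """Pull every 'shall/must/will' sentence and tag a coarse domain."""
    results = []
    for match in REQ_PATTERN.finditer(rfp_text):
        sentence = match.group().strip()
        tokens = set(re.findall(r"[a-z0-9/]+", sentence.lower()))
        domain = next(
            (d for d, kws in DOMAIN_KEYWORDS.items() if tokens & kws),
            "Unclassified",
        )
        results.append({"statement": sentence, "domain": domain})
    return results

sample = ("The contractor shall maintain a DevSecOps pipeline. "
          "Reports are optional. The system must obtain an ATO.")
reqs = extract_requirements(sample)
```

Because extraction is regex-based rather than LLM-based, the same RFP text always yields the same requirement list, which is what makes the downstream compliance matrix auditable.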
This isn't a linear pipeline — it's a compounding cycle:
- Win a contract → customer gets ICDEV deployed on-prem (free for winners)
- Deliver → ICDEV's capabilities proven in production = stronger past performance evidence
- Learn → requirement patterns from real contracts improve capability mapping
- Build → gaps identified by the MAP stage drive ICDEV development priorities
- Repeat → next proposal has better evidence, higher L/M/N scores, stronger drafts
Every proposal ICDEV writes makes the next one better. The product IS the proposal evidence.
| Challenge | How ICDEV Solves It | Benefit |
|---|---|---|
| Monitoring SAM.gov is manual and error-prone | Automated scanning of 8 NAICS codes with deduplication and caching | Never miss an opportunity. Surface patterns across hundreds of RFPs. |
| Compliance matrices take weeks to populate | L/M/N auto-grading from capability catalog with keyword-overlap scoring | Compliance matrix populated in seconds, not weeks. Fully auditable. |
| Proposal writing is expensive ($50K–$500K per response) | Two-tier LLM drafting with reusable knowledge base and HITL approval | Draft responses in hours with evidence baked in. Human reviews, not writes from scratch. |
| No visibility into competitive landscape | Award tracker + competitor profiler from SAM.gov award data | Know who wins what, at what value, in which NAICS codes. |
| Past performance is hard to articulate | ICDEV's own capability catalog IS the evidence | "We have 42 compliance frameworks" isn't marketing — it's SELECT COUNT(*) from the same DB. |
| Challenge | How ICDEV Helps | Benefit |
|---|---|---|
| Proposals claim capabilities they can't deliver | ICDEV's proposals reference actual tools, actual test results, actual compliance mappings | Every claim is verifiable against the delivered platform. |
| ATO takes 12–18 months after award | ICDEV generates ATO artifacts (SSP, POAM, STIG, SBOM, OSCAL) automatically | ATO acceleration from day one of delivery. cATO-ready. |
| Vendor lock-in | ICDEV is open source (AGPL-3.0), runs on 6 cloud providers or fully air-gapped | No proprietary dependencies. Full source code. Deploy anywhere. |
| Difficulty evaluating technical depth | L/M/N grading is deterministic and reproducible | Same input always produces same compliance grade. Auditable. |
- The product writes its own proposals. ICDEV generates the application AND writes the proposal to sell it. The capability evidence in the proposal comes from the same codebase that gets delivered. No other GovCon tool is simultaneously the proposal platform and the delivered product.
- Deterministic compliance grading. Every "shall" statement in an RFP gets a machine-scored coverage grade (L/M/N) against a declarative capability catalog. This isn't LLM-generated opinion — it's keyword-overlap scoring that produces identical results every time. Air-gap safe.
- Cross-engine intelligence. SAM.gov data doesn't just feed proposals. Requirement patterns flow into the Innovation Engine for trend detection ("is cATO appearing more in RFPs?"). Award data flows into the Creative Engine for competitive positioning. Three engines sharing intelligence, each getting smarter.
- 42 compliance frameworks, one implementation. Implement a NIST 800-53 control once. The dual-hub crosswalk engine automatically maps it to FedRAMP, CMMC, CJIS, HIPAA, PCI DSS, ISO 27001, and 35+ more. This works for proposals too — the compliance matrix covers every framework the RFP requires.
- Self-reinforcing economics. Winners get ICDEV deployed free on-prem. This means every win creates a production reference, every production deployment generates telemetry that improves the next proposal, and every gap identified during delivery becomes a development priority. Commercial competitors charge for both the proposal tool AND the delivered platform. ICDEV is both.
- Air-gap native. Every tool works without internet access. Regex-based requirement extraction (not LLM). Keyword-overlap scoring (not embeddings). SQLite database (not cloud). Ollama for local LLM inference. Designed for SIPR/JWICS from day one.
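The deterministic L/M/N grading can be sketched as a plain keyword-overlap ratio. Thresholds follow the documented bands; the tokenization rule and catalog entry are illustrative assumptions:

```python
def coverage_grade(requirement: str, capability_keywords: set[str]) -> tuple[str, float]:
    """Grade one requirement against one capability entry by keyword overlap.

    Bands follow the documented thresholds: L >= 80%, M 40-79%, N < 40%.
    Tokenization is an assumption for this sketch.
    """
    tokens = {t for t in requirement.lower().split() if len(t) > 3}
    if not tokens:
        return "N", 0.0
    overlap = len(tokens & capability_keywords) / len(tokens)
    if overlap >= 0.80:
        return "L", overlap
    if overlap >= 0.40:
        return "M", overlap
    return "N", overlap

# Invented catalog entry for illustration only
catalog_entry = {"automated", "compliance", "scanning", "pipeline", "sbom"}
grade, score = coverage_grade("automated compliance scanning pipeline", catalog_entry)
```

No embeddings, no model calls: identical inputs always produce identical grades, which is what makes the approach air-gap safe and reproducible for auditors.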
Most GovTech teams spend 12-18 months and millions of dollars getting from "we need an app" to a signed ATO. ICDEV compresses this into a single, auditable pipeline:
"We need a mission planning tool for IL5"
│
▼
┌─ INTAKE ──────────────────────────────────────────────┐
│ AI-driven conversational requirements gathering │
│ → Extracts requirements, detects gaps, flags ATO risk │
│ → Scores readiness across 5 dimensions │
│ → Auto-detects applicable compliance frameworks │
└───────────────────────────┬───────────────────────────┘
▼
┌─ SIMULATE ────────────────────────────────────────────┐
│ Digital Program Twin — what-if before you build │
│ → 6-dimension simulation (schedule, cost, risk, │
│ compliance, technical, staffing) │
│ → Monte Carlo estimation (10,000 iterations) │
│ → 3 Courses of Action: Speed / Balanced / Full │
└───────────────────────────┬───────────────────────────┘
▼
┌─ GENERATE ────────────────────────────────────────────┐
│ Full application in 12 deterministic steps │
│ → 300+ files: agents, tools, goals, tests, CI/CD │
│ → 229-table database with append-only audit trail │
│ → GOTCHA framework + ATLAS workflow baked in │
│ → Connected to 100+ cloud MCP servers (AWS/Azure/GCP/OCI/IBM) │
└───────────────────────────┬───────────────────────────┘
▼
┌─ BUILD ───────────────────────────────────────────────┐
│ TDD workflow: RED → GREEN → REFACTOR │
│ → 6 languages: Python, Java, Go, Rust, C#, TypeScript │
│ → 9-step test pipeline (unit → BDD → E2E → gates) │
│ → SAST, dependency audit, secret detection, SBOM │
└───────────────────────────┬───────────────────────────┘
▼
┌─ COMPLY ──────────────────────────────────────────────┐
│ ATO package generated automatically │
│ → SSP covering 17 FIPS 200 control families │
│ → POAM, STIG checklist, SBOM, OSCAL artifacts │
│ → Crosswalk maps controls across all 42 frameworks │
│ → cATO monitoring with evidence freshness tracking │
└───────────────────────────┬───────────────────────────┘
▼
ATO-ready application
Every step is auditable. Every artifact is traceable. Every control is mapped.
You describe what you need in plain English. ICDEV's Requirements Analyst agent runs a conversational intake session that:
- Extracts requirements automatically — categorized into 6 types (functional, non-functional, security, compliance, interface, data) at 4 priority levels
- Detects ambiguities — 7 pattern categories flag vague language ("as needed", "TBD", "etc.") for clarification
- Flags ATO boundary impact — every requirement is classified into 4 tiers:
  - GREEN — no boundary change
  - YELLOW — minor adjustment (SSP addendum)
  - ORANGE — significant change (ISSO review required)
  - RED — ATO-invalidating (full stop, alternative COAs generated)
- Auto-detects compliance frameworks — mentions of "HIPAA", "CUI", "CJIS", etc. trigger the applicable assessors
- Scores readiness across 5 weighted dimensions:

  | Dimension | Weight | What It Measures |
  |---|---|---|
  | Completeness | 25% | Requirement types covered, total count vs target |
  | Clarity | 25% | Unresolved ambiguities, conversational depth |
  | Feasibility | 20% | Timeline, budget, and team indicators present |
  | Compliance | 15% | Security requirements and framework selection |
  | Testability | 15% | Requirements with acceptance criteria |

  Score ≥ 0.7 → proceed to decomposition. Score ≥ 0.8 → proceed to COA generation.

- Decomposes into SAFe hierarchy — Epic → Capability → Feature → Story → Enabler, each with WSJF scoring, T-shirt sizing, and auto-generated BDD acceptance criteria (Gherkin)
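The weighted readiness score reduces to a simple weighted sum over the five dimensions; a minimal sketch using the documented weights and gate thresholds (the function and dimension keys are illustrative):

```python
# Weights from the readiness dimensions; gates are the documented thresholds
# (>= 0.7 -> decomposition, >= 0.8 -> COA generation).
WEIGHTS = {
    "completeness": 0.25,
    "clarity": 0.25,
    "feasibility": 0.20,
    "compliance": 0.15,
    "testability": 0.15,
}

def readiness(scores: dict[str, float]) -> tuple[float, str]:
    """Combine per-dimension scores (0.0-1.0) into a gated readiness total."""
    total = sum(w * scores.get(dim, 0.0) for dim, w in WEIGHTS.items())
    if total >= 0.8:
        return total, "proceed to COA generation"
    if total >= 0.7:
        return total, "proceed to decomposition"
    return total, "continue intake"

total, gate = readiness({
    "completeness": 0.8, "clarity": 0.7, "feasibility": 0.6,
    "compliance": 0.8, "testability": 0.6,
})
```

With the sample scores above the total lands at 0.705, clearing the decomposition gate but not the COA gate.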
Before writing a single line of code, ICDEV simulates the program across 6 dimensions:
- Schedule — Monte Carlo with 10,000 iterations, P50/P80/P95 confidence intervals
- Cost — $125-200/hr blended rate × estimated effort, low/high ranges
- Risk — probability × impact register, categorized by NIST risk factors
- Compliance — NIST controls affected, framework coverage gaps
- Technical — architecture complexity, integration density
- Staffing — team size, ramp-up timeline, skill requirements
Then generates 3 Courses of Action:
| COA | Scope | Timeline | Cost | Risk |
|---|---|---|---|---|
| Speed | P1 requirements only (MVP) | 1-2 PIs | S-M | Higher |
| Balanced | P1 + P2 requirements | 2-3 PIs | M-L | Moderate |
| Comprehensive | Full scope | 3-5 PIs | L-XL | Lowest |
Each COA includes an architecture summary, PI roadmap, risk register, compliance impact analysis, resource plan, and cost estimate. RED-tier requirements automatically get alternative COAs that achieve the same mission intent within the existing ATO boundary.
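The Monte Carlo schedule estimate can be sketched with the standard library alone. Triangular sampling per task is an assumption for this sketch (the simulator's actual distributions are not specified here):

```python
import random

def simulate_schedule(tasks, iterations=10_000, seed=42):
    """Each task is (optimistic, likely, pessimistic) duration in days.

    Returns P50/P80/P95 confidence intervals over the summed schedule.
    """
    rng = random.Random(seed)  # fixed seed keeps the run reproducible
    totals = sorted(
        sum(rng.triangular(low, high, mode) for low, mode, high in tasks)
        for _ in range(iterations)
    )

    def percentile(p: float) -> float:
        return totals[int(p * iterations) - 1]

    return {"P50": percentile(0.50), "P80": percentile(0.80), "P95": percentile(0.95)}

est = simulate_schedule([(5, 10, 20), (3, 5, 9)])
```

P80 and P95 matter more than the mean for proposal commitments: quoting the P80 date means the simulated schedule finished by that date in 80% of iterations.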
This is where ICDEV does what no other tool does. From the approved blueprint, it generates a complete, working application in 12 deterministic steps:
| Step | What Gets Generated |
|---|---|
| 1. Directory Tree | 40+ directories following GOTCHA structure |
| 2. Tools | All deterministic Python scripts, adapted with app-specific naming and ports |
| 3. Agent Infrastructure | 5-7 AI agent definitions with Agent Cards, MCP server stubs, config |
| 4. Memory System | MEMORY.md, daily logs, SQLite database, semantic search capability |
| 5. Database | Standalone init script creating capability-gated tables |
| 6. Goals & Hard Prompts | 8 essential workflow definitions, adapted for the child app |
| 7. Args & Context | YAML config files, compliance catalogs, language profiles |
| 8. A2A Callback Client | JSON-RPC client for parent-child communication |
| 9. CI/CD | GitHub + GitLab pipelines, slash commands, .gitignore, requirements.txt |
| 10. Cloud MCP Config | Connected to 100+ cloud-provider MCP servers (AWS, Azure, GCP, OCI, IBM) |
| 11. CLAUDE.md | Dynamic documentation (Jinja2) — only documents present capabilities |
| 12. Audit & Registration | Logged to append-only audit trail, registered in child registry, genome manifest |
The generated application isn't a template. It's a living system with its own GOTCHA framework, ATLAS workflow, multi-agent architecture, memory system, compliance automation, and CI/CD pipeline. It inherits ICDEV's capabilities but is independently deployable.
Before generation, ICDEV scores fitness across 6 dimensions to determine the right architecture:
| Dimension | Weight | What It Measures |
|---|---|---|
| Data Complexity | 10% | CRUD vs event-sourced vs graph models |
| Decision Complexity | 25% | Workflow branching, ML inference, classification |
| User Interaction | 20% | NLQ, conversational UI, dashboards |
| Integration Density | 15% | APIs, webhooks, multi-agent mesh |
| Compliance Sensitivity | 15% | CUI/SECRET, FedRAMP, CMMC, FIPS requirements |
| Scale Variability | 15% | Burst traffic, auto-scaling, real-time streaming |
Score ≥ 6.0 → full agent architecture. 4.0–5.9 → hybrid. < 4.0 → traditional.
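Under the weights and thresholds above, the architecture decision reduces to another weighted sum; a minimal sketch (dimension keys are illustrative):

```python
# Weights from the fitness table; bands are the documented thresholds.
FITNESS_WEIGHTS = {
    "data_complexity": 0.10,
    "decision_complexity": 0.25,
    "user_interaction": 0.20,
    "integration_density": 0.15,
    "compliance_sensitivity": 0.15,
    "scale_variability": 0.15,
}

def architecture_for(scores: dict[str, float]) -> str:
    """Scores are 0-10 per dimension; returns the recommended architecture."""
    total = sum(w * scores.get(dim, 0.0) for dim, w in FITNESS_WEIGHTS.items())
    if total >= 6.0:
        return "full agent architecture"
    if total >= 4.0:
        return "hybrid"
    return "traditional"
```

Because decision complexity carries the largest weight (25%), a workflow-heavy app with modest data needs can still clear the full-agent threshold.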
Every feature is built using the ATLAS workflow with true TDD:
[Model] → Architect → Trace → Link → Assemble → [Critique] → Stress-test
The optional ATLAS Critique phase runs multi-agent adversarial review between Assemble and Stress-test. Security, Compliance, and Knowledge agents independently critique the plan in parallel, producing GO/NOGO/CONDITIONAL consensus before stress-testing begins.
The 9-step testing pipeline runs automatically:
- py_compile — syntax validation
- Ruff — linting (replaces flake8 + isort + black)
- pytest — unit/integration tests with coverage
- behave — BDD scenario tests from generated Gherkin
- Bandit — SAST security scan
- Playwright — E2E browser tests
- Vision validation — LLM-based screenshot analysis
- Acceptance validation — criteria verification against test evidence
- Security gates — CUI markings, STIG (0 CAT1), secret detection
ICDEV generates every artifact you need for ATO:
- System Security Plan (SSP) — covers all 17 FIPS 200 control families (AC, AT, AU, CA, CM, CP, IA, IR, MA, MP, PE, PL, PS, RA, SA, SC, SI) with dynamic baseline selection from FIPS 199 categorization
- Plan of Action & Milestones (POAM) — auto-populated from scan findings
- STIG Checklist — mapped to application technology stack
- Software Bill of Materials (SBOM) — CycloneDX format, regenerated every build
- OSCAL artifacts — machine-readable, validated against NIST Metaschema
- Control crosswalks — implement AC-2 once, ICDEV maps it to FedRAMP, CMMC, 800-171, CJIS, HIPAA, PCI DSS, ISO 27001, and 35+ more
- cATO evidence — continuous monitoring with freshness tracking and automated evidence collection
- eMASS sync — push/pull artifacts to eMASS
The dual-hub crosswalk engine eliminates duplicate assessments:
┌─────────────────┐
│ NIST 800-53 │ ← US Hub
│ Rev 5 │
└────────┬────────┘
┌────────────────┼────────────────┐
│ │ │
┌────┴────┐ ┌────┴────┐ ┌────┴────┐
│FedRAMP │ │ CMMC │ │800-171 │
│Mod/High │ │ L2/L3 │ │ Rev 2 │
└─────────┘ └─────────┘ └─────────┘
│ │
┌────┴────┐ ┌────┴────┐
│ CJIS │ │ HIPAA │ ...and 15+ more
│ HITRUST │ │ PCI DSS │
│ SOC 2 │ │ISO27001 │ ← Bridge to Int'l Hub
└─────────┘ └─────────┘
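Conceptually the crosswalk is a hub-and-spoke lookup keyed on the NIST 800-53 control ID. A minimal sketch with an invented one-control data fragment (the framework-side control IDs shown are assumptions, not ICDEV's actual mapping data):

```python
# Invented crosswalk fragment for illustration only: a real table would cover
# every control across all 42 frameworks.
CROSSWALK = {
    "AC-2": {
        "FedRAMP Moderate": "AC-2",
        "CMMC L2": "AC.L2-3.1.1",
        "NIST 800-171": "3.1.1",
        "ISO 27001": "A.5.16",
    },
}

def map_control(nist_id: str, frameworks: list[str]) -> dict[str, str]:
    """Implement once against the NIST hub, read out per-framework equivalents."""
    hub = CROSSWALK.get(nist_id, {})
    return {fw: hub[fw] for fw in frameworks if fw in hub}

mapped = map_control("AC-2", ["CMMC L2", "ISO 27001"])
```

The hub model is what eliminates duplicate assessments: evidence is attached to the hub control once, and every spoke framework reads the same record.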
```shell
# Install ICDEV
pip install icdev

# Add LLM providers (pick what you need)
pip install icdev[llm]   # OpenAI, Anthropic, Bedrock, Gemini, Ollama
pip install icdev[full]  # Everything: all LLM providers + search + testing + security

# Initialize databases (234 tables)
icdev-init-db

# Start the dashboard
icdev-dashboard
# → http://localhost:5000

# Start the unified MCP server (241 tools for Claude Code / AI IDEs)
icdev-mcp
```

Available extras:

| Extra | What it adds |
|---|---|
| `icdev[llm]` | OpenAI, Anthropic, Bedrock, Google GenAI, Ollama |
| `icdev[llm-azure]` | Azure OpenAI |
| `icdev[llm-vertex]` | Google Vertex AI |
| `icdev[llm-oci]` | Oracle Cloud GenAI |
| `icdev[llm-ibm]` | IBM watsonx.ai |
| `icdev[llm-all]` | All LLM providers |
| `icdev[search]` | Semantic + keyword search (numpy, rank_bm25) |
| `icdev[testing]` | pytest, behave, ruff, pydantic |
| `icdev[security]` | bandit, pip-audit, detect-secrets, cyclonedx-bom |
| `icdev[full]` | Everything above |
```shell
# Clone and install
git clone https://github.com/icdev-ai/icdev.git
cd icdev
pip install -r requirements.txt

# Initialize databases (234 tables)
python tools/db/init_icdev_db.py

# Start the dashboard
python tools/dashboard/app.py
# → http://localhost:5000
```

```shell
# Interactive wizard
python tools/installer/installer.py --interactive

# Profile-based (pick your mission)
python tools/installer/installer.py --profile dod_team --compliance fedramp_high,cmmc
python tools/installer/installer.py --profile healthcare --compliance hipaa,hitrust
python tools/installer/installer.py --profile isv_startup --platform docker
```

```shell
# Assess fitness for agentic architecture
python tools/builder/agentic_fitness.py --spec "Mission planning tool for IL5 with CUI markings" --json

# Generate blueprint from scorecard
python tools/builder/app_blueprint.py --fitness-scorecard scorecard.json \
    --user-decisions '{}' --app-name "mission-planner" --json

# Generate the full application (12 steps, 300+ files)
python tools/builder/child_app_generator.py --blueprint blueprint.json \
    --project-path ./output --name "mission-planner" --json
```

```
/icdev-intake          # Start conversational requirements intake
/icdev-simulate        # Run Digital Program Twin simulation
/icdev-agentic         # Generate the full application
/icdev-build           # TDD build (RED → GREEN → REFACTOR)
/icdev-comply          # Generate ATO artifacts
/icdev-transparency    # AI transparency & accountability audit
/icdev-accountability  # AI accountability — oversight, CAIO, appeals, incidents
/audit                 # 33-check production readiness audit
```

| Category | Frameworks |
|---|---|
| Federal | NIST 800-53 Rev 5, NIST 800-171, FedRAMP (Moderate/High/20x), CMMC Level 2/3, FIPS 199/200, CNSSI 1253 |
| DoD | DoDI 5000.87 DES, MOSA (10 U.S.C. §4401), CSSP (DI 8530.01), cATO Monitoring |
| Healthcare | HIPAA Security Rule, HITRUST CSF v11 |
| Financial | PCI DSS v4.0, SOC 2 Type II |
| Law Enforcement | CJIS Security Policy |
| International | ISO/IEC 27001:2022, ISO/IEC 42001:2023, EU AI Act (Annex III) |
| AI/ML Security | NIST AI RMF 1.0, MITRE ATLAS, OWASP LLM Top 10, OWASP Agentic AI, OWASP ASI, SAFE-AI |
| AI Transparency | OMB M-25-21 (High-Impact AI), OMB M-26-04 (Unbiased AI), NIST AI 600-1 (GenAI), GAO-21-519SP (AI Accountability) |
| Architecture | NIST 800-207 Zero Trust, CISA Secure by Design, IEEE 1012 IV&V |
| Explainability | XAI Compliance, Model Cards, System Cards, Confabulation Detection, Fairness Assessment |
| Tier | Agents | Role |
|---|---|---|
| Core | Orchestrator, Architect | Task routing, system design |
| Domain | Builder, Compliance, Security, Infrastructure, MBSE, Modernization, Requirements Analyst, Supply Chain, Simulation, DevSecOps/ZTA, Gateway | Specialized domain work |
| Support | Knowledge, Monitor | Self-healing, observability |
Agents communicate via A2A protocol (JSON-RPC 2.0 over mutual TLS). Each publishes an Agent Card at /.well-known/agent.json. Workflows use DAG-based parallel execution with domain authority vetoes.
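A2A messages are plain JSON-RPC 2.0 envelopes; a minimal sketch of constructing one (the method name and params are illustrative, and the mutual-TLS transport is not shown):

```python
import json
import uuid

def a2a_request(method: str, params: dict) -> str:
    """Serialize a JSON-RPC 2.0 request for agent-to-agent delivery."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),  # correlates the eventual response
        "method": method,
        "params": params,
    })

envelope = json.loads(a2a_request("tasks/send", {"agent": "builder", "task": "scaffold"}))
```

Using a standard envelope means any agent that publishes an Agent Card can be called by any other without bespoke client code per agent pair.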
Orchestration Controls:
- Dispatcher mode — Orchestrator delegates only, never executes tools directly (GOTCHA enforcement)
- Declarative prompt chains — YAML-driven sequential LLM-to-LLM reasoning (plan → critique → refine)
- Session purpose tracking — NIST AU-3 audit traceability for every agent session
- Async result injection — high-priority mailbox delivery for completed background tasks
- Tiered file access — zero_access / read_only / no_delete defense-in-depth for sensitive files
Government agencies and defense contractors sit on millions of lines of legacy code — COBOL, Fortran, Struts, .NET Framework, Python 2 — with the original developers long gone and zero institutional knowledge left. Hiring is impossible: nobody wants to maintain a 20-year-old Java 6 monolith on WebLogic. The code works, but it's a ticking time bomb of tech debt, unpatched CVEs, and expired ATOs.
ICDEV solves this from both directions:
Build new — scaffold, TDD, lint, scan, and generate code in any of 6 languages with compliance baked in from line one:
| Language | Scaffold | TDD | Lint | SAST | BDD | Code Gen |
|---|---|---|---|---|---|---|
| Python | Flask/FastAPI | pytest | ruff | bandit | behave | yes |
| Java | Spring Boot | JUnit | checkstyle | SpotBugs | Cucumber | yes |
| Go | net/http, Gin | go test | golangci-lint | gosec | godog | yes |
| Rust | Actix-web | cargo test | clippy | cargo-audit | cucumber-rs | yes |
| C# | ASP.NET Core | xUnit | analyzers | SecurityCodeScan | SpecFlow | yes |
| TypeScript | Express | Jest | eslint | eslint-security | cucumber-js | yes |
Modernize legacy — when the original team is gone, ICDEV becomes the team:
- 7R Assessment — automated analysis scores each application across Rehost, Replatform, Refactor, Rearchitect, Rebuild, Replace, and Retire using a weighted multi-criteria decision matrix. No tribal knowledge required — ICDEV reads the code.
- Architecture Extraction — static analysis maps the dependency graph, identifies coupling hotspots, measures complexity, and generates documentation that never existed. Works on codebases with zero comments and zero docs.
- Cross-Language Translation — 5-phase hybrid pipeline translates between any of the 30 language pairs (Extract → Type-Check → Translate → Assemble → Validate+Repair). Migrating a Python 2 Flask app to Go? A legacy Java 8 monolith to modern Spring Boot? A .NET Framework service to ASP.NET Core? ICDEV generates pass@k candidate translations, validates with compiler feedback, and auto-repairs failures — up to 3 repair cycles per unit.
- Strangler Fig Tracking — for large monoliths that can't be rewritten overnight, ICDEV manages the gradual migration: dual-system traceability, feature-by-feature cutover tracking, and a compliance bridge that maintains ≥95% ATO control coverage throughout the entire transition.
- Framework Migration — declarative JSON mapping rules handle Struts → Spring Boot, Django 2 → Django 4, Rails 5 → Rails 7, Express → Fastify, and more. Add new migration paths without writing code.
- ATO Compliance Bridge — this is the killer feature for modernization. Legacy apps often have existing ATOs. ICDEV ensures the modernized application inherits the original control mappings through the crosswalk engine, so you don't lose years of compliance work. The bridge validates coverage every PI and blocks deployment if it drops below 95%.
The bottom line: you don't need the original developers. You don't need a team that knows the legacy stack. ICDEV analyzes the codebase, scores the migration strategy, translates the code, and maintains ATO coverage — with an append-only audit trail documenting every decision for your ISSO.
| Provider | Environment | LLM Integration |
|---|---|---|
| AWS GovCloud | us-gov-west-1 | Amazon Bedrock (Claude, Titan) |
| Azure Government | USGov Virginia | Azure OpenAI |
| GCP | Assured Workloads | Vertex AI (Gemini, Claude) |
| OCI | Government Cloud | OCI GenAI (Cohere, Llama) |
| IBM | Cloud for Government | watsonx.ai (Granite, Llama) |
| Local | Air-Gapped | Ollama (Llama, Mistral, CodeGemma) |
Generated applications connect to 100+ cloud-provider MCP servers automatically based on target CSP.
ICDEV's core architecture separates deterministic tools from probabilistic AI:
┌──────────────────────────────────────────────────────┐
│ Goals → What to achieve (48 workflows) │
│ Orchestration → AI decides tool order (LLM layer) │
│ Tools → Deterministic scripts (500+ tools) │
│ Context → Static reference (42 catalogs) │
│ Hard Prompts → Reusable LLM templates │
│ Args → YAML/JSON config (40+ files) │
└──────────────────────────────────────────────────────┘
Why? LLMs are probabilistic. Business logic must be deterministic. 90% accuracy per step = ~59% over 5 steps. GOTCHA fixes this by keeping AI in the orchestration layer and critical logic in deterministic Python scripts.
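The compounding-error figure is simple independence arithmetic:

```python
def chain_accuracy(per_step: float, steps: int) -> float:
    """Probability an n-step chain succeeds if each probabilistic step
    succeeds independently at the given accuracy."""
    return per_step ** steps

# 90% per step across a 5-step workflow leaves roughly 59% end-to-end
print(round(chain_accuracy(0.90, 5), 2))  # → 0.59
```

Moving a step from the LLM layer into a deterministic tool takes it out of the product entirely, so only the genuinely judgment-bearing steps pay the probabilistic tax.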
Generated child applications inherit the full GOTCHA framework — they aren't wrappers or templates, they're autonomous systems that can build their own features using the same methodology.
┌──────────────────────────────────────────────────────────┐
│ Claude Code / AI IDE │
│ (39 slash commands, 250+ MCP tools) │
├──────────────────────────────────────────────────────────┤
│ Unified MCP Gateway │
│ (single server, all 250+ tools, lazy-loaded) │
├──────────┬──────────┬───────────┬───────────┬────────────┤
│ Core │ Domain │ Domain │ Domain │ Support │
│ │ │ │ │ │
│ Orchestr │ Builder │ MBSE │ DevSecOps │ Knowledge │
│ Architect│ Complnce │ Modernize │ Gateway │ Monitor │
│ │ Security │ Req.Anlst │ │ │
│ │ Infra │ SupplyChn │ │ │
│ │ │ Simulatn │ │ │
├──────────┴──────────┴───────────┴───────────┴────────────┤
│ GOTCHA Framework │
│ Goals │ Tools │ Args │ Context │ Hard Prompts │
├──────────────────────────────────────────────────────────┤
│ SQLite (dev) / PostgreSQL (prod) │ Multi-Cloud CSP │
│ 210 tables, append-only audit │ AWS │Azure│GCP│OCI │
│ Per-tenant DB isolation │ IBM │Local/Air-Gap │
└──────────────────────────────────────────────────────────┘
```shell
python tools/dashboard/app.py
# → http://localhost:5000
```

| Page | Purpose |
|---|---|
| `/` | Home with auto-notifications and pipeline status |
| `/projects` | Project listing with compliance posture |
| `/agents` | Agent registry with heartbeat monitoring |
| `/monitoring` | System health with status icons |
| `/wizard` | Getting Started wizard (3 questions → workflow) |
| `/query` | Natural language compliance queries |
| `/chat` | Multi-agent chat interface |
| `/children` | Generated child application registry with health monitoring |
| `/traces` | Distributed trace explorer with span waterfall |
| `/provenance` | W3C PROV lineage viewer |
| `/xai` | Explainable AI dashboard with SHAP analysis |
| `/ai-transparency` | AI Transparency: model cards, system cards, AI inventory, fairness, GAO readiness |
| `/ai-accountability` | AI Accountability: oversight plans, CAIO registry, appeals, incidents, ethics reviews, reassessment |
| `/code-quality` | Code Quality Intelligence: AST metrics, smell detection, maintainability trend, runtime feedback |
| `/orchestration` | Real-time orchestration: agent grid, workflow DAG, SSE mailbox feed, prompt chains, ATLAS critiques |
| `/cpmp` | Contract Performance Management: EVM, CPARS prediction, deliverables, subcontractors, portfolio health |
| `/cpmp/cor` | COR portal: government read-only contract oversight (deliverables, EVM, CPARS) |
| `/proposals` | GovProposal lifecycle: opportunities, sections, compliance matrix, timeline, reviews |
| `/govcon` | GovCon Intelligence: SAM.gov scanning, pipeline status, domain distribution |
| `/govcon/requirements` | Requirement pattern analysis: frequency, domain heatmap, trend detection |
| `/govcon/capabilities` | ICDEV capability coverage: L/M/N grading, gaps, enhancement recommendations |
Auth: per-user API keys (SHA-256 hashed), 6 RBAC roles (admin, pm, developer, isso, co, cor). Optional BYOK (bring-your-own LLM keys) with AES-256 encryption.
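To make the key-handling model above concrete, here is a minimal sketch of per-user API keys stored only as SHA-256 digests, so a database leak never exposes a usable credential. The function names are illustrative, not ICDEV's actual API:

```python
import hashlib
import secrets

def issue_api_key() -> tuple[str, str]:
    """Generate a key for the user and the digest to store server-side."""
    raw_key = secrets.token_urlsafe(32)  # shown to the user exactly once
    digest = hashlib.sha256(raw_key.encode()).hexdigest()
    return raw_key, digest

def verify_api_key(presented: str, stored_digest: str) -> bool:
    """Hash the presented key and compare in constant time."""
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return secrets.compare_digest(candidate, stored_digest)

raw, stored = issue_api_key()
assert verify_api_key(raw, stored)
assert not verify_api_key("wrong-key", stored)
```

`secrets.compare_digest` avoids timing side channels; only `stored` ever touches the database.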
All 250+ tools exposed through a single MCP gateway. Works with any AI coding assistant:
{
"mcpServers": {
"icdev-unified": {
"command": "python",
"args": ["tools/mcp/unified_server.py"]
}
}
}

Compatible with: Claude Code, OpenAI Codex, Google Gemini, GitHub Copilot, Cursor, Windsurf, Amazon Q, JetBrains/Junie, Cline, Aider.
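One way a single gateway can expose hundreds of tools without importing them all at startup is a lazy-loading registry: handlers are registered by dotted path and imported on first call. This is a hypothetical illustration of the pattern, not ICDEV's gateway code:

```python
import importlib

class LazyToolRegistry:
    """Register tools as 'module:function' strings; import on first use."""

    def __init__(self) -> None:
        self._paths: dict[str, str] = {}
        self._loaded: dict[str, object] = {}

    def register(self, name: str, target: str) -> None:
        self._paths[name] = target  # no import happens here

    def call(self, name: str, *args, **kwargs):
        if name not in self._loaded:
            module_path, func_name = self._paths[name].split(":")
            module = importlib.import_module(module_path)
            self._loaded[name] = getattr(module, func_name)
        return self._loaded[name](*args, **kwargs)

registry = LazyToolRegistry()
registry.register("sha256", "hashlib:sha256")  # stdlib target as a stand-in
digest = registry.call("sha256", b"hello").hexdigest()
```

Startup cost stays flat no matter how many tools are registered, because each module loads only when its tool is first invoked.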
Defense-in-depth by default:
- STIG-hardened containers — non-root, read-only rootfs, all capabilities dropped
- Append-only audit trail — no UPDATE/DELETE on audit tables, NIST AU compliant
- CUI markings — applied at generation time per impact level (IL4/IL5/IL6)
- Mutual TLS — all inter-agent communication within K8s
- Prompt injection detection — 5-category scanner for AI-specific threats
- MITRE ATLAS red teaming — adversarial testing against 6 techniques
- Behavioral drift detection — z-score baseline monitoring for all agents
- Tool chain validation — blocks dangerous execution sequences
- MCP RBAC — per-tool, per-role deny-first authorization
- AI transparency — model cards, system cards, AI use case inventory, confabulation detection, fairness assessment per OMB M-25-21/M-26-04, NIST AI 600-1, and GAO-21-519SP
- AI accountability — human oversight plans, CAIO designation, appeal tracking, AI incident response, ethics reviews, reassessment scheduling, cross-framework accountability audit
- Dispatcher mode — Orchestrator agent enforced as delegate-only, cannot execute tools directly
- Tiered file access control — zero_access (.env, *.pem, *.tfstate), read_only (lock files, catalogs), no_delete (CLAUDE.md, goals, IaC)
- Session purpose tracking — NIST AU-3 compliant session intent declaration with SHA-256 integrity hashing
- ATLAS adversarial critique — multi-agent plan review with GO/NOGO/CONDITIONAL consensus before stress-testing
- Self-healing — confidence-based remediation (≥0.7 auto-fix, 0.3–0.7 suggest, <0.3 escalate)
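The self-healing bands above can be sketched as a simple confidence router. A minimal, assumption-laden illustration (function name hypothetical):

```python
def remediation_action(confidence: float) -> str:
    """Map a remediation confidence score to the bands described above:
    >= 0.7 apply the fix automatically, 0.3-0.7 suggest it to a human,
    < 0.3 escalate for review."""
    if confidence >= 0.7:
        return "auto-fix"
    if confidence >= 0.3:
        return "suggest"
    return "escalate"

assert remediation_action(0.9) == "auto-fix"
assert remediation_action(0.5) == "suggest"
assert remediation_action(0.1) == "escalate"
```

The boundaries are inclusive at 0.7 and 0.3, matching the ≥0.7 / 0.3–0.7 / <0.3 policy stated above.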
pip install -r requirements.txt
python tools/dashboard/app.py

kubectl apply -f k8s/
# Includes: namespace, network policies (default deny), 15 agent deployments,
# dashboard, API gateway, HPA auto-scaling, pod disruption budgets

helm install icdev deploy/helm/ --values deploy/helm/values-on-prem.yaml

| Profile | Compliance | Best For |
|---|---|---|
| ISV Startup | None | SaaS products, rapid prototyping |
| DoD Team | FedRAMP + CMMC + FIPS + cATO | Defense software |
| Healthcare | HIPAA + HITRUST + SOC 2 | Health IT / EHR |
| Financial | PCI DSS + SOC 2 + ISO 27001 | FinTech / Banking |
| Law Enforcement | CJIS + FIPS 199/200 | Criminal justice systems |
| GovCloud Full | All 42 frameworks | Maximum compliance |
icdev/
├── goals/ # 47 workflow definitions
├── tools/ # 500+ tools across 44 categories
│ ├── compliance/ # 25+ framework assessors, crosswalk, OSCAL
│ ├── security/ # SAST, AI security, ATLAS, prompt injection
│ ├── builder/ # TDD, scaffolding, app generation, 6 languages
│ ├── requirements/ # RICOAS intake, gap detection, SAFe decomposition
│ ├── simulation/ # Digital Program Twin, Monte Carlo, COA generation
│ ├── dashboard/ # Flask web UI, auth, RBAC, real-time events, orchestration dashboard
│ ├── agent/ # Multi-agent orchestration, DAG workflows, prompt chains, ATLAS critique
│ ├── cloud/ # 6 CSP abstractions, region validation
│ ├── saas/ # Multi-tenant platform layer
│ ├── mcp/ # Unified MCP gateway (250+ tools)
│ ├── modernization/ # 7R assessment, legacy migration
│ ├── observability/ # Tracing, provenance, AgentSHAP, XAI
│ ├── innovation/ # Autonomous self-improvement engine
│ ├── creative/ # Customer-centric feature discovery
│ ├── govcon/ # GovCon Intelligence — SAM.gov capture pipeline
│ └── ... # 30+ more specialized categories
├── args/ # 30+ YAML/JSON configuration files
├── context/ # 42 compliance catalogs, language profiles
├── hardprompts/ # Reusable LLM instruction templates
├── tests/ # 130 test files
├── k8s/ # Production Kubernetes manifests
├── docker/ # STIG-hardened Dockerfiles
├── deploy/helm/ # Helm chart for on-prem deployment
├── .claude/commands/ # 38 Claude Code slash commands
└── CLAUDE.md # Comprehensive architecture documentation
# All tests (130 test files, 1600+ tests)
pytest tests/ -v --tb=short
# BDD scenario tests
behave features/
# E2E browser tests (Playwright)
python tools/testing/e2e_runner.py --run-all
# Production readiness audit (38 checks, 7 categories)
python tools/testing/production_audit.py --human --stream
# Code quality self-analysis
python tools/analysis/code_analyzer.py --project-dir tools/ --json

Most dependencies use permissive licenses (MIT, BSD, Apache 2.0). Notable exceptions:
| Package | License | Notes |
|---|---|---|
| psycopg2-binary | LGPL | Permits use in proprietary software via dynamic linking (standard pip install) |
| docutils | BSD / GPL / Public Domain | Triple-licensed; used under BSD |
Run pip-licenses -f markdown to audit all dependency licenses.
We welcome contributions. ICDEV uses a Contributor License Agreement (CLA) to support dual licensing. The CLA does not transfer your copyright — you retain full ownership of your work.
See NOTICE for third-party acknowledgments, standards references, and architectural inspirations.
ICDEV is dual-licensed:

- Open Source — GNU Affero General Public License v3.0 or later. Free for internal use, academic research, open-source projects, and evaluation.
- Commercial — Commercial License. Removes AGPL copyleft obligations for SaaS, embedded, or proprietary use.
- Commercial licensing: agi@icdev.ai
- Issues: github.com/icdev-ai/icdev/issues
Built by one developer. Ready for your entire team.