HyperMind Examples

HyperMind Architecture

The Problem: LLMs hallucinate. They generate confident, plausible-sounding answers with no connection to reality. In enterprise contexts—fraud detection, legal research, medical diagnosis—this isn't a quirk. It's a liability.

The Solution: Ground every answer in verifiable facts. Trace every conclusion to its source. Make AI auditable.

🦀 100% Rust-Powered | ⚡ 2.78µs Lookups | 🔒 Cryptographic Proofs | 🌐 WASM + K8s


What is HyperMind?

HyperMind is a reasoning-first AI framework—built entirely in Rust, compiled to WASM—that eliminates hallucinations by construction. Not by prompting. Not by fine-tuning. By fundamentally changing how AI generates answers.

┌───────────────────────────────────────────────────────────────────────────┐
│                           HyperMindAgent                                  │
│   Natural language → SQL with graph_search() CTE → Verified answers       │
├───────────────────────────────────────────────────────────────────────────┤
│                           Runtime Layer                                   │
│            WASM (browser/edge)  |  Kubernetes (enterprise)                │
├───────────────────────────────────────────────────────────────────────────┤
│                       Query & Reasoning Layer                             │
│    SPARQL 1.1  |  Datalog  |  OWL2  |  GraphFrame  |  Motif Detection     │
├───────────────────────────────────────────────────────────────────────────┤
│                               KGDB                                        │
│    Rust-native knowledge graph  |  2.78µs lookups  |  24 bytes/triple     │
└───────────────────────────────────────────────────────────────────────────┘

5 minutes to your first AI agent with deductive reasoning:

git clone https://github.com/gonnect-uk/hypermind-examples.git
cd hypermind-examples
npm install
npm start

No servers. No configuration. Runs entirely in-memory via WASM.


The Four Layers

Layer 1: KGDB — The Foundation

What: A Rust-native knowledge graph database compiled to WebAssembly. Zero-copy semantics. Sub-microsecond performance.

Why: Traditional graph databases are too slow for real-time AI reasoning. KGDB achieves 2.78µs lookup speed—35-180x faster than RDFox—while using only 24 bytes per triple (25% more efficient than competitors).

How: String interning via a concurrent dictionary. SPOC quad indexing for O(1) pattern matching. Worst-case optimal join (WCOJ) execution for complex queries.

const { GraphDB } = require('rust-kgdb')

const db = new GraphDB('http://example.org/')
db.loadTtl(`
  @prefix ex: <http://example.org/> .
  ex:alice ex:knows ex:bob .
  ex:bob ex:knows ex:carol .
`, null)

// 2.78µs per lookup (the ex: prefix must be declared in the query)
const results = db.querySelect(`
  PREFIX ex: <http://example.org/>
  SELECT ?person WHERE { ex:alice ex:knows ?person }
`)
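The string-interning idea described above can be sketched in plain JavaScript: each IRI is mapped to a small integer exactly once, so triples store three fixed-size IDs instead of three strings. The `Dictionary` class here is illustrative only, not KGDB's internal API.

```javascript
// Illustrative sketch of dictionary encoding (not KGDB internals):
// intern each term once, store triples as compact integer IDs.
class Dictionary {
  constructor() {
    this.toId = new Map()   // term string -> integer ID
    this.toTerm = []        // integer ID -> term string
  }
  intern(term) {
    let id = this.toId.get(term)
    if (id === undefined) {
      id = this.toTerm.length
      this.toId.set(term, id)
      this.toTerm.push(term)
    }
    return id
  }
  lookup(id) { return this.toTerm[id] }
}

const dict = new Dictionary()
const triple = [
  dict.intern('http://example.org/alice'),
  dict.intern('http://example.org/knows'),
  dict.intern('http://example.org/bob'),
]
// Re-interning the same term returns the same ID — no duplicate storage.
console.log(triple)                                  // [ 0, 1, 2 ]
console.log(dict.intern('http://example.org/alice')) // 0
```

Pattern matching then compares small integers instead of strings, which is part of how fixed-cost lookups and a compact bytes-per-triple footprint become possible.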

Layer 2: Query & Reasoning — The Brain

What: A complete symbolic reasoning stack—SPARQL 1.1, Datalog rules, OWL2 inference, GraphFrame analytics, and motif detection—unified in a single query interface.

Why: AI needs more than pattern matching. It needs deductive reasoning: the ability to derive new facts from existing ones using formal rules. This is what separates "finding a document" from "proving a conclusion."

How:

| Capability | What It Does | Example |
|---|---|---|
| SPARQL 1.1 | W3C-standard graph queries | `SELECT ?x WHERE { ?x :knows :bob }` |
| Datalog | Recursive rule evaluation | `ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z)` |
| OWL2 | Semantic inference | `:workedWith` is `owl:SymmetricProperty` → auto-infer inverse |
| GraphFrame | Network analytics | PageRank, connected components, shortest paths |
| Motif Detection | Pattern discovery | Find fraud triangles: A→B→C→A |
// OWL reasoning: symmetric property auto-inference
db.loadTtl(`
  @prefix owl: <http://www.w3.org/2002/07/owl#> .
  @prefix ex: <http://example.org/> .

  ex:workedWith a owl:SymmetricProperty .
  ex:marshall ex:workedWith ex:carter .
`, null)

// Query: "Who worked with Carter?"
// Result: marshall (direct) + carter worked with marshall (inferred)
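The Datalog rule from the table above, `ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z)`, amounts to computing a transitive closure by iterating to a fixpoint. The sketch below shows that evaluation in plain JavaScript purely as an illustration of what recursive rule evaluation does; it is not the rust-kgdb Datalog API.

```javascript
// Naive fixpoint evaluation of:
//   ancestor(X,Y) :- parent(X,Y).
//   ancestor(X,Z) :- parent(X,Y), ancestor(Y,Z).
const parent = [['alice', 'bob'], ['bob', 'carol'], ['carol', 'dan']]

function ancestors(parentFacts) {
  const facts = new Set(parentFacts.map(([x, y]) => `${x},${y}`))
  let changed = true
  while (changed) {          // iterate until no new facts are derived
    changed = false
    for (const [x, y] of parentFacts) {
      for (const fact of [...facts]) {
        const [a, z] = fact.split(',')
        if (a === y && !facts.has(`${x},${z}`)) {
          facts.add(`${x},${z}`)   // derived: ancestor(x, z)
          changed = true
        }
      }
    }
  }
  return [...facts].map(f => f.split(','))
}

console.log(ancestors(parent).length) // 6: three direct + three derived
```

A production engine uses semi-naive evaluation (only re-joining newly derived facts each round), but the fixpoint idea is the same.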

Layer 3: Runtime — The Deployment

What: Two deployment modes from the same codebase—WASM for browser/edge, Kubernetes for enterprise scale.

Why: AI reasoning shouldn't require infrastructure changes. Run the same logic on a mobile device or a 100-node cluster. Same code. Same results. Different scale.

How:

| Mode | Use Case | Latency | Scale |
|---|---|---|---|
| WASM | Browser, mobile, edge devices | <10ms | Single user |
| Kubernetes | Enterprise, multi-tenant, federated | <50ms | 100K+ users |
// Same API, different runtime
const agent = new HyperMindAgent({
  name: 'fraud-detector',
  kg: db,
  runtime: 'wasm'      // or 'k8s' for enterprise
})

Layer 4: HyperMindAgent — The Orchestrator

What: The AI layer that transforms natural language questions into verified, traceable answers with cryptographic proofs.

Why: LLMs are good at language. They're terrible at facts. HyperMindAgent uses LLMs for what they're good at (understanding intent, generating queries) while grounding every answer in the knowledge graph. No hallucinations by construction.

How:

  1. Schema extraction — Auto-detect classes, properties, domains from your data
  2. Query generation — LLM generates SQL with graph_search() CTE (universal format)
  3. Execution — Rust executes query via NAPI-RS bindings
  4. Reasoning — Apply OWL/Datalog rules
  5. Proof — Generate SHA-256 hash of derivation chain
const { HyperMindAgent } = require('rust-kgdb')

const agent = new HyperMindAgent({ name: 'legal-analyst', kg: db })

const result = agent.ask('Who argued Brown v. Board of Education?', {
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-4o'
})

console.log(result.answer)           // "Thurgood Marshall, Robert L. Carter..."
console.log(result.proofHash)        // "sha256:92be3c44..." (verifiable)
console.log(result.reasoning)        // LLM's reasoning for the approach

Generated SQL with graph_search() CTE:

WITH kg AS (
  SELECT * FROM graph_search('
    PREFIX law: <http://law.gov/case#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?attorney ?name WHERE {
      <http://law.gov/case#BrownVBoard> law:arguedBy ?attorney .
      ?attorney rdfs:label ?name
    }
  ')
)
SELECT * FROM kg

The key insight: The LLM never answers from memory. It generates SQL with graph_search() CTE. Rust executes the query against facts. The facts produce the answer. Every step is traceable.


Answer Formats

HyperMindAgent returns formatted answers (not just "Found X results"):

// ask() - Dynamic Proxy with LLM code generation
const agent = new HyperMindAgent({ name: 'demo', kg: db })
const llmConfig = { provider: 'openai', apiKey: process.env.OPENAI_API_KEY, model: 'gpt-4o' }
const result = agent.ask("Who are the teammates of Lessort?", llmConfig)
console.log(result.answer)
// → "Cedi Osman, Jerian Grant, Lorenzo Brown, Kendrick Nunn, Kostas Sloukas and 106 more"

// askAgentic() - Multi-turn tool calling for complex analysis
const agenticResult = agent.askAgentic("Analyze property values across neighborhoods", llmConfig)
// → ┌────────────────────────────────────────┐
//   │ Results (111 total)                    │
//   ├────────────────────────────────────────┤
//   │ Cedi Osman                             │
//   │ Jerian Grant                           │
//   │ ...                                    │
//   └────────────────────────────────────────┘

// JSON format - Structured data (separate agent to avoid redeclaring `agent`)
const jsonAgent = new HyperMindAgent({ name: 'demo', kg: db, answerFormat: 'json' })
// → { "count": 111, "results": [...], "reasoning": {...} }

Works with or without API key. See HyperMindAgent API for details.


Examples

| Example | Description | Command |
|---|---|---|
| Self-Driving Car | Explainable AI for autonomous vehicles | npm run self-driving-car |
| Digital Twin | Smart Building IoT with HVAC automation | npm run digital-twin |
| Music Recommendation | Semantic music discovery with artist influence | npm run music |
| BRAIN | Fraud + Underwriting + HyperFederate | npm run brain |
| Euroleague Basketball | KG + OWL + RDF2Vec | npm run euroleague |
| Boston | Real estate + property valuation | npm run boston |
| Legal | US case law + mentorship chains | npm run legal |
| Fraud | Circular payment detection | npm run fraud |
| Federation | KGDB + Snowflake + BigQuery | npm run federation |
| GraphFrames | PageRank, shortest paths | npm run graphframes |
| Datalog | Rule-based reasoning | npm run datalog |
| Pregel | Bulk parallel processing | npm run pregel |



Benchmarks

Demo Pass Rates (verified January 2026)

| Demo | Pass Rate | Tests |
|---|---|---|
| Music Recommendation | 100% | 14/14 |
| Digital Twin | 100% | 12/12 |
| Boston Real Estate | 100% | 19/19 |
| Euroleague Basketball | 100% | 18/18 |
| US Legal Case | 100% | 21/21 |
| TOTAL | 100% | 84/84 |

SQL with graph_search() CTE Generation

| Metric | HyperMind (with schema) | Vanilla GPT-4 (no schema) |
|---|---|---|
| Valid SQL with CTE | 100% | 0% (markdown blocks) |
| Semantic Accuracy | 100% | 0% |

Key Points:

  • 100% Valid SQL: HyperMind always produces executable SQL with graph_search() CTE
  • 100% Semantic Accuracy: All queries return correct results from knowledge graph
  • Vanilla GPT-4 without schema context fails completely (returns markdown blocks)

Example Output (from Digital Twin demo):

WITH kg AS (
  SELECT * FROM graph_search('
    PREFIX iot: <http://smartbuilding.org/iot#>
    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    SELECT ?property ?value ?classification WHERE {
      ?serverRoom a iot:ServerRoom .
      ?serverRoom ?property ?value .
      OPTIONAL { ?serverRoom rdf:type ?classification }
    }
  ')
)
SELECT * FROM kg

Run yourself:

OPENAI_API_KEY=your-key npm run bench:hypermind


Enterprise / K8s

For production Kubernetes deployments:

Contact: gonnect.hypermind@gmail.com


Requirements

  • Node.js 14+

License

Apache 2.0
