
feat: Dhee V2.0 — Self-Evolving Cognition Plugin#11

Merged
Ashish-dwi99 merged 1 commit into main from alpha
Mar 30, 2026

Conversation

@Ashish-dwi99
Collaborator

Transforms Dhee from a memory layer into a self-improving cognition plugin for any agent framework (MCP, OpenAI, LangChain, AutoGen), including edge/humanoid hardware.

Phase 1 — Universal Plugin:

  • DheePlugin: framework-agnostic API (remember/recall/context/checkpoint)
  • DheeEdge: offline hardware plugin (GGUF+ONNX, <500MB)
  • BuddhiMini + TraceSegmenter: trainable model scaffold with 3 task heads
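
The four plugin verbs named above can be illustrated with a minimal sketch. The method names (remember/recall/context/checkpoint) come from the PR; the class body, signatures, and the naive word-overlap scoring are purely illustrative assumptions, not Dhee's actual implementation.

```python
# Hypothetical sketch of a framework-agnostic plugin surface.
# Only the method names are from the PR; everything else is assumed.
from dataclasses import dataclass, field

@dataclass
class DheePlugin:
    """In-memory stand-in for the remember/recall/context/checkpoint API."""
    _store: list = field(default_factory=list)
    _checkpoints: dict = field(default_factory=dict)

    def remember(self, text: str, tags: tuple = ()) -> None:
        self._store.append((text, set(tags)))

    def recall(self, query: str, k: int = 3) -> list:
        # Naive relevance: count words shared between query and memory.
        words = set(query.lower().split())
        scored = [(len(words & set(t.lower().split())), t) for t, _ in self._store]
        return [t for s, t in sorted(scored, reverse=True)[:k] if s > 0]

    def context(self, query: str) -> str:
        # Assemble recalled memories into a prompt-ready block.
        return "\n".join(f"- {m}" for m in self.recall(query))

    def checkpoint(self, name: str) -> None:
        self._checkpoints[name] = list(self._store)

plugin = DheePlugin()
plugin.remember("user prefers metric units")
plugin.remember("deploy target is a Jetson Orin")
print(plugin.context("which units does the user prefer"))
```

A framework adapter (OpenAI, LangChain, AutoGen) would then only need to map its own memory hooks onto these four calls.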

Phase 2 — Self-Evolving Cognition:

  • ContrastiveStore: success/failure pairs with MaTTS re-ranking
  • HeuristicDistiller: abstract reasoning patterns at 3 abstraction levels
  • MetaBuddhi: self-referential strategy mutation and promotion loop
  • ProgressiveTrainer: 3-stage SFT→DPO→RL training pipeline
  • HyperContext now includes contrasts and heuristics
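
The success/failure pairing behind the contrastive store can be sketched as follows. The class name is from the PR; the storage layout and the scoring function are assumptions, with MaTTS re-ranking approximated here by a plain success-rate score.

```python
# Hedged sketch: success/failure trace pairs with a stand-in re-ranker.
from collections import defaultdict

class ContrastiveStore:
    def __init__(self):
        # One success list and one failure list per task key.
        self.pairs = defaultdict(lambda: {"success": [], "failure": []})

    def add(self, task: str, trace: str, succeeded: bool) -> None:
        key = "success" if succeeded else "failure"
        self.pairs[task][key].append(trace)

    def rank(self, tasks):
        # Stand-in for MaTTS re-ranking: prefer higher success ratio.
        def score(task):
            p = self.pairs[task]
            total = len(p["success"]) + len(p["failure"])
            return len(p["success"]) / total if total else 0.0
        return sorted(tasks, key=score, reverse=True)

store = ContrastiveStore()
store.add("parse-json", "used json.loads", succeeded=True)
store.add("parse-json", "regex parsing", succeeded=False)
store.add("fetch-url", "no timeout set", succeeded=False)
print(store.rank(["fetch-url", "parse-json"]))  # parse-json ranks first
```

Pairing a failed trace with a successful one for the same task is also the shape of data the DPO stage of the ProgressiveTrainer would consume.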

Phase 3 — Scale:

  • EvolvingGraph: entity versioning, personalized PageRank
  • HiveMemory: multi-agent shared cognition over engram-bus
  • CRDT sync: LWW-Register, G-Counter, OR-Set for offline/edge
  • Framework adapters: OpenAI, LangChain, AutoGen, system prompt
  • EdgeTrainer: on-device LoRA micro-training (CPU, rank-4)
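
Two of the three CRDTs named above have compact reference forms. The merge semantics below follow the standard G-Counter and LWW-Register definitions; Dhee's actual wire format and replica identifiers are not shown in this PR, so this is a sketch of the technique, not of the implementation.

```python
# Standard CRDT semantics for offline/edge sync: replicas mutate
# independently, then merge deterministically in any order.
class GCounter:
    """Grow-only counter: per-replica counts, merge = element-wise max."""
    def __init__(self):
        self.counts = {}

    def increment(self, replica: str, n: int = 1) -> None:
        self.counts[replica] = self.counts.get(replica, 0) + n

    def value(self) -> int:
        return sum(self.counts.values())

    def merge(self, other: "GCounter") -> None:
        for r, c in other.counts.items():
            self.counts[r] = max(self.counts.get(r, 0), c)

class LWWRegister:
    """Last-writer-wins register: highest timestamp wins on merge."""
    def __init__(self, value=None, ts: float = 0.0):
        self.value, self.ts = value, ts

    def set(self, value, ts: float) -> None:
        if ts >= self.ts:
            self.value, self.ts = value, ts

    def merge(self, other: "LWWRegister") -> None:
        self.set(other.value, other.ts)

# Two edge replicas diverge offline, then sync.
a, b = GCounter(), GCounter()
a.increment("edge-a", 3)
b.increment("edge-b", 2)
a.merge(b)
print(a.value())  # 5
```

Because merges are commutative, associative, and idempotent, edge devices can exchange state over the engram-bus in any order and still converge.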

Research basis: DGM-Hyperagents, ERL, ReasoningBank, AgeMem, Structured Agent Distillation.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@Ashish-dwi99 Ashish-dwi99 merged commit 8a1f0e8 into main Mar 30, 2026
0 of 3 checks passed
