1 change: 1 addition & 0 deletions docs/README.skills.md
@@ -225,6 +225,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to
| [microsoft-skill-creator](../skills/microsoft-skill-creator/SKILL.md)<br />`gh skills install github/awesome-copilot microsoft-skill-creator` | Create agent skills for Microsoft technologies using Learn MCP tools. Use when users want to create a skill that teaches agents about any Microsoft technology, library, framework, or service (Azure, .NET, M365, VS Code, Bicep, etc.). Investigates topics deeply, then generates a hybrid skill storing essential knowledge locally while enabling dynamic deeper investigation. | `references/skill-templates.md` |
| [migrating-oracle-to-postgres-stored-procedures](../skills/migrating-oracle-to-postgres-stored-procedures/SKILL.md)<br />`gh skills install github/awesome-copilot migrating-oracle-to-postgres-stored-procedures` | Migrates Oracle PL/SQL stored procedures to PostgreSQL PL/pgSQL. Translates Oracle-specific syntax, preserves method signatures and type-anchored parameters, leverages orafce where appropriate, and applies COLLATE "C" for Oracle-compatible text sorting. Use when converting Oracle stored procedures or functions to PostgreSQL equivalents during a database migration. | None |
| [minecraft-plugin-development](../skills/minecraft-plugin-development/SKILL.md)<br />`gh skills install github/awesome-copilot minecraft-plugin-development` | Use this skill when building or modifying Minecraft server plugins for Paper, Spigot, or Bukkit, including plugin.yml setup, commands, listeners, schedulers, player state, team or arena systems, persistent progression, economy or profile data, configuration files, Adventure text, and version-safe API usage. Trigger for requests like "build a Minecraft plugin", "add a Paper command", "fix a Bukkit listener", "create plugin.yml", "implement a minigame mechanic", "add a perk or quest system", or "debug server plugin behavior". | `references/bootstrap-registration.md`<br />`references/build-test-and-runtime-validation.md`<br />`references/config-data-and-async.md`<br />`references/maps-heroes-and-feature-modules.md`<br />`references/minigame-instance-flow.md`<br />`references/persistent-progression-and-events.md`<br />`references/project-patterns.md`<br />`references/state-sessions-and-phases.md` |
| [mini-context-graph](../skills/mini-context-graph/SKILL.md)<br />`gh skills install github/awesome-copilot mini-context-graph` | A persistent, compounding knowledge base combining Karpathy's LLM Wiki pattern with a structured knowledge graph. Ingest documents once — the LLM writes wiki pages, extracts entities/relations into the graph, and stores raw content for evidence retrieval. Knowledge accumulates and cross-references; it is never re-derived from scratch. | `references/ingestion.md`<br />`references/lint.md`<br />`references/ontology.md`<br />`references/retrieval.md`<br />`scripts/config.py`<br />`scripts/contextgraph.py`<br />`scripts/template_agent_workflow.py`<br />`scripts/tools` |
| [mkdocs-translations](../skills/mkdocs-translations/SKILL.md)<br />`gh skills install github/awesome-copilot mkdocs-translations` | Generate a language translation for a mkdocs documentation stack. | None |
| [model-recommendation](../skills/model-recommendation/SKILL.md)<br />`gh skills install github/awesome-copilot model-recommendation` | Analyze chatmode or prompt files and recommend optimal AI models based on task complexity, required capabilities, and cost-efficiency | None |
| [msstore-cli](../skills/msstore-cli/SKILL.md)<br />`gh skills install github/awesome-copilot msstore-cli` | Microsoft Store Developer CLI (msstore) for publishing Windows applications to the Microsoft Store. Use when asked to configure Store credentials, list Store apps, check submission status, publish submissions, manage package flights, set up CI/CD for Store publishing, or integrate with Partner Center. Supports Windows App SDK/WinUI, UWP, .NET MAUI, Flutter, Electron, React Native, and PWA applications. | None |
194 changes: 194 additions & 0 deletions skills/mini-context-graph/SKILL.md
@@ -0,0 +1,194 @@
---
name: mini-context-graph
description: |
A persistent, compounding knowledge base combining Karpathy's LLM Wiki pattern
with a structured knowledge graph. Ingest documents once — the LLM writes wiki
pages, extracts entities/relations into the graph, and stores raw content for
evidence retrieval. Knowledge accumulates and cross-references; it is never
re-derived from scratch.
---

# Mini Context Graph Skill

## The Core Idea

Standard RAG re-discovers knowledge from scratch on every query. This skill is different:

1. **Wiki layer** — The LLM writes and maintains persistent markdown pages (summaries, entity pages, topic syntheses). Cross-references are already there. The wiki gets richer with every ingest.
2. **Graph layer** — Entities and relations are extracted once and stored as a navigable knowledge graph. BFS traversal answers structural queries without re-reading sources.
3. **Raw source layer** — Original documents are stored immutably with chunks. Provenance links tie every graph node and edge back to the exact text that supports it.

> The LLM writes; the Python tools handle all bookkeeping.

---

## Three Layers

| Layer | Where | What the LLM does | What Python does |
|-------|-------|-------------------|-----------------|
| **Raw Sources** | `data/documents.json` | Reads (never modifies) | Stores chunks + metadata |
| **Wiki** | `wiki/` (markdown) | Writes/updates pages | Manages index.md + log.md |
| **Graph** | `data/graph.json` | Extracts entities + relations | Persists, deduplicates, traverses |

---

## ⚡ Quick Start for Agents

```python
from scripts.contextgraph import ContextGraphSkill
from scripts.tools import wiki_store

skill = ContextGraphSkill()

# ===== INGEST WITH FULL RAG + WIKI =====
# 1. Read references/ingestion.md and references/ontology.md first
# 2. Extract entities and relations (LLM reasoning step)
entities = [
    {"name": "memory leak", "type": "issue", "supporting_text": "memory leaks cause crashes"},
    {"name": "system crash", "type": "issue", "supporting_text": "system crashes due to memory leaks"},
]
relations = [
    {"source": "memory leak", "target": "system crash", "type": "causes",
     "confidence": 1.0, "supporting_text": "System crashes due to memory leaks."},
]

result = skill.ingest_with_content(
    doc_id="doc_001",
    title="System Crash Analysis",
    source="/docs/incident_report.pdf",
    raw_content="System crashes due to memory leaks. Memory leaks occur when objects are not released.",
    entities=entities,
    relations=relations,
)
# result = {"doc_id": "doc_001", "chunk_count": 1, "nodes_added": 2, "edges_added": 1}

# 3. Write a wiki summary page for this document
wiki_store.write_page(
    category="summary",
    title="System Crash Analysis Summary",
    content="""---
title: System Crash Analysis
source_document: doc_001
tags: [summary, incident]
---

# System Crash Analysis

**Source:** incident_report.pdf

## Key Claims

- [[memory-leak]] causes [[system-crash]] (confidence: 1.0)

## Entities

- [[memory-leak]] (issue)
- [[system-crash]] (issue)
""",
    summary="Incident report: memory leaks cause system crashes.",
)

# ===== QUERY WITH EVIDENCE =====
result = skill.query_with_evidence("Why does the system crash?")
# Returns: {"query": ..., "subgraph": ..., "supporting_documents": [...], "evidence_chain": ...}

# ===== WIKI SEARCH (read wiki before answering) =====
pages = wiki_store.search_wiki("memory leak")
# Returns: [{slug, category, path, snippet}, ...]
```

---

## Operations

### Ingest

When a user provides a new document:

1. Read `references/ingestion.md` — entity/relation extraction rules.
2. Read `references/ontology.md` — type normalization rules.
3. Extract entities and relations using your LLM reasoning.
4. Call `skill.ingest_with_content(...)` — stores raw content + chunks + graph nodes + provenance.
5. **Write a wiki summary page** using `wiki_store.write_page(category="summary", ...)`.
6. **Update entity pages** — for each new or changed entity, write or refresh its page with `wiki_store.write_page(category="entity", ...)`.
7. **Update topic pages** if the document touches an existing synthesis topic.
8. A single document ingest will typically touch 3–10 wiki pages.

### Query

When a user asks a question:

1. **Check the wiki first** — `wiki_store.search_wiki(query)` to find relevant pages. Read them.
2. If the wiki has a good answer, synthesize from wiki pages (fast path).
3. If deeper graph traversal is needed, call `skill.query_with_evidence(query)`.
4. Return the answer with evidence citations from `supporting_documents`.
5. If the answer is valuable, file it back as a new wiki topic page.
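
A minimal sketch of this flow, assuming the return shapes shown in the quick start; the `"topic"` category used for the write-back is an assumption (only `"summary"` and `"entity"` appear elsewhere in this skill):

```python
from scripts.contextgraph import ContextGraphSkill
from scripts.tools import wiki_store

skill = ContextGraphSkill()
query = "Why does the system crash?"

# Steps 1-2: fast path. Search the wiki and read any matching pages first.
pages = wiki_store.search_wiki(query)  # [{slug, category, path, snippet}, ...]

if not pages:
    # Steps 3-4: fall back to graph traversal with provenance for citations.
    result = skill.query_with_evidence(query)
    citations = result["supporting_documents"]

# Step 5: if the synthesized answer is worth keeping, file it back.
wiki_store.write_page(
    category="topic",  # assumed category name, see lead-in
    title="Why the system crashes",
    content="# Why the system crashes\n\n[[memory-leak]] causes [[system-crash]].",
    summary="Synthesis: crashes trace back to memory leaks.",
)
```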

### Lint

Periodically health-check the wiki:

```python
from scripts.tools import wiki_store
issues = wiki_store.lint_wiki()
# Returns: {orphan_pages, missing_pages, broken_wikilinks, isolated_pages}
```

Ask the LLM to review and fix: broken links, orphan pages, stale claims, missing cross-references. See `references/lint.md` for full lint workflow.
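
A sketch of one maintenance pass, assuming each issue key maps to a list of affected page identifiers (only the four keys themselves are confirmed above):

```python
from scripts.tools import wiki_store

issues = wiki_store.lint_wiki()

# The agent reviews each category and rewrites the affected pages;
# the store persists the fixes.
for key in ("broken_wikilinks", "orphan_pages", "missing_pages", "isolated_pages"):
    for page in issues.get(key, []):
        # Read the page, repair links or add the missing cross-references,
        # then write it back with wiki_store.write_page(...).
        ...
```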

---

## Ingestion Constraints

- ❌ Do NOT hallucinate entities not present in the text
- ❌ Do NOT add relations without explicit textual evidence
- ❌ Do NOT add edges with confidence < 0.6
- ✅ Provide `supporting_text` for every entity and relation — this enables provenance
- ✅ Write a wiki summary page for every ingested document
- ✅ Update existing entity pages when new information arrives
- ✅ Flag contradictions in wiki pages when new data conflicts with old claims

---

## Retrieval Constraints

- 🔒 Traversal depth MUST NOT exceed 2 (config: MAX_GRAPH_DEPTH)
- 🔒 Only edges with confidence ≥ 0.6 (config: MIN_CONFIDENCE)
- 🔒 Maximum 50 nodes returned (config: MAX_NODES)
- ❌ Do NOT fabricate nodes or edges not in the graph
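
These caps live in `scripts/config.py`. A defensive check on the agent side might look like the sketch below, assuming the names are importable module-level constants and subgraphs are dicts with `nodes` and `edges` lists (neither shape is confirmed here):

```python
# Assumed constant names; confirm against scripts/config.py. Depth is
# enforced inside the retrieval engine, so only the output caps are checked.
from scripts.config import MIN_CONFIDENCE, MAX_NODES

def check_subgraph(subgraph: dict) -> None:
    """Assert a returned subgraph respects the retrieval caps."""
    assert len(subgraph.get("nodes", [])) <= MAX_NODES
    assert all(edge.get("confidence", 0.0) >= MIN_CONFIDENCE
               for edge in subgraph.get("edges", []))
```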

---

## Full Python API Reference

| Method | Purpose | When to Use |
|--------|---------|-------------|
| `skill.ingest_with_content(doc_id, title, source, raw_content, entities, relations)` | Full RAG ingest: raw docs + graph + provenance | Every new document |
| `skill.add_node(name, node_type)` | Add single entity (no provenance) | Quick additions without a source doc |
| `skill.add_edge(source_name, target_name, relation, confidence)` | Add single relation | Quick additions without a source doc |
| `skill.query(query)` | Graph-only retrieval → subgraph | Structural queries |
| `skill.query_with_evidence(query)` | Graph + provenance → subgraph + source chunks | Queries requiring citations |
| `wiki_store.write_page(category, title, content, summary)` | Write/update a wiki page | After every ingest; after answering queries |
| `wiki_store.read_page(category, title)` | Read a wiki page | Before answering; for cross-referencing |
| `wiki_store.search_wiki(query)` | Keyword search across wiki | Fast path before graph traversal |
| `wiki_store.list_pages(category)` | List all wiki pages | Getting an overview |
| `wiki_store.get_log(last_n)` | Read recent operations | Understanding wiki history |
| `wiki_store.lint_wiki()` | Health check | Periodic maintenance |
| `documents_store.list_documents()` | List all ingested raw sources | Audit / provenance checking |
| `documents_store.search_chunks(query)` | Chunk-level search | Finding specific evidence |
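
A sketch of the quick-add and audit paths, using the signatures from the table (entity names are illustrative; the `documents_store` import path is assumed to mirror `wiki_store`):

```python
from scripts.contextgraph import ContextGraphSkill
from scripts.tools import documents_store

skill = ContextGraphSkill()

# Quick additions without a source document; no provenance is recorded.
skill.add_node("garbage collector", "software")
skill.add_edge("garbage collector", "memory leak", "mitigates", 0.8)

# Structural query over the graph only (no evidence chunks).
subgraph = skill.query("what mitigates memory leaks?")

# Chunk-level evidence search for provenance checking.
chunks = documents_store.search_chunks("memory leak")
```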

---

## Design Philosophy

> "The wiki is a persistent, compounding artifact. The cross-references are already there. The synthesis already reflects everything you've read." — Karpathy

| Layer | What Happens | Who Owns It |
|-------|-----------|-------------|
| **LLM Reasoning** | Extraction, synthesis, writing wiki pages | Agent (.md guidance files) |
| **Wiki Persistence** | Index, log, file I/O | `wiki_store.py` |
| **Graph Persistence** | Dedup, index, BFS traverse | `graph_store.py`, `retrieval_engine.py` |
| **Raw Source Storage** | Immutable docs + chunks + provenance | `documents_store.py` |

The human curates sources and asks questions. The LLM writes the wiki, extracts the graph, and answers with citations. Python handles all bookkeeping.

196 changes: 196 additions & 0 deletions skills/mini-context-graph/references/ingestion.md
@@ -0,0 +1,196 @@
# Ingestion Instructions

This file defines how the agent extracts entities and relations from a raw document.

---

## Step 1: Read the Document

Read the provided text carefully. Identify:
- **Entities**: noun phrases that refer to real-world objects, systems, components, actors, concepts, or events.
- **Relations**: verb phrases that describe how one entity affects, contains, causes, uses, or is related to another.

---

## Step 2: Extract Entities

For each entity:
- Record its **name** (normalized: lowercase, strip leading/trailing whitespace)
- Assign a **type**: a short label (1–3 words) that categorizes the entity

### Entity Type Examples

| Entity Name | Suggested Type |
|-------------|---------------|
| Python interpreter | software |
| memory leak | issue |
| operating system | system |
| database | infrastructure |
| user | actor |
| API endpoint | interface |
| server | infrastructure |

**Rules:**
- Types must be general enough to reuse across documents
- Do NOT create unique types per entity (e.g., avoid `python-interpreter-type`)
- Use `ontology.md` normalization rules to canonicalize types
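
A hypothetical helper showing the normalization step (the canonical type table itself lives in `ontology.md`):

```python
def normalize_entity(name: str, entity_type: str) -> dict:
    """Lowercase and strip both fields before they enter the graph."""
    return {"name": name.strip().lower(), "type": entity_type.strip().lower()}

normalize_entity("  Memory Leak ", "Issue")
# -> {"name": "memory leak", "type": "issue"}
```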

---

## Step 3: Extract Relations

For each pair of entities with an explicit connection in the text:
- Record the **source** entity name
- Record the **target** entity name
- Record the **relation type**: a verb or verb phrase (normalized: lowercase)
- Assign a **confidence** score between 0 and 1:
- 1.0 = stated explicitly ("A causes B")
- 0.8 = strongly implied ("A is linked to B")
- 0.6 = weakly implied ("A may affect B")
- < 0.6 = do NOT include
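
Applied mechanically, the cutoff looks like this (illustrative data):

```python
candidate_relations = [
    {"source": "memory leak", "target": "system crash", "type": "causes",
     "confidence": 1.0, "supporting_text": "System crashes due to memory leaks."},
    {"source": "server", "target": "memory leak", "type": "may affect",
     "confidence": 0.5, "supporting_text": "The server may be affected."},
]

# Anything below the 0.6 floor is excluded from the output entirely.
relations = [r for r in candidate_relations if r["confidence"] >= 0.6]
```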

---

## Step 4: Output Format

Produce a JSON object in this exact format:

```json
{
  "entities": [
    { "name": "entity name", "type": "entity type", "supporting_text": "exact quote mentioning this entity" }
  ],
  "relations": [
    {
      "source": "source entity name",
      "target": "target entity name",
      "type": "relation type",
      "confidence": 0.9,
      "supporting_text": "exact quote that justifies this relation"
    }
  ]
}
```

The `supporting_text` field is **required for provenance**. It must be a verbatim or near-verbatim quote from the document that mentions or supports the entity/relation. This is what links graph nodes and edges back to their source.

---

## Rules

- All names and types must be **lowercase**
- Only include relations where **both entities** are present in the entities list
- Do NOT invent entities or relations not supported by the text
- Prefer **reusing existing entity and relation types** from the ontology over creating new ones
- One entity can appear in multiple relations (as source or target)
- Always include `supporting_text` — this enables evidence retrieval and audit trails
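
A hypothetical pre-output check covering these rules:

```python
def validate(entities: list[dict], relations: list[dict]) -> list[str]:
    """Return rule violations; an empty list means the extraction passes."""
    names = {e["name"] for e in entities}
    errors = []
    for e in entities:
        if e["name"] != e["name"].strip().lower():
            errors.append(f"entity name not normalized: {e['name']!r}")
        if not e.get("supporting_text"):
            errors.append(f"entity missing supporting_text: {e['name']}")
    for r in relations:
        if r["source"] not in names or r["target"] not in names:
            errors.append(f"relation endpoint missing from entities: {r['source']} -> {r['target']}")
        if not r.get("supporting_text"):
            errors.append(f"relation missing supporting_text: {r['source']} -> {r['target']}")
    return errors
```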

---

## Step 5: Write Wiki Pages (Required)

After calling `skill.ingest_with_content(...)`, you MUST write wiki pages:

### 5a. Write a summary page for the document

```python
from scripts.tools import wiki_store

wiki_store.write_page(
    category="summary",
    title=f"{title} Summary",
    content=f"""---
title: {title}
source_document: {doc_id}
tags: [summary]
---

# {title}

**Source:** {source}

## Key Claims

{chr(10).join(f'- [[{r["source"].replace(" ", "-")}]] {r["type"]} [[{r["target"].replace(" ", "-")}]] (confidence: {r["confidence"]})' for r in relations)}

## Entities

{chr(10).join(f'- [[{e["name"].replace(" ", "-")}]] ({e["type"]})' for e in entities)}

## Open Questions

- (Add questions from reading the document here)
""",
    summary=f"Summary of {title}",
)
```

### 5b. Write or update entity pages

For each **new** entity not already in the wiki, write an entity page:

```python
wiki_store.write_page(
    category="entity",
    title=entity_name,
    content=f"""---
title: {entity_name}
type: {entity_type}
source_document: {doc_id}
tags: [{entity_type}]
---

# {entity_name}

(Description from the document or prior knowledge.)

## Relations

(List any wikilinks to related entities extracted from relations.)

## Mentioned in

- [[{doc_id}-summary]]
""",
    summary=f"{entity_name}: {entity_type}",
)
```

For **existing** entity pages, read the current page, append the new information and updated relations, and flag any contradictions with prior claims.
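
A minimal sketch of that update path, assuming `read_page` returns the page body as a string (the API table confirms only the signature; `doc_002` is an illustrative document id):

```python
from scripts.tools import wiki_store

existing = wiki_store.read_page(category="entity", title="memory leak")

# Append the new findings; flag a contradiction explicitly if one exists.
updated = existing + (
    "\n\n## Update from doc_002\n\n"
    "- Also observed under sustained high load ([[doc_002-summary]]).\n"
)
wiki_store.write_page(
    category="entity",
    title="memory leak",
    content=updated,
    summary="memory leak: issue (updated from doc_002)",
)
```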

---

## Example

**Input document:**
```
System crashes due to memory leaks.
Memory leaks occur when objects are not released.
```

**Expected extraction output:**
```json
{
  "entities": [
    { "name": "system crash", "type": "issue", "supporting_text": "system crashes due to memory leaks" },
    { "name": "memory leak", "type": "issue", "supporting_text": "memory leaks occur when objects are not released" },
    { "name": "object", "type": "component", "supporting_text": "objects are not released" }
  ],
  "relations": [
    {
      "source": "memory leak",
      "target": "system crash",
      "type": "causes",
      "confidence": 1.0,
      "supporting_text": "System crashes due to memory leaks."
    },
    {
      "source": "object",
      "target": "memory leak",
      "type": "contributes to",
      "confidence": 0.9,
      "supporting_text": "Memory leaks occur when objects are not released."
    }
  ]
}
```