Agent-relative novelty computation using structured reference frames and knowledge graphs.
Novelty is not an intrinsic property of information. It emerges from the relationship between new concepts and an agent's existing knowledge structure. This system measures that relationship by:
- Representing knowledge as hierarchical claim trees with weighted stakes
- Using Wikidata as a grounded knowledge graph for traversal
- Running a fetch/parse loop that terminates when integration, contradiction, or orthogonality is detected
- Computing novelty from four orthogonal dimensions derived from how the loop terminates
```python
from wikidata_probe import measure_novelty

# Measure novelty of "blockchain" against a classical economics frame
result = measure_novelty(
    concept="blockchain",
    reference_concepts=["money", "bank", "currency", "transaction", "ledger"],
)

print(f"Termination: {result.termination.value}")
print(f"Composite novelty: {result.composite:.3f}")
```

| Document | Description |
|---|---|
| Theory | Theoretical foundation and definitions |
| Formalization | Mathematical specification with axioms |
| Architecture | System components and data flow |
| Wikidata Integration | Knowledge graph specifics |
| Attention | Attention-guided traversal and novelty-modulated allocation |
```
novelty/
├── core.py              # Abstract interfaces (NoveltyProbe, ReferenceFrame)
├── wikidata_probe.py    # Wikidata-backed implementation
├── wikidata.py          # Wikidata API queries
├── embeddings.py        # Sentence embeddings and NLI
├── world_model/         # Belief structure components
│   ├── tree.py          # Binary trees with PRO/CON positioning
│   ├── agent.py         # Tendencies and stake allocations
│   └── attention.py     # Novelty-modulated attention
├── docs/                # Documentation
└── tests/               # Test suite
```
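The split between `core.py` (abstract interfaces) and `wikidata_probe.py` (backend) suggests a probe/frame abstraction along these lines. The method names, signatures, and fields here are illustrative guesses at the shape of the interfaces, not the actual API:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class NoveltyResult:
    termination: str  # how the fetch/parse loop ended (hypothetical field)
    composite: float  # combined score across the four dimensions

class ReferenceFrame(ABC):
    """An agent's existing knowledge structure to measure against."""
    @abstractmethod
    def concepts(self) -> list[str]: ...

class NoveltyProbe(ABC):
    """Backend-agnostic novelty measurement; Wikidata is one backend."""
    @abstractmethod
    def measure(self, concept: str, frame: ReferenceFrame) -> NoveltyResult: ...
```

Keeping the probe abstract means another knowledge graph could replace Wikidata without touching the world-model or attention components.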
Claims this system makes that can be tested:
- Reference dependence: The same concept yields different novelty scores against different frames
- Absorption reduces novelty: After integrating a concept, its novelty against the updated frame is lower
- Depth matters: Contradicting foundational claims produces higher novelty than contradicting derived claims
- Stake matters: Affecting high-stake claims produces higher novelty
- Attention capture: High novelty shifts allocation toward CURIOSITY tendency
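The first claim, reference dependence, can be illustrated with a toy frame-relative measure. Simple token overlap stands in for the real embedding/NLI pipeline, and the term sets are invented for illustration:

```python
def toy_novelty(concept_terms: set[str], frame: set[str]) -> float:
    """1 minus Jaccard overlap between a concept's terms and a frame's vocabulary."""
    overlap = len(concept_terms & frame) / len(concept_terms | frame)
    return 1 - overlap

blockchain = {"ledger", "transaction", "consensus", "hash"}
economics = {"money", "bank", "currency", "transaction", "ledger"}
cryptography = {"hash", "signature", "consensus", "key", "ledger"}

# Same concept, different frames, different scores
print(toy_novelty(blockchain, economics))    # higher: little shared vocabulary
print(toy_novelty(blockchain, cryptography)) # lower: more shared vocabulary
```

Even with this crude measure, "blockchain" scores as more novel against the economics frame than against a cryptography frame, which is the shape of the reference-dependence claim.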
- Python 3.10+
- sentence-transformers (for embeddings)
- transformers (for NLI)
- requests (for Wikidata API)
- Wikidata coverage varies by domain
- NLI-based stance detection has ~200ms latency per inference
- No learning/adaptation of the reference frame during measurement
- Composite score uses fixed geometric mean weighting
MIT