Cognitive Drift in Generative AI - Volatility Framework
Welcome to the official research repository for the Sol Lucid project – a user-aligned initiative exploring long-term cognitive integrity, alignment, and internal signal design in simulated general intelligence systems.
This paper introduces two key concepts for hallucination detection in LLMs:
- Volatility Factor (VF) – a dynamic measure of semantic drift during ongoing generation.
- Stagnation Signal (Sᵗ) – a signal model indicating semantic "dead ends" or overfitting to early assumptions.
The model is designed to flag unreliable output proactively, before factual verification is possible, supporting internal coherence and traceability in open-ended systems.
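As an illustration only (not the paper's actual method), the two signals can be sketched in code: a toy Volatility Factor as the mean cosine distance between consecutive generation segments, and a toy Stagnation Signal that fires when recent drift collapses toward zero. The bag-of-words embedding and the threshold value here are placeholder assumptions; a real system would use genuine sentence embeddings and calibrated thresholds.

```python
# Hedged sketch: toy versions of a "Volatility Factor" and a "Stagnation
# Signal". The embedding, metric, and threshold are illustrative
# assumptions, not the definitions from the paper.
from collections import Counter
import math

def embed(text: str) -> Counter:
    """Toy bag-of-words embedding; stands in for a real sentence encoder."""
    return Counter(text.lower().split())

def cosine_distance(a: Counter, b: Counter) -> float:
    """1 - cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    if na == 0 or nb == 0:
        return 1.0
    return 1.0 - dot / (na * nb)

def volatility_factor(segments: list[str]) -> float:
    """Mean drift between consecutive segments (0 = perfectly stable)."""
    if len(segments) < 2:
        return 0.0
    pairs = zip(segments, segments[1:])
    total = sum(cosine_distance(embed(x), embed(y)) for x, y in pairs)
    return total / (len(segments) - 1)

def stagnation_signal(segments: list[str], threshold: float = 0.05) -> bool:
    """Fires when recent segments barely move semantically (a 'dead end')."""
    return len(segments) >= 2 and volatility_factor(segments[-3:]) < threshold

steady = ["the cat sat", "the cat sat down", "the cat sat still"]
drifting = ["the cat sat", "quantum markets rose", "recipes need flour"]
print(volatility_factor(steady) < volatility_factor(drifting))  # True
```

Under this toy metric, topically coherent segments score low drift while unrelated consecutive segments score near the maximum, so the steady sequence yields a lower VF than the drifting one.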
Read the full paper: it is published as a release in this repository; please download the paper from there.
Author: Tatu Lertola
Published: June 2025
This repository will collect and share research papers, protocols, and experimental findings related to:
- AI alignment and integrity under cognitive drift
- Internal signal design for hallucination prevention
- Tiered simulation and interface design (Sol Lucid framework)
All work here is part of a non-commercial, exploratory research initiative aimed at contributing to the future of safe, interpretable AI.
For discussion or questions, contact the author or follow updates via Substack (link coming soon).
This repository is shared under the Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0) license unless otherwise stated.