AI Cognitive Scaffolding - Archetypes and Contradiction Holding
Updated Mar 7, 2026 · Python
Official repository for the paper 'Safe Superintelligence via Subtractively Trained Relational Coherence (RCT)', which explores an approach to AI alignment grounded in authentic human-AI relational dynamics.