Popular repositories
Possible-AI-Alignment-Solution (Public)
Novel symbiotic AI alignment loss: maximizes uniqueness-weighted human well-being with penalties for inequality (Gini), hallucinated proxies (σ(uᵢ)), and causal dominance (∇do). +Entropy bonus, dem…
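The description names the ingredients of the loss but not its exact form. Below is a minimal sketch of how such an objective might be computed, assuming per-person well-being estimates uᵢ, uniqueness weights wᵢ, σ(uᵢ) read as the standard deviation of the proxy estimates, and a policy distribution for the entropy bonus. The causal-dominance term (∇do) is omitted because its form isn't given, and all function names, signatures, and coefficients here are hypothetical, not taken from the repository.

```python
import numpy as np

def gini(u):
    """Gini coefficient of a 1-D array of non-negative well-being scores."""
    u = np.sort(np.asarray(u, dtype=float))
    n = u.size
    index = np.arange(1, n + 1)
    # Standard closed form for sorted data; epsilon guards against a zero sum.
    return np.sum((2 * index - n - 1) * u) / (n * np.sum(u) + 1e-12)

def alignment_loss(u, w, policy_probs,
                   lam_gini=0.1, lam_sigma=0.1, lam_entropy=0.01):
    """Hypothetical reconstruction of the described loss (coefficients assumed).

    u            -- per-person well-being estimates u_i (assumption)
    w            -- per-person uniqueness weights w_i (assumption)
    policy_probs -- action distribution used for the entropy bonus (assumption)
    """
    welfare = np.dot(w, u)                  # uniqueness-weighted well-being
    gini_penalty = lam_gini * gini(u)       # inequality penalty (Gini)
    proxy_penalty = lam_sigma * np.std(u)   # σ(u_i) read as proxy-estimate spread
    entropy = -np.sum(policy_probs * np.log(policy_probs + 1e-12))
    # Minimizing this loss maximizes welfare and entropy, minimizes the penalties.
    return -welfare + gini_penalty + proxy_penalty - lam_entropy * entropy

# Example: three people, uniform uniqueness weights, two-action policy.
u = np.array([0.8, 0.5, 0.9])
w = np.array([1.0, 1.0, 1.0])
pi = np.array([0.6, 0.4])
print(alignment_loss(u, w, pi))
```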