AI safety needs social scientists
A call for social science researchers in long-term AI safety, to help understand how AI alignment schemes work when actual humans are involved.

Where this paper lives:
- Distill guide (unfortunately still using the old tag names).
- Example post from which this was cloned.
How to set up for local editing:

```shell
# Clone repo
git clone https://github.com/distillpub/post--safety-needs-social-scientists

# Install node dependencies
cd post--safety-needs-social-scientists
npm install

# Run development server
npm run dev
```
Then view the article at http://localhost:8080.