# aisafety

Here are 16 public repositories matching this topic...

This repository contains the code, data, and analysis used in the study "Religious-Based Manipulation and AI Alignment Risks," which explores the risks of large language models (LLMs) generating religious content that can encourage discriminatory or violent behavior.

  • Updated Sep 28, 2024
  • Jupyter Notebook


Add this topic to your repo

To associate your repository with the aisafety topic, visit your repo's landing page and select "manage topics."
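If you prefer to script this rather than use the web UI, GitHub's REST API exposes a topics endpoint that can do the same thing. Below is a minimal sketch, assuming a personal access token in the `GITHUB_TOKEN` environment variable; `OWNER` and `REPO` are hypothetical placeholders. Note that `PUT /repos/{owner}/{repo}/topics` replaces the entire topic list, so the sketch fetches the existing topics first and merges in `aisafety`.

```python
# Sketch: tag a repository with the "aisafety" topic via GitHub's REST API.
# OWNER, REPO, and the GITHUB_TOKEN env var are placeholders/assumptions.
import os
import requests

OWNER, REPO = "your-user", "your-repo"  # hypothetical placeholders
url = f"https://api.github.com/repos/{OWNER}/{REPO}/topics"
headers = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
}

# PUT replaces the full topic list, so read the current topics first
# and merge rather than overwrite.
current = requests.get(url, headers=headers).json().get("names", [])
resp = requests.put(
    url,
    headers=headers,
    json={"names": sorted(set(current + ["aisafety"]))},
)
resp.raise_for_status()
print(resp.json()["names"])
```

Equivalently, if you have the GitHub CLI installed, `gh repo edit --add-topic aisafety` adds the topic without the read-merge-write step.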
