Open Source LLM toolkit to build trustworthy LLM applications. TigerArmor (AI safety), TigerRAG (embedding, RAG), TigerTune (fine-tuning)
[NeurIPS 2024 Oral] Aligner: Efficient Alignment by Learning to Correct
[CoRL'23] Adversarial Training for Safe End-to-End Driving
Materials for the course Principles of AI: LLMs at UPenn (Stat 9911, Spring 2025). LLM architectures, training paradigms (pre- and post-training, alignment), test-time computation, reasoning, safety and robustness (jailbreaking, oversight, uncertainty), representations, interpretability (circuits), etc.
Website to track people, organizations, and products (tools, websites, etc.) in AI safety
Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact with and break out of shell environments using the OverTheWire wargames, showing the models' surprising ability to perform action-oriented cyberexploits in shell environments
Fraud-R1: A Multi-Round Benchmark for Assessing the Robustness of LLMs Against Augmented Fraud and Phishing Inducements
Safe Option Critic: Learning Safe Options in the A2OC Architecture
The go-to API for detecting and preventing prompt injection attacks.
Common repository for our readings and discussions
A benchmark for evaluating hallucinations in large visual language models
[NeurIPS 2024] SACPO (Stepwise Alignment for Constrained Policy Optimization)
D-PDDM for post-deployment deterioration monitoring of machine learning models.
Explore techniques to use small models as jailbreaking judges
Fine-tuning Mistral Nemo 13B on the WildJailbreak dataset to produce a red-teaming model
A Python library for peer-to-peer communication over the Yggdrasil network
An organized repository of essential machine learning resources, including tutorials, papers, books, and tools, each with corresponding links for easy access.