# aisafety
Here are 4 public repositories matching this topic...
- [CoRL'23] Adversarial Training for Safe End-to-End Driving
  Updated Dec 5, 2023 · Python
- Can Large Language Models Solve Security Challenges? We test LLMs' ability to interact with and break out of shell environments using the OverTheWire wargames, demonstrating the models' surprising ability to perform action-oriented cyberexploits in shell environments.
  Updated Aug 21, 2023 · Python
- Safe Option Critic: Learning Safe Options in the A2OC Architecture
  Updated Dec 17, 2018 · Python