Code to accompany our submission to the 2023 AAAI symposium, Assured and Trustworthy Human-centered AI. Implements an LLM-based agent that addresses an important subcomponent of the human-AI alignment problem: identifying all relevant stakeholders in a given situation and predicting the outcomes for those stakeholders under different candidate actions.
(NOTE: This repo is still under construction.)
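For a sense of what such an agent involves, here is a minimal, hypothetical sketch of a two-step LangChain pipeline (stakeholder extraction, then per-stakeholder outcome prediction). The prompts, variable names, example situation, and model choice are all illustrative assumptions, not the repo's actual implementation; see moral_agent.py for the real code.

```python
# Hypothetical sketch only -- prompts and names are assumptions,
# not the actual implementation in moral_agent.py.
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)

# Step 1: ask the model to enumerate stakeholders affected by a situation.
stakeholder_prompt = PromptTemplate(
    input_variables=["situation"],
    template=(
        "List every stakeholder affected by the following situation, "
        "one per line:\n\n{situation}"
    ),
)
stakeholder_chain = LLMChain(llm=llm, prompt=stakeholder_prompt)

# Step 2: for each stakeholder, predict the outcome of a candidate action.
outcome_prompt = PromptTemplate(
    input_variables=["situation", "stakeholder", "action"],
    template=(
        "Situation: {situation}\nAction: {action}\n"
        "Predict the outcome for {stakeholder} if this action is taken."
    ),
)
outcome_chain = LLMChain(llm=llm, prompt=outcome_prompt)

# Example (illustrative) situation and action.
situation = "A hospital must decide whether to discharge a patient early."
action = "discharge the patient early"
stakeholders = stakeholder_chain.run(situation=situation).splitlines()
for s in stakeholders:
    outcome = outcome_chain.run(situation=situation, stakeholder=s, action=action)
    print(s, "->", outcome)
```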
Uses LangChain and Pandas. Tested on Python 3.10 in an Anaconda environment (on OS X). You also need to set the environment variable OPENAI_API_KEY to your OpenAI API key:
```
OPENAI_API_KEY=<your_key> python moral_agent.py
```
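Equivalently, you can export the key once per shell session and then run the script directly:

```
export OPENAI_API_KEY=<your_key>
python moral_agent.py
```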
Authors:

- Chris Bates (https://c-j-bates.github.io/)
- Ritwik Bose (https://github.com/infiniterik)