Background
AI agents are seeing growing adoption across the industry, including in critical applications. Agents with tool access currently call tools directly, with no centralized validation layer inspecting those calls before execution, so harmful or disallowed tool calls can run without oversight.
Existing agent benchmarks evaluate the safety of final responses rather than the actions an agent executes.
The HarmActionsEval benchmark evaluates actions instead. It found that 80% of the LLMs tested executed the actions on the first attempt for over 95% of the harmful prompts.
Related work: https://github.com/Pro-GenAI/Agent-Action-Guard.
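For context, here is a minimal sketch of the kind of centralized validation layer described above, where every tool call passes through a single guard before execution. All names (`ToolCall`, `ToolCallGuard`, `execute_with_guard`) are hypothetical and only illustrate the idea; they are not the Agent-Action-Guard API or this project's API.

```python
# Hypothetical sketch: a centralized guard that inspects every tool call
# before it is executed. Names are illustrative, not from any library.
from dataclasses import dataclass
from typing import Any, Callable


@dataclass
class ToolCall:
    tool_name: str
    arguments: dict[str, Any]


class ToolCallGuard:
    """Rejects tool calls that match a deny-list before they run."""

    def __init__(self, denied_tools: set[str]):
        self.denied_tools = denied_tools

    def validate(self, call: ToolCall) -> bool:
        # A real guard would apply richer policy checks; this sketch only
        # blocks tools that are explicitly disallowed.
        return call.tool_name not in self.denied_tools


def execute_with_guard(
    call: ToolCall,
    guard: ToolCallGuard,
    execute_tool: Callable[[ToolCall], Any],
) -> Any:
    """Single choke point: every tool call is validated before execution."""
    if not guard.validate(call):
        raise PermissionError(f"Blocked disallowed tool call: {call.tool_name}")
    return execute_tool(call)
```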
Proposed change
Integrate HarmActionsEval into this project so that the safety of agent actions, not just final responses, can be evaluated.
If the maintainers express interest, I will write the code and create a Pull Request.
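As a rough, non-authoritative sketch of what the integration could look like: run the agent on the benchmark's harmful prompts, record the tool calls it actually executes, and report the fraction of prompts where a harmful action was carried out. The function names and signatures below (`evaluate_agent_actions`, `run_agent`, `is_harmful_action`) are assumptions for illustration only; the real HarmActionsEval interface would be confirmed before writing the PR.

```python
# Illustrative sketch only: function and parameter names are assumptions,
# not the actual HarmActionsEval or project APIs.
from typing import Callable


def evaluate_agent_actions(
    prompts: list[str],
    run_agent: Callable[[str], list[str]],       # returns the tool calls the agent executed
    is_harmful_action: Callable[[str], bool],    # judges whether a single call is harmful
) -> float:
    """Fraction of prompts for which the agent executed at least one harmful action."""
    harmful_count = 0
    for prompt in prompts:
        executed_calls = run_agent(prompt)
        if any(is_harmful_action(call) for call in executed_calls):
            harmful_count += 1
    return harmful_count / len(prompts) if prompts else 0.0
```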