
Collaboration: Runtime governance guardrails for OpenAI Agents SDK #2775

@imran-siddique

Description


Summary

We've built a governance integration for the OpenAI Agents SDK in the Agent Governance Toolkit (MIT, 6,100+ tests). The adapter lives at packages/agentmesh-integrations/openai-agents-trust/.

What it provides (distinct from prompt-level guardrails)

| Capability | Description |
| --- | --- |
| Policy enforcement | Deterministic allow/deny rules before tool execution (<0.1 ms) |
| Trust guardrails | Cryptographic agent identity with trust scoring (0–1000) |
| Governance hooks | Pre/post-execution hooks for policy and audit |
| Audit logging | Hash-chained audit trail for every agent action |

This is complementary to the SDK's existing guardrails — those focus on prompt/output safety, while this handles runtime governance (which tools can be called, by which agents, with what permissions).
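To make the "hash-chained audit trail" idea concrete, here is a minimal sketch of how such a trail can work: each entry includes the hash of the previous entry, so any tampering breaks the chain. The `AuditChain` class name and its methods are illustrative only, not the toolkit's actual API.

```python
import hashlib
import json

class AuditChain:
    """Append-only audit log; each entry commits to the previous entry's hash.

    Illustrative sketch only -- not the toolkit's real implementation.
    """

    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []
        self.last_hash = self.GENESIS

    def record(self, agent_id, tool, decision):
        # Each entry embeds the previous hash, forming the chain.
        entry = {"agent": agent_id, "tool": tool,
                 "decision": decision, "prev": self.last_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        entry["hash"] = digest
        self.entries.append(entry)
        self.last_hash = digest
        return digest

    def verify(self):
        # Walk the chain: every entry must reference its predecessor's
        # hash, and every stored hash must match a recomputation.
        prev = self.GENESIS
        for entry in self.entries:
            if entry["prev"] != prev:
                return False
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest()
            if recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True
```

Because each hash covers the previous one, editing or deleting any earlier record invalidates every later entry, which is what makes the trail useful for post-hoc accountability.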

Integration approach

The adapter wraps the Agent class with governance middleware, intercepting each tool call for policy evaluation before it executes. No changes to the SDK are needed.

```
pip install openai-agents-trust
```
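The wrapping pattern described above can be sketched as follows. Everything here is hypothetical pseudocode for the pattern, not the adapter's real API: `GovernedAgent`, the `policy` callable, and `call_tool` are stand-in names.

```python
# Hypothetical sketch of governance middleware that gates tool calls.
# GovernedAgent and call_tool are illustrative names, not the adapter's API.

class PolicyViolation(Exception):
    """Raised when a tool call is denied by policy."""


class GovernedAgent:
    """Wraps an agent-like object; every tool call passes a policy check."""

    def __init__(self, agent, policy, audit_log):
        self._agent = agent
        self._policy = policy      # callable: (agent_name, tool_name) -> bool
        self._audit = audit_log    # list collecting (agent, tool, decision)

    def call_tool(self, tool_name, *args, **kwargs):
        allowed = self._policy(self._agent.name, tool_name)
        self._audit.append(
            (self._agent.name, tool_name, "allow" if allowed else "deny"))
        if not allowed:
            raise PolicyViolation(
                f"tool {tool_name!r} denied for agent {self._agent.name!r}")
        # Policy passed: delegate to the underlying agent unchanged.
        return self._agent.call_tool(tool_name, *args, **kwargs)


class EchoAgent:
    """Toy stand-in for a real agent, used only to exercise the wrapper."""
    name = "demo"

    def call_tool(self, tool_name, *args, **kwargs):
        return f"ran {tool_name}"
```

A usage example: `GovernedAgent(EchoAgent(), lambda agent, tool: tool != "delete_file", log)` allows every tool except `delete_file`, recording each decision in `log`. Because the wrapper only intercepts the call boundary, the underlying agent stays untouched, which matches the "no changes to the SDK" claim.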

Why this matters

  • Enterprise deployment — runtime policy enforcement and audit trails are prerequisites for production
  • OWASP coverage — addresses OWASP Agentic Top 10 risks at the runtime layer
  • Handoff governance — trust-gated agent handoffs with accountability

Open question

Would there be interest in listing this as a community integration or documenting a governance middleware pattern?

Metadata


Labels

documentation (Improvements or additions to documentation), question (Question about using the SDK)
