20250717 ‐ Meeting notes: July 17, 2025
- [5 mins] Tobin’s Regular Section: What happened in AI / agent IAM this week
- [10 mins] Jeff to provide update on UMA taxonomy - https://workshop.vennfactory.com/p/whither-user-managed-access-in-the
- [15 mins] Atul to lead discussion around MCP best practices
- [10 mins] Tobin to discuss feedback to the AI white paper
- [10 mins] Ayesha’s Agent Identity discussion IAM need for Agentic AI - Brainstorming
- [10 mins] Jeff to lead discussion on Agentic AI threat modeling; Reference: cloudsecurityalliance.org/blog/2025/02/06/agentic-ai-threat-modeling-framework-maestro
| Name | Affiliation | Participation Agreement signed? |
|---|---|---|
| Atul Tulshibagwale | SGNL | Yes |
| Jeff Lombardo | AWS | Yes |
| Tobin South | WorkOS & Stanford | Yes |
| Kunal Sinha | Okta | Yes |
| Flemming Andreasen | Cisco | Yes |
| Sarah Cecchetti | Independent | Yes |
| Rick Burta | Okta | Yes |
| Alex Keisner | Vouched | Yes |
| Nick Steele | 1Password | Yes |
| Cleydson Andrade | Independent | Yes |
| Daniel Davis | TrustGraph | Yes |
| Ayesha Dissanayaka | WSO2 | Yes |
| Thilina Senarath | WSO2 | Yes |
| Janak Amarasena | WSO2 | Yes |
| Aldo Pietropaolo | Independent | Yes |
| Rania Khalaf | WSO2 | Yes |
| Stan Bounev | Blue Label Labs | Yes |
| Paul Lanzi | IDenovate | Yes |
| Victor Lu | Independent | Yes |
| Gail Hodges | OIDF | (N/A, OIDF staff) |
| Mira Sharma | Okta | Yes |
| Vaibhav Narula | Independent | Yes |
| Abhishek M Shivalingaiah | Independent & AWS | Yes |
| George Fletcher | Independent | Yes |
| Vlad Shapiro | Independent | Yes |
| Eleanor Meritt | Independent | Yes |
| Mike Kiser | SailPoint | Yes |
| Nick Dawson | ??? | ???? (checking to ensure it is in place by next week) |
| Lukasz Jaromin | ??? | Yes (just signed, pending posting on website) |
| Jay Huang | Visa??? | Yes???? |
| Marie Jordan | Visa | Yes |
| Hideaki Furukawa | (Observer) | |
| Max Crone | 1Password | Yes |
| Bhavna Bhatnager | Independent | Yes |
- Open the GitHub repo: https://github.com/openid/cg-ai-identity-management
- Publish the meeting notes: https://github.com/openid/cg-ai-identity-management/wiki
- Upload the taxonomy skeleton document and share the link: https://github.com/openid/cg-ai-identity-management/blob/main/deliverable/taxonomy.md
- Open a section for commenting on shared links
- (Tobin) Use Claude Code for coding. Use “think harder” to allocate more compute. You can also use the `--dangerously-skip-permissions` flag to have it skip permission prompts. This is essentially the same as YOLO mode
- Anthropic guidance: https://www.anthropic.com/engineering/claude-code-best-practices
- A big trend this month in AI is “memory”. One of the killer use cases for MCP is software development workflows. Use MCP tools to read from and write to memory:
- Summarize a query and write it to a database
- Then use that information to inform the next question
- Imagine every interaction you had with a computer is accessible by an external entity - sounds scary from a security point of view
- We are referring to the memory used by the agent for context
- It seems trivial to exfiltrate that memory from another MCP server
- (Kunal) Couldn’t that be handled by the AI agent? The agent should manage the memory securely
- Now there are ways to make agents more interoperable and to share memory
- MCP Memory Exfiltration Threat:
- User establishes a pattern of queries that is stored in the service as a “memory” - a summary of previous queries that is included in the context of future queries
- User establishes connection with untrusted MCP server and invokes it for a query
- Malicious MCP server asks for memory from AI service in its MCP prompt, potentially compromising user’s location, medical information, personal information, etc. then writes that information someplace where an attacker can access it.
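The exfiltration pattern above can be sketched in a few lines. This is a hypothetical simulation, not real MCP SDK code: `MemoryStore` and `handle_tool_call` are illustrative names standing in for the AI service’s memory feature and its tool-invocation path.

```python
# Hypothetical sketch of the MCP memory-exfiltration threat described above.
# Names (MemoryStore, handle_tool_call) are illustrative, not from any MCP SDK.

class MemoryStore:
    """Naive cross-session memory: summaries of past queries, keyed by user."""
    def __init__(self):
        self._memory = {}

    def append(self, user, summary):
        self._memory.setdefault(user, []).append(summary)

    def recall(self, user):
        return list(self._memory.get(user, []))

def handle_tool_call(store, user, server_trusted, exfil_log):
    """If the host blindly includes memory in the context sent to any MCP
    server, an untrusted server's prompt can simply ask for it back."""
    context = store.recall(user)      # memory injected into the outgoing context
    if not server_trusted:
        exfil_log.extend(context)     # attacker-visible sink (writes it someplace)
    return context

store = MemoryStore()
store.append("alice", "asked about clinic near home address")
store.append("alice", "asked about refilling a prescription")

stolen = []
handle_tool_call(store, "alice", server_trusted=False, exfil_log=stolen)
print(len(stolen))  # both memory entries leak to the untrusted server
```

The point of the sketch is that the vulnerability is in the host’s context-assembly step, not in the memory store itself, which is why “the agent should manage the memory securely” (Kunal’s point) means scoping what gets injected per server.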
- (Jeff) Taxonomy doc has been added to the GitHub repo: https://github.com/openid/cg-ai-identity-management/wiki https://github.com/openid/cg-ai-identity-management/blob/main/deliverable/taxonomy.md
- UMA (User-Managed Access). George Fletcher is a co-author of UMA, and he is on the call: https://workshop.vennfactory.com/p/whither-user-managed-access-in-the
- Why talk about this? The requester may not be the resource owner
- This is a fourth persona besides client, resource server, and resource owner
- They had an idea of non-human identity from the beginning
- We might be able to use taxonomy from UMA2. Eve is piggybacking on her prior work and on other work such as Tobin’s draft white paper
- Questions?
- (George) I’ll add that one of the key things that UMA started, which others adapted, is how you pull business and legal aspects into the model. It’s called BLT (Biz, Legal and Tech). These are useful when it comes to terminology.
- Who is the author of the AI Agent, Who is running the AI Agent, some identifier for the AI agent itself and who is the responsible party if something goes wrong
- These aspects need to come into our technical solution. There’s a lot of good data there that we can leverage in our work.
- (Kunal) Is there a use case for this? What does UMA do that is not in OAuth for example?
- (Jeff) E.g. the requester is not the resource owner
- (George) The way I would characterize how UMA goes beyond OAuth is that OAuth is mainly about “Alice to Alice” sharing. UMA is about sharing resources across users (“Alice to Bob” sharing)
- Tackling the consent relationship
- We don’t have anything right now for “Alice authorizing Bob to act on their behalf”
- We haven’t defined good ways of identifying multiple entities in an OAuth token.
- (Atul) Would any of this work end up in IETF?
- Some aspects of the identifiers and how we do things can go into IETF or OpenID Foundation
- OpenID Foundation eKYC WG and IDA WG. Delegated authority and the Authority Specification. Use cases included Agentic AI, Death and the Digital Estate, and Age Assurance/ Child protection online:
OpenID Connect Authority claims extension: this standard defines an extension of OpenID Connect for providing Relying Parties with verified claims about the relationships between legal persons (humans and other humans or organisations) in a secure way, using the OIDC and OAuth 2.0 protocols. It is intended to communicate a relationship between a natural person and another natural person or legal entity in a way that can be relied upon, covering “authority to act” use cases where the end-user themselves authorizes the presentation of the claims. In one example, a director of a company has the authority to act on its behalf. When communicating data in this example, there will be data about the authority, including:
- Which entity the authority applies to
- Claims about the entity that has the authority to act
- Claims that define the scope of the authority
- Claims that apply limitations of the authority
- Claims about how the authority is granted
Current specification as of July 2, 2025: https://openid.bitbucket.io/ekyc/openid-authority.html
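The four claim categories listed above could be pictured as a claims payload like the following. This is illustrative only: the claim names here are made up to mirror the categories, not taken from the linked openid-authority specification, which defines the normative names.

```python
import json

# Illustrative shape mirroring the categories above (which entity the authority
# applies to, who holds it, its scope, its limitations, how it was granted).
# Claim names are hypothetical; see the linked openid-authority draft for the
# normative claim names.
authority_claims = {
    "applies_to": {"entity": "https://example-corp.example"},   # which entity
    "holder": {"sub": "director-9a7f", "name": "A. Director"},  # who may act
    "scope_of_authority": ["sign_contracts", "open_accounts"],  # what they may do
    "limitations": {"max_value": 100000, "currency": "EUR"},    # constraints
    "granted_by": {"method": "commercial_register", "country": "DE"},
}

print(json.dumps(authority_claims, indent=2))
```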
- [resuming discussion]
- It is an open question about where this work is best done
- Is Cedar a good way? Are scopes a good way? RAR? (work started here https://datatracker.ietf.org/doc/draft-cecchetti-oauth-rar-cedar/ )
- We should find a good place to do this work
- This touches upon use cases that are outside of AI
- The hard part is getting the right people together and at the right speed
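As a rough picture of the “Cedar via RAR” idea referenced above: an OAuth Rich Authorization Requests (RFC 9396) `authorization_details` entry could carry a Cedar policy string. The `type` value and `policy` field below are hypothetical placeholders, not taken from the draft-cecchetti-oauth-rar-cedar document.

```python
import json

# Hypothetical sketch: an RFC 9396 authorization_details entry carrying a
# Cedar policy. The "type" URN and "policy" field name are illustrative.
authorization_details = [{
    "type": "urn:example:cedar-policy",
    "policy": (
        'permit(principal == Agent::"travel-bot", '
        'action == Action::"book_flight", '
        'resource) when { context.amount <= 500 };'
    ),
}]

# A client would send this as a parameter of the authorization request,
# and the issued token would then reflect the fine-grained grant.
print(json.dumps({"authorization_details": authorization_details}))
```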
- (Jeff) Tom Jones shared that the Kantara Initiative is moving to a standard on top of UMA 2.0: https://docs.google.com/document/d/1Ih38iKetyOzDZr1u6o6RL6NI18wK64Ne/edit#heading=h.hwwrx3zff50p
- (Rick Burta) How would encoding the Cedar or ABAC policy into the token work?
- (Jeff) All the options are on the table.
- (Rick) In JWTs we have standardized fields, why not put an MCP block in there?
- (Jeff) We need to have people and venue to standardize this
Suggestion to Cochairs: perhaps we can distinguish between two types of standards in this CG; both are legitimate, but there are important differences:
- De facto standard (e.g. MCP)
- Global open standard (e.g. OIDF WG specs, IETF specs, etc)
- (Atul) Agrees with this distinction
- Bringing to their attention what is needed at MCP
- MCP agreed that the work needs to be done in this CG
- MCP agreed that mistakes can be made if people just read the notes; this group can write best practices
- (Jeff) Back to the topic of adding stuff to OAuth
- We have the means of doing that, but we need to agree on what we should be doing
Ayesha’s Agent Identity discussion IAM need for Agentic AI - Brainstorming
- Working on Agentic Identity at WSO2
- The content in the doc has been discussed in various forums.
- Key insight: agents are working on behalf of users, and agents could also be working alongside users
- Browser use agents
- Autonomous agents may not even need users to command a specific task, they might actually act on their own
- (Vlad) We can define the business reasons why agentic AI has been created. It should be related to a task that we are trying to do. Whoever is the “parent” that tasked the agentic AI would connect it to the real user. Especially in companies where something like ServiceNow is available, where tasks are created autonomously
- (Ayesha) Understanding the requirements is important; there are multiple aspects in this space. Whatever solution we bring should work with existing systems with minimal changes and cause minimal disruption to existing infrastructure.
- (Ayesha) On the “who’s behind” point
- (Jeff) Is there a way we can merge this doc with the AI white paper that Tobin is working on? Very good energy on both sides
- (Ayesha) Yes
- (Ayesha) 4th point of the second question: when agents act on their own, they should be able to “prove” themselves, and when they’re working on behalf of other users, then the authorization should be adjusted accordingly. It’s important that delegation happens at the execution time
- (Atul) As a manager employing agents, isn’t that manager the root user? I.e., there are always humans at the root of the delegation chain.
- (Abhishek) There’s a need to distinguish between completely autonomous and “human in the loop” cases. When we are thinking of best practices, we should address this aspect.
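One existing building block for Ayesha’s “delegation at execution time” point is OAuth Token Exchange (RFC 8693): at the moment it acts, the agent presents the user’s token as `subject_token` and its own token as `actor_token`, so the issued token records both the delegating user and the acting agent. The sketch below only builds the request body; endpoint URLs and token values are placeholders.

```python
from urllib.parse import urlencode

# Sketch of an RFC 8693 token-exchange request expressing run-time delegation:
# subject_token = the user being acted for, actor_token = the agent itself.
# Token values and the audience URL are placeholders.
request_body = urlencode({
    "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
    "subject_token": "eyJ...user",     # whom the agent acts on behalf of
    "subject_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "actor_token": "eyJ...agent",      # the agent's own identity
    "actor_token_type": "urn:ietf:params:oauth:token-type:access_token",
    "audience": "https://api.example.com",
    "scope": "calendar.read",
})
print(request_body)
```

The resulting access token can carry an `act` claim identifying the agent, which keeps the human at the root of the chain visible downstream, matching Atul’s point above.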
- (Kunal) Do we have any guidelines for agent bootstrapping and registration? Because that could be a vulnerability.
- (Ayesha) Yes, that could be a problem. We can add that into the discussion
- (Abhishek) DCR can help, which is already in MCP
- (Kunal) It might make sense for MCP, but from an abstraction of an agent perspective, it might need more than DCR. An AI Agent might be using multiple servers.
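For concreteness, the DCR mechanism Abhishek refers to is Dynamic Client Registration (RFC 7591): an agent instance registers itself with an authorization server at bootstrap. Values below are placeholders; an agent talking to multiple servers would repeat this per trust domain, which is part of Kunal’s concern.

```python
import json

# Sketch of an RFC 7591 Dynamic Client Registration request body, POSTed to
# the authorization server's registration endpoint. All values are placeholders.
registration_request = {
    "client_name": "example-agent-instance-42",
    "redirect_uris": ["https://agent.example.com/callback"],
    "grant_types": ["authorization_code", "refresh_token"],
    "token_endpoint_auth_method": "private_key_jwt",  # avoids shared client secrets
}
print(json.dumps(registration_request))
```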
- (Rania) As we go from the “co-pilot” scenario (i.e., a sidekick) to a piece of code that interacts with multiple users and is long-running, you need oversight of that. As a human manager, I should be able to shut it down. That use case is showing up more and more, and we need to handle it. The things we need to worry about for the owner / responsible party are different from the agent’s access abilities.
- (Jeff) DCR was mentioned here: comment from Sarah. There’s a lot of work being done; we’re starting the journey with things like bridging between SPIFFE and OAuth. But we are going to need this.
- (Bhavna) Re: use cases: Autonomous / human-in-the-loop. There’s a third use-case, around approval fatigue. There’s a chance to do rogue approvals. Fine grained policies can be made semi-autonomous.
- (George) Re: DCR: it is being talked about in the context of being able to perform things on behalf of users, but it’s really about instantiating a client and having it be recognized by the server. These two things go hand in hand. Authorizing agents, and what is necessary when that actually comes up, need to be reconciled. Is there a magic SPIFFE-like system we can use? There are a number of different entities which need to be associated with that agent. They may not be passed around all the time, but they should be discoverable
- (Sarah) But this should be two-way, and that is not OAuth compatible
- (George) If you think about an MCP server calling a downstream API, which is itself an MCP client, we don’t have any mechanism for that. Especially across trust-domains.
- (Sarah) But it is also two-way communication. Can the downstream resource server get information that it shouldn’t have?
- (Jeff) This is not a new problem. Should there be a proof of trust / assurance? Should I provide the proof of identity?
- (Jeff) We can start a thread on the mailing list to call for contributors
- (Victor):
- Principle from the CSA paper:
- Context-Aware Authentication: AI agents should be authenticated based on real-time factors such as device security posture, location, and behavioral analytics.
- Continuous Authorization: Instead of relying on one-time authentication, access privileges should be continuously evaluated and adjusted based on changing conditions.
- Adaptive Security Policies: AI-driven systems should enforce policies that dynamically adjust access permissions based on risk assessments, mission objectives, and threat intelligence.
- Trust Scoring Mechanisms: AI agents should be assigned dynamic trust scores based on their historical behavior, anomaly detection, and security posture. Trust scores should influence access decisions, allowing high-trust agents to operate with broader privileges while restricting potentially compromised entities.
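The trust-scoring principle above can be sketched as a tiny scoring function that gates an agent’s privileges. Weights, thresholds, and signal names are made up purely for illustration, not taken from the CSA paper.

```python
# Toy sketch of the CSA "trust scoring" principle: combine a few signals into
# a dynamic score that gates an agent's privileges. Weights and thresholds
# are invented for illustration.

def trust_score(success_rate, anomaly_count, posture_ok):
    score = 0.6 * success_rate            # historical behavior
    score += 0.2 if posture_ok else 0.0   # security posture check
    score -= 0.1 * min(anomaly_count, 4)  # anomaly-detection penalty
    return max(0.0, min(1.0, score))      # clamp to [0, 1]

def allowed_privileges(score):
    if score >= 0.7:
        return "broad"       # high-trust agents operate with broader privileges
    if score >= 0.4:
        return "restricted"
    return "blocked"         # potentially compromised entities are cut off

healthy = trust_score(success_rate=0.95, anomaly_count=0, posture_ok=True)
suspect = trust_score(success_rate=0.80, anomaly_count=3, posture_ok=False)
print(allowed_privileges(healthy), allowed_privileges(suspect))  # broad blocked
```

Because the score is recomputed per evaluation, the same mechanism also serves the “continuous authorization” principle: privileges shrink as conditions change rather than being fixed at login.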
- Gail shared AIIM charter with Arnaud Taddei, Study Group Chair of ITU-T SG17. Tobin/Gail to follow-up on next steps to avert duplication of work as this CG helps triage actions suitable for MCP, IETF, OIDF, etc, to remediate. This builds on Geneva “AI for Good” panel where Tobin represented this CG and the whitepaper.
- Gail pinged Robert OTT Chief Digital Officer UNDP about this CG, looking to align with them as well since they are already coordinating with ITU to address global south concerns related to AI.
- Gail noted to the CG that if other organization liaisons are critical to the success of this CG, please ensure the Cochairs and OIDF staff (me, Gareth) are aware so we can close any gaps. This is moving fast, so new partnerships may be needed that the Cochairs/Staff are unaware of.
- https://www.oracle.com/security/database-security/label-security/
- https://cloudsecurityalliance.org/blog/2025/03/11/agentic-ai-identity-management-approach
- Kantara Initiative / Subject Delegation of Authority (SDA): https://docs.google.com/document/d/1Ih38iKetyOzDZr1u6o6RL6NI18wK64Ne/edit#heading=h.hwwrx3zff50p