Agent to Agent Communication Protocol Overview: Addressing principles and concerns
by Kyle Den Hartog
With DIDs comes a new way for computers, people, organizations, and other entities to be identified so they can interact with one another through software. But what does that software do, and how does it use DIDs to enable self sovereign identity in a way that creates tangible value? In this paper we'll cover these topics, going into detail about what we refer to as the Agent to Agent protocol, how we're meeting the principles of self sovereign identity, and some of the shortcomings that still need to be addressed.
Similar to the early days of the internet, when the IP and TCP protocols emerged, there is a need today for computers to work in tandem with their operators to identify and communicate with each other, and that's at the heart of what we aim to do with the DID spec and the Agent to Agent protocol. The Agent to Agent protocol (A2Ap) uses DIDs as a first-order feature and is designed around self sovereign principles.
In its design, we're addressing the principles of minimization and protection through layers of encryption: the sender encrypts each message so that it discloses as little as possible. This ensures that the untrusted agents needed to guarantee message availability don't get any more information than they need to do their job. Agents are compartmentalized to protect user data and to limit analysis of metadata.
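The layering described above can be sketched as onion-style wrapping: the sender encrypts the payload for the recipient, then wraps that ciphertext in one layer per routing agent, so each agent learns only its own forwarding instructions. The sketch below is a minimal illustration under stated assumptions: the `wrap_for` helper and envelope fields are hypothetical, and the XOR keystream is a toy stand-in for real authenticated encryption, not something to use in practice.

```python
import base64
import hashlib
import json

def _keystream(key: bytes, n: int) -> bytes:
    # Derive n pseudo-random bytes from the key (toy construction,
    # NOT real cryptography -- stands in for authenticated encryption).
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def toy_encrypt(key: bytes, plaintext: bytes) -> str:
    ks = _keystream(key, len(plaintext))
    return base64.b64encode(bytes(a ^ b for a, b in zip(plaintext, ks))).decode()

def toy_decrypt(key: bytes, ciphertext: str) -> bytes:
    data = base64.b64decode(ciphertext)
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def wrap_for(route, recipient_key: bytes, payload: bytes) -> str:
    """Onion-wrap a payload.

    route is a list of (hop_key, next_hop) pairs: each routing agent's
    layer reveals only where to forward the message next, never the
    payload or the full path.
    """
    # Innermost layer: only the final recipient can read the payload.
    msg = toy_encrypt(recipient_key, payload)
    # Wrap one layer per routing agent, from last hop to first.
    for hop_key, next_hop in reversed(route):
        envelope = json.dumps({"forward_to": next_hop, "msg": msg})
        msg = toy_encrypt(hop_key, envelope.encode())
    return msg
```

Each agent along the route decrypts exactly one layer, reads `forward_to`, and passes the inner ciphertext on, which is how minimization falls out of the message construction rather than out of trusting the intermediaries.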
By enabling users to determine the roles and responsibilities of their collection of agents, we're also meeting the principles of control and consent. Technically, this will be enabled through consent receipts built into message flows and through authorization policies built into microledgers and DID docs.
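One hypothetical shape such an authorization policy could take is a rule list inside the DID doc, mapping each agent to the actions the user has consented to. The field names below (`authorization`, `rules`, `allow`) are illustrative assumptions, not part of the DID spec or any microledger format.

```python
# Hypothetical authorization policy embedded in a DID document.
# Field names are illustrative only; actual DID doc and microledger
# formats may differ.
did_doc = {
    "id": "did:example:alice",
    "publicKey": [
        {"id": "did:example:alice#edge-1", "type": "Ed25519VerificationKey2018"},
        {"id": "did:example:alice#cloud-1", "type": "Ed25519VerificationKey2018"},
    ],
    "authorization": {
        "rules": [
            # The user's edge (phone) agent may do everything, including rotating keys.
            {"agent": "did:example:alice#edge-1",
             "allow": ["route", "store", "rotate_keys"]},
            # The cloud agent may only route messages, never touch keys or stored data.
            {"agent": "did:example:alice#cloud-1",
             "allow": ["route"]},
        ]
    },
}

def is_authorized(doc: dict, agent_id: str, action: str) -> bool:
    """Check whether a given agent is permitted to perform an action."""
    for rule in doc.get("authorization", {}).get("rules", []):
        if rule["agent"] == agent_id and action in rule["allow"]:
            return True
    return False
```

Because the policy lives in a document the user controls, revoking or re-scoping an agent is an edit to the DID doc rather than a request to a third party, which is what keeps control and consent with the identity owner.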
Additionally, since the user controls the roles and responsibilities of their collection of agents, they also retain consistent access to their data and decide where that data lives and how it's stored and consumed. This functionality is an important aspect because it pushes the protocol out of the technical layer and into the human layer, which has aspects that still need to be addressed. In general, data is kept on devices and managed through software written by a small subset of the consumers of the protocol. So, the enforceability of this aspect of the protocol depends on human and social expectations that have to be addressed before the protocol can enable ALL aspects of self sovereignty.
For these human and social expectations to be set and maintained in a fair and balanced way, this protocol has to be designed in an open source, transparent way. Along with transparency, by designing it in an open community and sharing source code, we're working across political and business lines to enable cross-project interoperability and portability. It's this delicate balance of addressing all nine other principles of self sovereignty that enables an individual to obtain a truly self sovereign existence in the digital age.
With every change in technologies and mental models, we encounter challenges that must be addressed, such as the assumptions we make about our uses of the technology. One key assumption to address is that a user will be able to afford and own a personal computer, like a cellphone or some other device that can compute on their behalf. By making assumptions like this we inadvertently disenfranchise the most vulnerable individuals, who need this technology most.
However, this assumption is not past the point of no return; it can still be addressed so that people without their own devices can use the technology in the same ways as people who do own devices running agent software. For example, a message family could be developed that uses a pen-and-paper cipher like LC4: the individual encrypts a message by hand, types the resulting ciphertext into software running on someone else's computer, and that software wraps it up and sends it along to the receiver.
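A minimal sketch of the software side of that flow, assuming the individual has already encrypted their note by hand with LC4 (the LC4 computation itself happens on paper and is not shown; the `wrap_hand_cipher` helper, the envelope fields, and the sample ciphertext are all hypothetical):

```python
import json
import time

def wrap_hand_cipher(hand_ciphertext: str, sender: str, recipient: str) -> str:
    """Wrap a hand-computed LC4 ciphertext in a transport envelope.

    The borrowed device never sees the plaintext or the LC4 key --
    it only forwards content that was already encrypted on paper.
    """
    envelope = {
        "type": "pen-and-paper/lc4",   # hypothetical message-family label
        "from": sender,
        "to": recipient,
        "sent_time": int(time.time()),
        "content": hand_ciphertext,    # typed in exactly as computed on paper
    }
    return json.dumps(envelope)

# The receiver's agent delivers the envelope; the receiver then decrypts
# the content by hand using the shared LC4 key.
wire_msg = wrap_hand_cipher("7yk3mxzi", "did:example:sender", "did:example:receiver")
```

The point of the sketch is that the confidentiality of the message never depends on the borrowed device, so someone without their own hardware still gets end-to-end protection.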
In conclusion, this is just one example of the assumptions we make about our technologies that unintentionally create problems for users of the system. It's up to us as designers of these systems to identify these assumptions and work to address these concerns as they arise.