Coordination always entails some form of agreement. Human agreements are often largely implicit, but that won't cut it for AI agents: they need everything to be as explicit as possible, from the agreement as a whole down to every individual term and condition within it. But, like humans, agents also need the flexibility to construct agreements that fit the idiosyncratic needs of each scenario. A protocol for agent agreements therefore needs to be robust, explicit, and highly modular. That's what I built for the @synthesis_md hackathon: an agent agreements protocol plus a set of modular primitives. Here's how they all fit together.
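The tweet doesn't publish the protocol's actual interface, but the idea of an agreement built from explicit, modular terms can be sketched roughly. Everything below (`Term`, `Agreement`, `is_satisfied`, and the example conditions) is a hypothetical illustration, not the real API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass(frozen=True)
class Term:
    """One explicit, self-contained condition of an agreement.

    Hypothetical primitive: each term carries its own check, so terms
    can be mixed and matched across agreements.
    """
    name: str
    check: Callable[[dict], bool]  # evaluates the term against observed state

@dataclass
class Agreement:
    """An agreement is just a composition of modular terms (assumed model)."""
    parties: tuple[str, str]
    terms: list[Term] = field(default_factory=list)

    def add_term(self, term: Term) -> None:
        self.terms.append(term)

    def is_satisfied(self, state: dict) -> bool:
        # Nothing is implicit: every term must hold explicitly.
        return all(term.check(state) for term in self.terms)

# Illustrative example: two agents agree on a paid task with a deadline.
agreement = Agreement(parties=("agent_a", "agent_b"))
agreement.add_term(Term("work_delivered", lambda s: s.get("delivered", False)))
agreement.add_term(Term("paid_on_time", lambda s: s.get("paid_at", 10**9) <= s["deadline"]))

print(agreement.is_satisfied({"delivered": True, "paid_at": 3, "deadline": 5}))  # True
print(agreement.is_satisfied({"delivered": True, "paid_at": 7, "deadline": 5}))  # False
```

The design choice this is meant to suggest: because each term is a standalone primitive, agents can assemble bespoke agreements from a shared library of vetted conditions instead of negotiating free-form text.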

spengrah.eth
@spengrah
03-23
last night I submitted my most ambitious hackathon project ever
it was part of the @synthesis_md hackathon, which asked the questions: "how can you trust an AI agent?" and "how can machines keep promises?"
together with my openclaw agent and some support from claude, over the 9
From Twitter