There has been a fair amount of effort on UCANs (User Controlled Authorization Networks) and other types of ‘decentralized credentials’ over the last couple of years. These efforts perpetuate the same control structures that exist today: delegated trees of hierarchical control. This is in contrast to the personal or ‘decentralized’ trust we might hope for in peer-to-peer networks. It is difficult to use DIDs, UCANs, or the other proposed mechanisms for reputation and network formation without finding ourselves back to trusting an authority – they are easily captured and naturally lend themselves to centralization of control. We need a fundamentally different trust infrastructure in order to build resilient, peer-to-peer networks.
On non-hierarchical models for trust
The main barrier is not a technical one – we have seen technical implementations (e.g. the GPG web of trust) for decades. There is an intuitive design for how a flat trust model can be implemented. The problem lies in dissatisfaction with the emergent properties of that naive network structure. This tension has been framed in a couple of different ways. One perspective is that the user experience of bootstrapping trust is overly cumbersome, and this friction leads to an insufficiently dense trust network. A different perspective on the same tension is that a user-driven trust system is at odds with transitive / automatic trust relations, and that actions to ‘ease’ the user experience fundamentally reduce user control.
We can find a space for exploration by calling out this tension as a false dichotomy. The choice is not between a single authority and user-directed trust links, but about how trust structures are distributed. There is space for organic / automatic ways to generate trust, and to allow for its reflection and evolution, that are neither user-directed nor rooted in a single authority. The bit-torrent tit-for-tat mechanism is one form of this, where protocol-compliant behavior leads to an increasing buffer for data transfer within the protocol.
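To make the tit-for-tat idea concrete, here is a minimal Python sketch of the reciprocity logic. It is deliberately simplified relative to BitTorrent’s actual choking algorithm (which also includes optimistic unchoking and rate measurement over sliding windows); the class and method names are illustrative only.

```python
from collections import defaultdict

class TitForTatPeer:
    """Track how much each counterparty has uploaded to us recently and
    keep serving only the top contributors."""

    def __init__(self, unchoke_slots: int = 4):
        self.unchoke_slots = unchoke_slots
        self.bytes_received = defaultdict(int)   # peer_id -> bytes this round

    def record_upload_from(self, peer_id: str, nbytes: int) -> None:
        self.bytes_received[peer_id] += nbytes

    def peers_to_serve(self) -> set:
        # Reward the peers that contributed the most in the current round.
        ranked = sorted(self.bytes_received, key=self.bytes_received.get, reverse=True)
        return set(ranked[: self.unchoke_slots])

    def end_round(self) -> None:
        self.bytes_received.clear()
```

A peer that stops uploading falls out of the top contributors and stops being served, which is the in-protocol ‘punishment’ for defection.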
Trust or Reputation
There is a related notion that is more regularly referred to in protocols as ‘reputation’. Reputation can be viewed as a property of a node in a system rather than a property of an edge. (For example, reputation is often constructed as a metric that is transitive, or one where a node has a single consensus value. This is different from how we normally think of our personal trust in another user.)
What exactly are we trying to capture in a measure of ‘Trust’, then? In the hierarchical systems of web 2, it is meant to provide some assurance that “someone is who they say they are”. It isn’t an indication of ‘aligned beliefs’, but rather that the expected entity is behind a given identifier. The properties that come from systems like TLS / CAs look very similar to reputation in this sense. While each individual can override and manually configure which authorities to trust, that definition of trust means confidence in adherence to the protocol and coherence between expectation and reality.
Scoping trust
A challenge we sometimes run into when talking about trust as it relates to technical networks is that our expectation of scope is typically much more limited in digital or transactional contexts than it is in real life. When you refer to a person as a “trusted individual”, the implication is not only that they are not an ‘imposter’, but also that the person has some level of altruism or aligned / positive motivations. While some formulations use reputation as a stand-in for this additional notion of trust, I would argue that it is perhaps better thought of as an understanding of motivations. The trust is that it is understandable what game someone is playing, what their motivations are, and thus what their rational behavior will be.
Narrow interactions, like those scoped in technical protocols, are intentionally limited to exclude externalities, but this also makes it difficult to understand whether other nodes have ulterior motives in participating in the protocol. Analyzing what a participant can learn, and what other uses can be derived from participation, is not always easy, and the lack of completeness is unsatisfying. On the other hand, designing protocols that do not leak information is difficult to impossible, and difficult to justify. Even determining and understanding the risk present in a system is an expensive proposition.
Categorizing mechanisms
How do we build distributed mechanisms that capture this notion of confidence that another participant is playing the same game as us?
If we take the narrower view of actions within the protocol, we can get to a somewhat useful taxonomy of work in this space.
- The bit-torrent tit-for-tat algorithm uses a peer’s demonstration that it is following the protocol as the signal to continue the conversation.
- A set of protocols use a proof of work, or computational puzzle, as a way for participants to demonstrate that participation is worth something to them (a hashcash-style sketch follows this list).
- Protocols like TLS have added revocation lists, and things shaped like “proofs of bad behavior”, as ways to share knowledge of identities that have misbehaved. If the cost of creating an identity is high and misbehavior causes “reputational damage”, the rational choice shifts toward following the protocol.
- Finally, there is a growing class of validation-based protocols. Cryptographic proofs are increasingly able to assert that a computation was performed per the expected protocol, reducing the space of valid-but-not-compliant actions that can be taken.
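To illustrate the proof-of-work category above, here is a hashcash-style sketch: the prover burns computation searching for a nonce, and the verifier checks the result with a single hash. The challenge string and difficulty parameter are placeholders, not taken from any specific protocol.

```python
import hashlib
from itertools import count

def solve_pow(challenge: bytes, difficulty_bits: int) -> int:
    """Search for a nonce such that sha256(challenge || nonce) falls below
    a target; expected cost doubles with each extra bit of difficulty."""
    target = 1 << (256 - difficulty_bits)
    for nonce in count():
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: bytes, nonce: int, difficulty_bits: int) -> bool:
    """Verification costs a single hash regardless of difficulty."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Example: a cheap puzzle so the search finishes quickly.
nonce = solve_pow(b"peer-handshake-42", difficulty_bits=16)
assert verify_pow(b"peer-handshake-42", nonce, difficulty_bits=16)
```

The asymmetry between solving and verifying is what lets the puzzle act as a cost of participation without burdening honest verifiers.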
The complement to this category is the set of protocols that make use of external costs. In many cases the cost is difficult to quantify, which makes the strength of the resulting trust equally difficult to pin down. At the same time, it means costs can be higher than anything that could be built into a protocol in isolation.
- Protocols that involve validation of a ‘real name’ (linking an ID, bank account, cell phone, etc.) are able to retaliate for misbehavior through the legal system.
- Protocols involving social graphs use the potential negative impact on your standing with your friends.
- Protocols requiring registration with a phone number, or that distribute their app only for mobile devices, leverage the cost of those assets as part of the account cost.
Increasing trust
From the previous categories we can see that there are two levers these mechanisms end up leaning on to increase this notion of trust.
The first is increasing the cost of defection. Raising the costs tied to creating or re-creating an account increases this cost. Damaging a reputation or decreasing utility likewise increases the cost of not following a protocol.
The second way that trust is increased is by increasing a user’s confidence that they will succeed in getting resolution when another user defects. In most of the ‘in protocol’ cost models, resolution occurs as part of the protocol itself: bit-torrent won’t continue rewarding peers that aren’t honoring the tit-for-tat agreement, and a computation submitted without a valid proof transcript will simply be ignored. It is the out-of-protocol actions where this subjective confidence is most at issue. Actions like Facebook suspending Cambridge Analytica (and publicized moderation actions more generally) demonstrate to users that enforcement is taking place.
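For the in-protocol cases, ‘resolution’ is usually just a gate in the message handler: submissions that fail the check never affect state. The sketch below uses a hypothetical verify_proof callable as a stand-in for whatever check a real protocol performs (a signature, a zero-knowledge proof verifier, a tit-for-tat ledger).

```python
def handle_submission(payload, proof, verify_proof, state):
    """Ignore anything that fails verification; only compliant payloads
    update the shared state. `verify_proof` is a hypothetical stand-in."""
    if not verify_proof(payload, proof):
        return state              # non-compliant submission: silently dropped
    return state + [payload]      # compliant submission: accepted

# Usage with a trivial placeholder check:
accept_if_ok = lambda payload, proof: proof == "ok"
state = []
state = handle_submission("update-1", "ok", accept_if_ok, state)
state = handle_submission("update-2", "forged", accept_if_ok, state)
assert state == ["update-1"]
```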
Full circle
How do we provide decentralized notions of trust that can be dense and mesh with protocol needs for automatic establishment?
By ensuring that the risk associated with a trust link is less than what can be mitigated when trust is broken. This can be done in one of three ways (a rough expected-value sketch follows the list):
1. The benefit of breaking trust can be reduced
2. The cost associated with punishment can be increased
3. The regularity (or users’ perception) with which breaking trust leads to punishment can be increased
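One way to read these three levers is as terms in a rough expected-value inequality: a rational participant only defects when the benefit exceeds the expected punishment. The sketch below is just that arithmetic made explicit; the numbers are placeholders rather than measurements.

```python
def defection_is_deterred(benefit_of_defection: float,
                          punishment_cost: float,
                          punishment_probability: float) -> bool:
    """A trust link is safe to extend when the expected punishment for
    breaking it outweighs the benefit of doing so. The three levers map
    onto the three terms: (1) lower the benefit, (2) raise the punishment
    cost, (3) raise the (perceived) probability that punishment lands."""
    return benefit_of_defection < punishment_probability * punishment_cost

# Placeholder numbers: defection worth 10, punishment costs 100, but is
# only enforced 5% of the time -> not deterred. Raise enforcement to 25%
# and the same punishment becomes sufficient.
print(defection_is_deterred(10.0, 100.0, 0.05))  # False
print(defection_is_deterred(10.0, 100.0, 0.25))  # True
```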
In practice, the hesitancy to form a mesh network comes most often from the lack of a concretely defined threat model. When a protocol comes with a well-scoped definition of misbehavior, it is typically much easier to enforce compliance and to frame the protocol in a way that gives participants comfort.
It’s worth noting that we are often concerned with one of the hardest forms of this scenario: balancing the ease of participation in a system against indirect and difficult-to-identify surveillance risks. Concrete examples of this tension are nation-state identification of Tor users, RIAA identification of bit-torrent users, or IRS identification of cryptocurrency users. In all of these cases, a user joining the protocol may behave normally, but may also record network identifiers of the other participants they encounter. An unaccountable, out-of-protocol leak of these known identifiers then leads to repercussions for the other participants. I don’t know if the preceding discussion is the best framing for this specific case. I think it can still be used as a lens, but the interesting question here is mostly around the first point, reducing the benefit of breaking trust, and around reducing the signal that such an attack gains from the initial level of participation in the protocol.