Psychology of the dWeb: Incentives to cooperate on the decentralized internet

by Amber Cazzell, July 31st, 2019

Interest in the dWeb is blossoming, but decentralized systems require cooperation in order to work. If Richard Hendricks’ dream of the new internet is to come to fruition, it’s going to mean we’re relying on our friends, neighbors, and even strangers to provide us with information and connection to the outside world. It’s going to take communities of ultra-cooperation to work. But can we rely on networks of people (some or all of whom are self-interested) to deliver? Call me the optimist, but I think humans actually have the necessary "stuff" to make it work well. Here, I lay out some of our evolved moral fabric, and how that moral fabric can be tailored to socially scale the decentralized web.

Cooperation refers to win-win interactions among peers. This is in contrast to one person profiting from a transaction at the expense of the other (cooperation is not altruism or free-riding). One of the most common forms of cooperation is direct reciprocity: “you give me this, and I’ll give you that.” Reciprocity is an incredibly strong ethic in humans—one that Benjamin Franklin picked up on when doling out a perhaps surprising piece of wisdom: “If you want to make a friend, ask for a favor.” This is effective because it makes future interactions more likely; the person who did you the favor is more likely to seek you out and ask for one in return. This back-and-forth keeps the social balance sheet open. Salespeople and charities have also picked up on the effectiveness of appealing to our ethic of reciprocity. Perhaps you’ve received personalized address labels or cute kitten calendars in the mail—a gift accompanied by a pre-stamped donation return envelope.

This reciprocity creates social capital that helps us all survive. We all come out ahead when we return favors, or when we collaborate to accomplish something that’s impossible to achieve alone. Studies of primates have revealed that cooperation is generally paid out with equitable resources to encourage future interaction—for instance, when primates hunt big game together, the individuals that actually capture the game tend to receive more of the meat than those who participated but were less pivotal, yet everyone gets a share. This equitable reimbursement ensures that those primates continue to have buddies to hunt with in the future. Nobody wants to include selfish, meat-hoarding monkeys on their hunts. Humans have evolved reciprocity to an even greater degree.

But, as this example may imply, constant cooperation is not a foolproof technique to ensure one’s own (or even the whole species’) survival. Constant cooperation is exploitable; it only takes one free-rider to ruin its effectiveness by stripping the “always cooperators” of their resources. This was the downfall of Napster’s successor, Gnutella. Gnutella got around the centralized directory that Napster relied on, but did so by assuming that its user base would cooperatively upload files as requests from other users came along. Unfortunately, it only takes a few free-riders to ruin it for the cooperators, and in Gnutella’s case there were a lot: the vast majority of its user base would request music but wouldn’t upload any. The burden became too heavy for the cooperative users, and Gnutella largely fell apart. It had a faulty premise about human nature: we don’t always cooperate.

In fact, even when it seems to be in our best interest, humans aren’t “always cooperators.” Today, game theorists illustrate this issue with the prisoner’s dilemma. Two partners in crime are arrested on minor charges and questioned separately. Each is told that if they rat out their partner on a larger crime (and their partner stays quiet), they will walk free. If both stay quiet, they will receive only light prison time for the minor charges. If, however, their partner rats them out for the larger crime too, then they will face a long prison sentence. In a single round of the prisoner’s dilemma, people tend to defect on their crime partner, and this results in a bad outcome: long prison sentences for both.
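
To make the incentive structure concrete, here is a minimal sketch of a single round. The specific sentence lengths are my own illustrative choices, not taken from any particular study:

```python
# Hypothetical payoffs for a single round of the prisoner's dilemma, expressed
# as years in prison (lower is better). The exact numbers are illustrative only.
# Each entry maps (my_move, partner_move) -> (my_sentence, partner_sentence).
PAYOFFS = {
    ("quiet", "quiet"): (1, 1),    # both stay quiet: light sentences
    ("quiet", "rat"):   (10, 0),   # I stay quiet, partner rats: I take the fall
    ("rat",   "quiet"): (0, 10),   # I rat, partner stays quiet: I walk free
    ("rat",   "rat"):   (8, 8),    # we both rat: long sentences for both
}

def outcome(my_move: str, partner_move: str) -> tuple:
    """Return (my_sentence, partner_sentence) for one round."""
    return PAYOFFS[(my_move, partner_move)]

# Whatever my partner does, ratting gives me a shorter sentence than staying
# quiet -- so two self-interested players both rat, and both serve 8 years
# instead of the 1 year each they'd get by cooperating.
print(outcome("rat", "rat"))   # (8, 8)
```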

Perhaps obviously, behaviors in the prisoner’s dilemma change dramatically if players are required to play multiple rounds with the same partner. That is, we’re more likely to cooperate with each other when we’re stuck with each other for the foreseeable future. Evolutionarily, we want to keep our ingroups happy, because we partially depend on them for our survival. (Don’t make the one guy who knows how to make fire mad at you just before winter…) But if always cooperating is exploitable, and making people mad is bad for business, what’s the right balance?

The mathematical psychologist Anatol Rapoport developed an algorithm that won a tournament of repeated prisoner’s dilemma games and largely solved the cooperation strategy problem. His solution operates off a familiar principle: Tit-for-Tat. Cooperation between peers is maximized according to these simple rules: 1. Cooperate on first interactions with a stranger. 2. On subsequent interactions, copy what the other player did (cooperate/defect) in the prior round. That’s it. If two peers play according to this algorithm, they will always cooperate. If one of the players doesn’t use this algorithm, they experience punishment until they change their ways. Thus, the initially uncooperative peer is incentivized to cooperate in future interactions.
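
A minimal sketch of those two rules as code; the move labels and round count are just illustrative choices:

```python
def tit_for_tat(my_history: list, their_history: list) -> str:
    """Rapoport's strategy: cooperate first, then copy the partner's last move."""
    if not their_history:
        return "cooperate"        # rule 1: open by cooperating with strangers
    return their_history[-1]      # rule 2: mirror whatever they did last round

def play(strategy_a, strategy_b, rounds: int = 10):
    """Play repeated rounds and return both players' move histories."""
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        history_a.append(move_a)
        history_b.append(move_b)
    return history_a, history_b

# Two tit-for-tat players lock into mutual cooperation from the first round.
print(play(tit_for_tat, tit_for_tat, rounds=5))
```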

Bram Cohen used the Tit-for-Tat principle as the backbone of the now-popular BitTorrent protocol he created. Gnutella transferred whole files between peers at a time, effectively operating as a single round of the prisoner’s dilemma. By breaking files up into small pieces, BitTorrent forced the file transfer process into a repeated prisoner’s dilemma. As users download pieces of their target file from a given swarm, they simultaneously upload pieces to the peers from whom they’ve downloaded the most data recently. In contrast to Gnutella, BitTorrent successfully coordinated cooperative file-transferring, and at its peak was reported to account for 43% of internet traffic.
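
The real BitTorrent choking algorithm has more moving parts (optimistic unchoking, time windows, and so on), but a rough sketch of the reciprocity idea might look like this; the function name, field names, and slot count are my own assumptions:

```python
def choose_upload_peers(recent_download_bytes: dict, slots: int = 2) -> list:
    """Upload next to the peers who have sent us the most data recently.
    Peers who only take and never give drift to the bottom of the ranking."""
    ranked = sorted(recent_download_bytes, key=recent_download_bytes.get, reverse=True)
    return ranked[:slots]

# Illustrative swarm: peer_c has contributed nothing recently, so it isn't served.
swarm = {"peer_a": 9_500_000, "peer_b": 120_000, "peer_c": 0, "peer_d": 4_200_000}
print(choose_upload_peers(swarm))   # ['peer_a', 'peer_d']
```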



But hold up—the problem of cooperation in peer-to-peer networks hasn’t been totally solved. These sorts of "cooperation" algorithms are still exploitable if the assumptions they are based on are faulty—namely, if the repeated-rounds assumption doesn’t hold. One way this assumption can be violated in P2P systems is when groups are sufficiently large that free-riders can effectively avoid repeat rounds of cooperation dilemmas. The tit-for-tat principle doesn’t work very well in large groups, because you can keep trading partners without fear that you’ll run out of “suckers” to interact with. If everybody follows rule #1 of the Tit-for-Tat principle, then free-riders can take advantage of that initial cooperation by interacting with each person exactly once.
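
A toy simulation of why sheer size breaks the assumption; the population and round counts are arbitrary numbers I picked for illustration:

```python
import random

# Toy illustration: in a big enough pool, a free-rider almost never meets the
# same partner twice, so tit-for-tat's rule 2 (retaliation) never gets a chance
# to fire. Population size and round count are arbitrary choices.
POPULATION = 100_000
ROUNDS = 1_000

partners_seen = set()
repeat_meetings = 0
for _ in range(ROUNDS):
    partner = random.randrange(POPULATION)
    if partner in partners_seen:
        repeat_meetings += 1     # only a repeat meeting could punish past defection
    partners_seen.add(partner)

print(repeat_meetings)           # typically zero or a handful out of 1,000 interactions
```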

And we actually saw algorithmic responses to BitTorrent that let you download files without uploading anything, by constantly finding new “swarms” from which to download the next needed file pieces (without giving pieces back). One example is BitThief. The internet is nothing if not a massive group, so there’s no shortage of cooperative “suckers” online. So now what? The dWeb requires, by definition, large groups of people.

Well, one option is for large groups to evolve a collective ethic of punishing people who have harmed others (not just punishing those who harm the self). But in order to do this, humans would need to be able to detect free-riders. As it turns out, humans are very sensitive to detecting cheaters—more so than they are sensitive to similar unfair outcomes happening by “accident.” For instance, babies as young as 15 months are surprised when a person distributes goods unequally (i.e. when people who put the same levels of work in get different payoffs) but show no such surprise when the unfair outcome is distributed via non-conscious actions (i.e. physical constraints of the environment cause the "unfair" payoffs).

But detecting cheaters isn’t enough to deter them; they must actually be punished. And, indeed, enacting third-party punishment conferred reproductive advantages on people interacting in large groups. That is, humans evolved to punish wrongdoing even when the wrongdoing was not directed at themselves. This tendency has been confirmed by psychological studies: not only are humans willing to punish third-party bad actors, but they are willing to incur personal cost to do so, and they experience pleasure in correcting third-party free-riders in spite of these costs.

But we still aren't home free. One unique challenge the digital world faces in detecting and then punishing cheaters is reputation management. Individuals in large groups can protect themselves from third-party punishment if their identities are unknown. Two common ways of gaming online reputation are Sybil attacks (one person creates multiple accounts to boost their online standing) and whitewashing (a person opens a new account to shed the bad reputation attached to their old one). Decentralized identity platforms like Sovrin or Iris may help by making it sufficiently difficult to create new identities, though it is unlikely that any platform can be completely immune to these reputation management schemes.
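
I don't know the internals of Sovrin or Iris, but the general shape of a whitewashing-resistant reputation ledger might look something like this sketch; the class, field names, and thresholds are all hypothetical:

```python
class ReputationLedger:
    """Hypothetical sketch: new identities start at the floor, so abandoning a
    bad reputation buys nothing, and identity creation is assumed to carry some
    outside cost (stake, vouching, proof of personhood) that makes Sybil armies
    expensive. Not modeled on any real platform's API."""

    NEW_IDENTITY_SCORE = 0.0     # a whitewashed account gets no head start

    def __init__(self):
        self.scores = {}

    def register(self, identity: str) -> None:
        self.scores[identity] = self.NEW_IDENTITY_SCORE

    def record_cooperation(self, identity: str, amount: float = 1.0) -> None:
        self.scores[identity] = self.scores.get(identity, self.NEW_IDENTITY_SCORE) + amount

    def is_trusted(self, identity: str, threshold: float = 3.0) -> bool:
        return self.scores.get(identity, self.NEW_IDENTITY_SCORE) >= threshold
```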

Recently there have been a number of suggestions for making peers in a decentralized network offer up some sort of “stake” which can be “burned” or "slashed" in the event punitive action needs to be taken against a troll or free-rider. The logic is that people won’t free-ride or otherwise be uncooperative if doing so incurs a cost to themselves. Alongside digital identity, stake-slashing offers a potential solution to such bad actors, because stake could theoretically be burned before they have an opportunity to withdraw from a platform. But these sorts of suggestions need to be considered gingerly, particularly for social networks.
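
As a sketch of the mechanism (not any particular protocol's contract; the field names, amounts, and lock-up period are assumptions on my part):

```python
from dataclasses import dataclass

@dataclass
class Stake:
    peer_id: str
    amount: float         # tokens locked as collateral against bad behavior
    locked_until: int     # block height before which withdrawal is refused

    def slash(self, penalty: float) -> float:
        """Burn up to `penalty` from the stake; return what was actually burned."""
        burned = min(penalty, self.amount)
        self.amount -= burned
        return burned

    def withdrawable(self, current_block: int) -> bool:
        """Stake can only leave the system once the lock-up has expired."""
        return current_block >= self.locked_until

# Because the stake stays locked until `locked_until`, a misbehaving peer
# can't pull their collateral out before a punishment lands.
stake = Stake(peer_id="peer_x", amount=100.0, locked_until=500_000)
print(stake.slash(25.0), stake.amount)   # 25.0 75.0
```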

Humans are prone to enacting revenge, and bad actors could, in turn, burn the stakes of innocent others to discredit their punishers. This can create a spiral of retaliatory stake-burning and anti-social behavior—probably the exact opposite of what the dWeb needs. One way to curb such stake-burning is to require the punisher to incur a cost themselves—though even this suggestion should be implemented with caution. As noted previously, people don’t mind incurring small costs for the pleasure of punishing bad actors. Online, the potential pool of punishers is large, and it isn’t hard to imagine micropunishments from numerous third parties adding up to something far too harsh. Punishments should probably be local in some sense—meaning the free-rider is punished only in the context of the relationship between the wrongdoer and the punisher, not globally across a protocol.
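
A sketch of what "local" punishment could mean in code: each peer maintains its own view of who wronged it, rather than triggering a network-wide penalty. The class and method names here are hypothetical:

```python
class LocalTrust:
    """Each peer keeps a private blacklist. Punishment costs the wrongdoer this
    one relationship, not their standing with every peer on the network."""

    def __init__(self):
        self.blocked = set()

    def punish(self, peer_id: str) -> None:
        self.blocked.add(peer_id)          # refuse to cooperate with this peer again

    def will_cooperate_with(self, peer_id: str) -> bool:
        return peer_id not in self.blocked
```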

Another consideration in bad-actor detection is allowing for a margin of error. Humans are pretty bad at reading tone and intention in online communications. It’s not difficult to imagine a miscommunication in which one party burns another’s stake after misunderstanding what was intended. Likewise, peers on a network that accidentally go down might look as though they are free-riding. For instance, if a peer in a BitTorrent swarm went down multiple times, honest nodes might conclude that the peer is using BitThief and throw it off the network.

Fortunately, the evolution of morals also gives us tips about how we might handle these sorts of cooperation “accidents” and previous bad actors turned cooperative: forgiveness. Forgiveness is clearly seen throughout human social functioning, and it’s rampant in chimpanzees and bonobos as well. Primatologist Frans de Waal has noted that primates, after a fight, are often seen extending hands to one another and offering comfort in the form of hugs or sex. Typically, reparations are initiated by the bad actor (or the peer who did the questionable behavior of going offline). And there is computational evidence to back up the usefulness of forgiveness mechanisms: simulation studies have revealed that resources are maximized in groups which instill some sort of program of forgiveness.

In fact, forgiveness is the only performance tweak to the tit-for-tat strategy that I am aware of. It’s called “generous tit-for-tat,” and it’s not hard to see why it offers an improvement in the real, messy world of misunderstandings and hardware malfunctions. Imagine that you’re playing repeated rounds of the prisoner’s dilemma. Things seem to be going well, and both you and your partner are operating on the original tit-for-tat principle. Then, all of a sudden, you go offline. When you come back online and interact with the peer, a funny thing happens: they screw you over. What neither of you realizes is that they thought you screwed them over when you went offline, so they are returning your free-riding favor. And since you both run on tit-for-tat, you’re now stuck in an endless loop of retaliation. Generous tit-for-tat offers a simple tweak: if your partner defects, usually reciprocate the defection, but occasionally cooperate anyway. This forgiveness tweak allows the peers to restore cooperation, thus extracting mutual benefits once more.
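
Here's a sketch of generous tit-for-tat applied to the outage scenario above. The 10% forgiveness rate is an arbitrary choice of mine; the right value depends on how noisy the environment actually is:

```python
import random

def generous_tit_for_tat(their_history: list, forgiveness: float = 0.1) -> str:
    """Cooperate first; afterwards mirror the partner's last move, except that
    a defection is occasionally forgiven rather than echoed back."""
    if not their_history:
        return "cooperate"
    if their_history[-1] == "defect" and random.random() < forgiveness:
        return "cooperate"                 # forgive and try to restart cooperation
    return their_history[-1]

# Round 0: my outage made it look like I defected while my partner cooperated.
mine, theirs = ["defect"], ["cooperate"]
for _ in range(50):
    my_move = generous_tit_for_tat(theirs)
    their_move = generous_tit_for_tat(mine)
    mine.append(my_move)
    theirs.append(their_move)

# Plain tit-for-tat would echo that one accidental defection back and forth
# forever; with forgiveness, the pair almost always settles back into mutual
# cooperation well before 50 rounds.
print(mine[-5:], theirs[-5:])
```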

A quick word about “trustlessness.” I like the idea, but I’m skeptical about our ability to deliver truly trustless algorithms. Cryptocurrencies, despite being heralded as trustless, are simply not (yet) trustless. 51% attacks aside, double-spending is still a problem for relatively new transactions in which a fork can overtake the main chain—hence the advice to “wait six blocks” before delivering a good or service. Even if all of the (many) cryptocurrency attacks are patched, these transactions are meant to reflect what’s happened in the not-so-trustless physical world. This is partly why multi-sig and transaction mediator systems have emerged. Those mediator businesses indicate a trust issue, in my opinion. So long as the dWeb is used to enhance connectivity among humans, trust is relevant.

Much of the dWeb technology is new, and I cannot say for certain whether full trustlessness is achievable. However, I can say that we humans have evolved to use trust as a tool for expanding our potential via cooperation. And we’ve come a long way. While we’re slowly improving our algorithms of “trustlessness,” I hope some of the evolved moral-social tendencies I’ve outlined can be implemented in our algorithms as well. We humans are pretty savvy about navigating mesh networks. After all, we naturally are one.

Disclaimer: I am building out the decentralized web with ERA, a peer-to-peer database (GUN), identity (Iris), and token (AXE) company.

If you enjoyed this article, you might want to check out my interviews with dWeb and crypto startup founders on YouTube. I love making new friends--please also connect with me on Twitter!