I met Christopher Allen at the Internet Archive.
We recently reconnected to talk about privacy, coercion, and what gets lost when systems scale faster than the values that guide them.
What does privacy mean to you, and why should it still matter in a world that’s traded it for convenience?
I’ve spent decades working in security, cryptography, and digital identity, and over the years, I’ve been pretty deliberate about how I use the word privacy. It’s not just some feature you tack on—it’s a principle. Something foundational.
Back in the early ’90s, I supported export freedom for tools like PGP and co-authored the SSL/TLS standard. But even then, I avoided calling SSL privacy tech, because I felt the word was already doing too much. I preferred to talk about trust, confidentiality, and integrity.
Even now, I’m not a huge fan of the term privacy, at least not on its own. I don’t think it captures what people are trying to protect. I remember going to a privacy conference and realizing how differently people were using the word. Not just folks from different countries, but from different backgrounds, life experiences, genders... Everyone was saying “privacy,” but they weren’t talking about the same thing at all.
I’ve come to see privacy as showing up in four different ways. First, there’s defensive privacy—the kind you hear about most in security circles. Things like protecting your credit card or Ethereum keys from being stolen.
Then there’s human rights privacy, protecting your ability to speak, assemble, and participate economically. When that kind of privacy is compromised, it opens the door to coercion and exclusion.
The third kind—what I call personal privacy—is more about boundaries. The idea that “good fences make good neighbors.” I should be able to do what I want in my own space, on my own terms, so long as it doesn’t harm anyone. That perspective tends to show up more often in Western or libertarian contexts, but not exclusively.
And finally, relational privacy. It’s the need to keep different parts of your life separate: For example, you might not want your professional identity conflated with your role as a mother, or your life as an artist confused with your work as a bookkeeper. Those contexts use different languages, and when they get flattened into one, it causes all kinds of issues.
These categories obviously overlap, but each protects something meaningful: your safety, your freedom, your relationships, and your sense of self. I wrote about this back in 2004, and revisited it about a decade later.
The big problem now is that we’ve normalized surveillance. Convenience is seductive—it’s easy to give away bits of yourself just to get something done faster. But privacy is still one of the last defenses we have against coercion. Without it, dissent becomes dangerous. Creativity starts to shrink. Identity turns into something issued by a system, rather than something we shape ourselves.
That’s why it still matters—maybe more than ever. And it’s something I try to keep at the forefront anytime we talk about digital identity.
Privacy as anti-coercion reminds me of cypherpunk values. What would the cypherpunks of the ’80s and ’90s say about how things are today?
They’d be dismayed—not because the tools don’t exist, but because we’ve compromised on our values.
Cypherpunks believed that strong cryptography could empower individuals over institutions. That was the whole point: use math to create freedom. But today, even in Web3, we often see technologies marketed as “decentralized” while quietly reinforcing centralized control.
Worse, coercion is back in vogue—often justified by threats like terrorism or child safety. We’ve failed to counter the narrative of the so-called “Four Horsemen of the Infocalypse.”
Of course, a lot of the cypherpunks are still around, so you can actively poll them. But I do believe that, given the context of the time, they would certainly be going, “We told you so.”
In hindsight, some might now agree that they also contributed to the problems in various ways. It’s the classic case of the perfect being the enemy of the good. Many were so focused on what they wanted to do that nothing got done. And now I’m seeing people with good intentions making compromises—and I think we’re losing the battle on the other side.
Where do we draw the line? What’s the balance between being pragmatic and sufficiently value-based? This is one of the most important challenges we face. And we’ve gone way too far down the path of “just get it working—we’ll fix it later.” The last 15 or 20 years have shown us: Some of these things are really, really hard to fix after the fact.
So, cypherpunks wouldn't just critique the surveillance state—they’d also call out us technologists for enabling it. We were supposed to resist, not retrofit.
What do you think about the new wave of privacy efforts in Web3, from Devcon’s cypherpunk track to initiatives like Web3PrivacyNow?
I welcome this revival, but remain skeptical about where it’s headed.
I’m not very involved in the Ethereum ecosystem these days, especially the token-centric parts. And that’s mainly because the tokenomics, in many ways, has trumped—well, it has a double meaning now—privacy. The incentives push people to promote tokens, boost visibility, and encourage sharing. All of which run counter to privacy.
I remember having a deep conversation with Vitalik back in 2014, before Ethereum launched. At the time, he approached privacy more from a libertarian perspective. Only more recently has it become something he calls fundamental. But because it wasn’t a priority early on, a lot of today’s problems stem from those early decisions.
I still can’t believe most Ethereum transactions rely on a single public key. You can talk about account abstraction and all that, but we were already saying over a decade ago: don’t reuse the same key. And we haven’t even really begun to unpack the correlation risks. Take what I sometimes call quasi-correlation—what smart contracts you use, what services are involved, what DNS you rely on to access supposedly decentralized infrastructure. All of it leaks context.
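To make the key-reuse point concrete, here is a minimal sketch, in Python with only the standard library, of deriving a distinct key per counterparty from a single seed so that activity in one context is not trivially linkable to another. It is illustrative only: the context names are hypothetical, and a real wallet would use BIP-32/BIP-44 hierarchical derivation and proper secp256k1 keys rather than raw HMAC output.

```python
import hmac
import hashlib
import secrets

# Illustrative only: a real wallet would use BIP-32/BIP-44 derivation and
# secp256k1 keys, not raw HMAC output.
MASTER_SECRET = secrets.token_bytes(32)  # hypothetical wallet seed

def key_for_context(context: str) -> bytes:
    """Derive a distinct signing key per counterparty/context so that
    activity in one context cannot be trivially correlated with another."""
    return hmac.new(MASTER_SECRET, context.encode(), hashlib.sha256).digest()

# Each relationship gets its own key; none of them share an on-chain identifier.
dex_key      = key_for_context("dex.example")
employer_key = key_for_context("payroll.example")
donation_key = key_for_context("donations.example")

assert len({dex_key, employer_key, donation_key}) == 3  # all distinct
```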
Zero-knowledge (ZK) proofs are gaining traction—but so are Trusted Execution Environments (TEEs). What’s your take on both?
I like a lot of the work being done with zero-knowledge proofs. They align with cypherpunk goals—using math to create freedom instead of relying on institutions.
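As a rough illustration of the idea, and not of the SNARK systems used in production, here is a toy Schnorr-style proof of knowledge in Python, made non-interactive with the Fiat-Shamir heuristic: the prover shows it knows a secret exponent without ever revealing it. The parameters are deliberately tiny and insecure; treat the whole thing as a sketch.

```python
import hashlib
import secrets

# Toy parameters for illustration only (far too small to be secure):
# p is a safe prime, q = (p - 1) // 2, and g = 4 generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def prove(x: int):
    """Prover: convince anyone we know x with y = g^x mod p, without revealing x."""
    y = pow(g, x, p)
    r = secrets.randbelow(q)
    t = pow(g, r, p)                                   # commitment
    c = int.from_bytes(hashlib.sha256(f"{g}|{y}|{t}".encode()).digest(), "big") % q
    s = (r + c * x) % q                                # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    c = int.from_bytes(hashlib.sha256(f"{g}|{y}|{t}".encode()).digest(), "big") % q
    return pow(g, s, p) == (t * pow(y, c, p)) % p

secret_x = secrets.randbelow(q)
assert verify(*prove(secret_x))                        # proof checks; x never leaves the prover
```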
Trusted Execution Environments—TEEs—are a different story. I’ve never been a fan. Every time someone says, “Oh, we’ve fixed Intel’s issues,” the problems show up again in some other form. Side-channel attacks, vendor backdoors, lock-in—it’s a long list. In practice, they’re “trusted” in name only, and none are truly open.
Even when people are trying to do the right thing, the hardware stack gets in the way. We’ve hosted four workshops on open, auditable silicon for cryptography, and the same constraints come up every time.
That’s why I lean more toward multi-party computation. We’ve run a number of projects exploring it in practice.
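A minimal sketch of the multi-party computation idea, using additive secret sharing over a prime modulus: three parties learn the sum of their inputs without anyone seeing an individual value. The party names and the modulus are arbitrary choices for illustration.

```python
import secrets

# Three parties compute the sum of their private inputs without any single
# party (or the aggregator) seeing an individual value.
MODULUS = 2**61 - 1  # arbitrary prime modulus for the arithmetic

def share(value: int, n_parties: int = 3):
    """Split a value into n additive shares that sum to it mod MODULUS."""
    shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % MODULUS)
    return shares

private_inputs = {"alice": 42, "bob": 17, "carol": 99}

# Each party shares its input; party i holds one share of every input.
all_shares = {name: share(v) for name, v in private_inputs.items()}

# Each party locally sums the shares it holds, then the partial sums are combined.
partial_sums = [sum(all_shares[name][i] for name in private_inputs) % MODULUS
                for i in range(3)]
total = sum(partial_sums) % MODULUS

assert total == sum(private_inputs.values())  # 158, learned without revealing 42, 17, or 99
```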
Whatever the tooling—ZK, TEE, or anything else—it needs to resist coercion and minimize correlation; otherwise it’s just another chokepoint.
Is this then also a centralization problem?
Definitely—that’s the other side of TEEs. Back when I was VP of Developer Relations at Blackphone, we had a quarter billion in funding, and still couldn’t access custom chip work. Later, I consulted for HTC—then one of the world’s largest phone makers—and even they were limited in what they could do with chip designs.
They found a workaround by embedding a MicroPython interpreter inside the ARM TrustZone to run Trezor code and support secp256k1—but it only shipped on a few devices. Even Apple, with all its power, is constrained by treaty law. These aren’t just corporate secrets—they’re international agreements that restrict access to the chip’s lower layers.
That creates a new kind of gatekeeping. Apple and Google can effectively say, “You want secure digital identity on a phone? Then you do it our way.” I wouldn’t call it coercion in the strongest sense, but it’s absolutely a form of central control. We’ve replaced one kind of trust assumption with another, and not necessarily a better one.
You’ve seen decentralization get co-opted. What are the early warning signs something’s been captured?
I’ve jokingly referred to this as “Allen’s Impossibility Theorem”—kind of like Arrow’s impossibility theorem, but for decentralized systems: sooner or later, some point of central control reappears.
Bitcoin was the first real proof that decentralized consensus could work at scale. Earlier protocols needed supermajorities—two-thirds or more—to reach agreement. Bitcoin showed that a simple majority of hash power was enough, which was pretty amazing. But one of the pillars of Bitcoin is Bitcoin Core—a small group of developers who tightly constrain changes to the codebase. So tightly, in fact, that many argue Bitcoin shouldn’t even be re-implemented in another language. You need to use the exact same C++ code—bugs and all. That creates a kind of centrality. I’d call it a benevolent dictatorship, but it’s a chokepoint nonetheless.
I’ve seen the same dynamic play out in digital identity. Verifiable credentials, for example, usually come from a government or institution—centralized issuers. “You’re allowed to drive” becomes “you’re allowed to board a plane” or “enter a bar.” Suddenly, that one credential is doing a lot more than what it was intended for.
In India, for example, you often can’t rent a room or a car without providing your Aadhaar number. What started as an identity system has become a prerequisite for basic participation in society.
Can we instead decompose these credentials into smaller parts? Proof of age, proof of insurance, proof of a passed driving test—each coming from different sources, instead of all flowing from one authority.
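A small sketch of that decomposition, with hypothetical issuer names and none of the signature, revocation, or zero-knowledge machinery a real system would need: each claim comes from the authority that actually knows it, and the verifier asks for only the one claim it needs.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Credential:
    issuer: str
    claim: str
    value: object

# A holder's wallet of narrowly scoped credentials from different issuers.
wallet = [
    Credential("dmv.example",      "passed_driving_test", True),
    Credential("registry.example", "over_18",             True),
    Credential("insurer.example",  "has_liability_cover", True),
]

def present(wallet, claim_needed: str) -> Credential:
    """Disclose only the single credential a verifier asks for."""
    return next(c for c in wallet if c.claim == claim_needed)

def verify(credential: Credential, trusted_issuers: set, claim_needed: str) -> bool:
    return (credential.issuer in trusted_issuers
            and credential.claim == claim_needed
            and credential.value is True)

# A bar only needs proof of age, not a driver's license, address, or insurance.
bar_policy = {"registry.example"}
assert verify(present(wallet, "over_18"), bar_policy, "over_18")
```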
One promising approach is what Utah’s doing with its digital identity legislation.
But these kinds of innovations are rare. What I often see instead are systems that prioritize shipping over principles. It’s spec-first, values-later. Governance gets captured by insiders. DNS, app stores, revocation, and compliance—all become chokepoints. Control without accountability.
And then there’s over-identification. I remember speaking with regulators in the Netherlands who said, “We have three levels of authentication—so let’s just require the highest for everything. Fingerprints, iris scans, the works.” But that kind of thinking can cause real harm.
The Netherlands was one of the most tolerant countries in Europe pre–World War II, but still had one of the highest Holocaust death rates. Around 75% of Jews were killed there, compared to 25% in France, a historically less tolerant country. Why? In large part, because the Netherlands had highly efficient identity systems. And that efficiency was turned against the very people it was meant to protect.
You shouldn’t need to scan your fingerprint just to report a pothole on the road. When you ask people to over-authenticate for simple civic actions, you're turning public participation into a liability.
And we see new forms of this every day. Platforms like Facebook and Google shut off access without notice. You don’t just lose your profile—you lose your email, your ads, your calendar, your business. There’s no human to talk to. No appeal process. No recourse.
The most dangerous systems are the ones that fail quietly. When you start seeing patterns like opaque governance, centralized control, and no way out—that’s when alarm bells should go off. Because if your architecture doesn’t resist coercion or mitigate correlation, it’s not decentralized. It’s just centralization in disguise.
If you were building a privacy-first system today, what would be non-negotiable in its design?
First is data minimization: ask for, and reveal, only what a specific interaction actually requires.
People tend to underestimate how dangerous stored, linkable data can be. Maybe you used selective disclosure to prove something once—say, you’re over 18 or you live at a certain address. But if the verifier stores that and links it with other disclosures later, your privacy erodes over time. Good systems make it impossible to reveal more than what’s needed.
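A toy illustration of that linkage risk, with hypothetical identifiers: if every presentation carries the same stable subject identifier, a verifier that stores disclosures can join them later, while a fresh pseudonym per presentation leaves nothing to join on. Real systems would use unlinkable proofs (for example, BBS+ style signatures) rather than this naive sketch.

```python
import hashlib
import secrets

holder_id = "did:example:alice"          # stable identifier: the problem
verifier_log = []

def present_linkable(claim: str) -> dict:
    return {"subject": holder_id, "claim": claim}

def present_unlinkable(claim: str) -> dict:
    # A fresh, random pseudonym per presentation: nothing to correlate later.
    pseudonym = hashlib.sha256(secrets.token_bytes(32)).hexdigest()
    return {"subject": pseudonym, "claim": claim}

verifier_log.append(present_linkable("over_18"))
verifier_log.append(present_linkable("lives_in_utrecht"))
# Linkable: both stored records share the same subject, so a profile accumulates.
assert verifier_log[0]["subject"] == verifier_log[1]["subject"]

a = present_unlinkable("over_18")
b = present_unlinkable("lives_in_utrecht")
# Unlinkable: the stored records share nothing that ties them to one person.
assert a["subject"] != b["subject"]
```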
The second is progressive trust: letting relationships start small and deepen over time.
When we met at the Internet Archive, we had a shared context. You could check who I am, read about my work, see if I’m responsive. That doesn’t mean you immediately hand over sensitive information—but it’s enough to take a small risk. That’s how trust forms: in layers, over time, through presence, participation, and repetition.
The third thing is scale. I get wary when someone says, “This is the solution for the world.”
That’s usually when things break. If you look at Elinor Ostrom’s Nobel-winning work, she says the same thing: You need the proper level of subsidiarity in these systems. Otherwise, you get bad effects, and the commons fail.
What if, instead, you built something for a small group? Say, 70 people in a World of Warcraft guild who need more trust to show up for a raid on time. That’s a completely different beast.
That’s the kind of design I care about. Build small. Build intentionally. And design systems that reflect how people actually live and relate to one another.