The Co-Writer of TLS Says We’ve Lost the Privacy Plot

Written by terezabizkova | Published 2025/04/24
Tech Story Tags: open-source | privacy | digital-identity | identity | self-sovereign-identity | web3 | christopher-allen | hackernoon-top-story

TL;DR: Christopher Allen—co-author of TLS 1.0 and a key figure behind the W3C Decentralized Identifier standard—has spent decades shaping the foundations of secure, open digital infrastructure. In this in-depth conversation, he reflects on the evolution of privacy, the compromises that have crept into Web3, and why decentralization isn’t just a technical question—it’s a values one. From cypherpunk origins to modern chokepoints like TEEs and platform control, Allen lays out the early warning signs of capture, the dangers of over-identification, and why systems that don’t resist coercion are just centralization in disguise.

I met Christopher Allen on the steps of the Internet Archive during the FtC conference in San Francisco. He shared stories about early internet culture, and I left wondering—how does someone who’s spent decades building secure, open systems feel about where things are headed now?

Christopher co-authored TLS 1.0, the cryptographic protocol behind HTTPS that still secures most websites and apps today. He also helped develop the W3C Decentralized Identifier (DID) standard, a foundation for digital identity. He’s the founder of Blockchain Commons and a longtime advocate for open standards and human-centric infrastructure.

We recently reconnected to talk about privacy, coercion, and what gets lost when systems scale faster than the values that guide them.

What does privacy mean to you, and why should it still matter in a world that’s traded it for convenience?

I’ve spent decades working in security, cryptography, and digital identity, and over the years, I’ve been pretty deliberate about how I use the word privacy. It’s not just some feature you tack on—it’s a principle. Something foundational.

Back in the early ’90s, I supported export freedom for tools like PGP and co-authored the SSL/TLS standard. But even then, I avoided calling SSL privacy tech, because I felt the word was already doing too much. I preferred to talk about trust, confidentiality, and integrity.

Even now, I’m not a huge fan of the term privacy, at least not on its own. I don’t think it captures what people are trying to protect. I remember going to a privacy conference and realizing how differently people were using the word. Not just folks from different countries, but from different backgrounds, life experiences, genders... Everyone was saying “privacy,” but they weren’t talking about the same thing at all.

I’ve come to see privacy as showing up in four different ways. First, there’s defensive privacy—the kind you hear about most in security circles. Things like protecting your credit card or Ethereum keys from being stolen.

Then there’s human rights privacy, protecting your ability to speak, assemble, and participate economically. When that kind of privacy is compromised, it opens the door to coercion and exclusion.

The third kind—what I call personal privacy—is more about boundaries. The idea that “good fences make good neighbors.” I should be able to do what I want in my own space, on my own terms, so long as it doesn’t harm anyone. That perspective tends to show up more often in Western or libertarian contexts, but not exclusively.

And finally, relational privacy. It’s the need to keep different parts of your life separate: For example, you might not want your professional identity conflated with your role as a mother, or your life as an artist confused with your work as a bookkeeper. Those contexts use different languages, and when they get flattened into one, it causes all kinds of issues.

These categories obviously overlap, but each protects something meaningful: your safety, your freedom, your relationships, and your sense of self. I wrote about this back in 2004, and revisited it about a decade later in The Four Kinds of Privacy.

The big problem now is that we’ve normalized surveillance. Convenience is seductive—it’s easy to give away bits of yourself just to get something done faster. But privacy is still one of the last defenses we have against coercion. Without it, dissent becomes dangerous. Creativity starts to shrink. Identity turns into something issued by a system, rather than something we shape ourselves.

That’s why it still matters—maybe more than ever. And it’s something I try to keep in the forefront anytime we talk about designing systems with values at their core.

Privacy as anti-coercion reminds me of cypherpunk values. What would the cypherpunks of the ’80s and ’90s say about how things are today?

They’d be dismayed—not because the tools don’t exist, but because we’ve compromised on our values.

Cypherpunks believed that strong cryptography could empower individuals over institutions. That was the whole point: use math to create freedom. But today, even in Web3, we often see technologies marketed as “decentralized” while quietly reinforcing centralized control.

Worse, coercion is back in vogue—often justified by threats like terrorism or child safety. We’ve failed to counter the narrative of the so-called “Four Horsemen of the Infocalypse,” and as a result, we’re ceding more and more ground to surveillance and control.

Of course, a lot of the cypherpunks are still around, so you can actively poll them. But I do believe that, given the context of the time, they would certainly be going, “We told you so.”

In hindsight, some might now agree that they also contributed to the problems in various ways. It’s the classic case of the perfect being the enemy of the good. Many were so focused on what they wanted to do that nothing got done. And now I’m seeing people with good intentions making compromises—and I think we’re losing the battle on the other side.

Where do we draw the line? What’s the balance between being pragmatic and sufficiently value-based? This is one of the most important challenges we face. And we’ve gone way too far down the path of “just get it working—we’ll fix it later.” The last 15 or 20 years have shown us: Some of these things are really, really hard to fix after the fact.

So, cypherpunks wouldn’t just critique the surveillance state—they’d also call us technologists out for enabling it. We were supposed to resist, not retrofit.

What do you think about the new wave of privacy efforts in Web3, from Devcon’s cypherpunk track to initiatives like Web3PrivacyNow?

I welcome this revival, but remain skeptical about where it’s headed.

I’m not very involved in the Ethereum ecosystem these days, especially the token-centric parts. And that’s mainly because the tokenomics, in many ways, has trumped—well, it has a double meaning now—privacy. The incentives push people to promote tokens, boost visibility, and encourage sharing. All of which run counter to privacy.

I remember having a deep conversation with Vitalik back in 2014, before Ethereum launched. At the time, he approached privacy more from a libertarian perspective. Only more recently has it become something he calls fundamental. But because it wasn’t a priority early on, a lot of today’s problems stem from those early decisions.

I still can’t believe most Ethereum transactions rely on a single public key. You can talk about account abstraction and all that, but we were already saying over a decade ago: don’t reuse the same key. And we haven’t even really begun to unpack the correlation risks. Take what I sometimes call quasi-correlation—what smart contracts you use, what services are involved, what DNS you rely on to access supposedly decentralized infrastructure. All of it leaks context.
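As a rough illustration of the key-reuse point, here is a minimal Python sketch: one seed, a fresh key and address per transaction, so on-chain activity isn’t trivially linkable through a single public key. The derivation is a toy (an HMAC over an index), not BIP-32 or real Ethereum wallet code.

```python
# Illustrative only: fresh per-transaction keys from one seed, so observers
# can't trivially link activity the way a single reused public key allows.
# This is a toy derivation (HMAC-SHA512 over seed + index), not BIP-32/44,
# and not production wallet code.
import hashlib
import hmac

SEED = b"example-seed-never-hardcode-real-secrets"

def derive_child_secret(seed: bytes, index: int) -> bytes:
    """Derive a distinct secret for each transaction index."""
    return hmac.new(seed, index.to_bytes(4, "big"), hashlib.sha512).digest()[:32]

def toy_address(secret: bytes) -> str:
    """Stand-in for a public address derived from the secret."""
    return hashlib.sha256(secret).hexdigest()[:40]

# Each payment gets its own address; none of them share an obvious on-chain link.
for i in range(3):
    print(f"tx {i}: {toy_address(derive_child_secret(SEED, i))}")
```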

Zero-knowledge (ZK) proofs are gaining traction—but so are Trusted Execution Environments (TEEs). What’s your take on both?

I like a lot of the work being done with zero-knowledge proofs. They align with cypherpunk goals—selective disclosure and data minimization, less data exposure, more verifiability. But many of the newer implementations feel complex and opaque. They’re hard to audit. In some cases, we’re seeing new trusted setups or gatekeepers that aren’t getting enough scrutiny.
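To make the selective-disclosure idea concrete, here is a toy, non-interactive Schnorr proof in Python: the prover convinces a verifier they know a secret exponent without ever revealing it. The tiny parameters are for illustration only and are not secure; production ZK systems are far more involved.

```python
# Toy, non-interactive Schnorr proof (Fiat-Shamir): prove you know a secret x
# with y = G^x mod P, while the proof itself reveals nothing beyond that fact.
# The parameters below are deliberately tiny and INSECURE: an illustration of
# the "prove without disclosing" idea, not a real ZK system.
import hashlib
import secrets

P = 2039   # small safe prime: P = 2*Q + 1
Q = 1019   # prime order of the subgroup generated by G
G = 4      # generator of the order-Q subgroup

def hash_challenge(*values: int) -> int:
    data = b"|".join(str(v).encode() for v in values)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(x: int) -> tuple[int, int, int]:
    """Prover: show knowledge of x for y = G^x mod P without revealing x."""
    y = pow(G, x, P)
    k = secrets.randbelow(Q)        # one-time nonce
    t = pow(G, k, P)                # commitment
    c = hash_challenge(G, y, t)     # Fiat-Shamir challenge
    s = (k + c * x) % Q             # response
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: checks G^s == t * y^c mod P, learning nothing about x."""
    c = hash_challenge(G, y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P

secret_x = secrets.randbelow(Q)
print(verify(*prove(secret_x)))     # True, yet secret_x was never disclosed
```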

Trusted Execution Environments—TEEs—are a different story. I’ve never been a fan. Every time someone says, “Oh, we’ve fixed Intel’s issues,” the problems show up again in some other form. Side-channel attacks, vendor backdoors, lock-in—it’s a long list. In practice, they’re “trusted” in name only, and none are truly open.

Even when people are trying to do the right thing, the hardware stack gets in the way. We’ve hosted four Silicon Salons to dig into this. Many smart folks are working on it, but they still hit the same wall: you can open parts of the chip, sure—but once you get down to the substrate, it’s all proprietary. Unless you own the fabrication plant (and even then, you're bound by contracts and patents), you’re stuck with trust assumptions you can’t verify.

That’s why I lean more toward multi-party computation. We’ve run a number of workshops around FROST, which shifts the trust model away from any single machine. Sure, you could do a side-channel attack on one quorum member—but then you’d have to do the same for all the others. It raises the bar. It’s not perfect, but it’s a step in the right direction.
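For readers unfamiliar with the threshold model, the sketch below uses plain Shamir secret sharing as a stand-in for the idea FROST builds on: key material is split so that no single machine is a single point of compromise. It is not FROST itself, just an illustration of why an attacker would need a full quorum.

```python
# Not FROST itself, just a minimal Shamir secret-sharing sketch of the
# threshold idea FROST builds on: the secret never lives on one machine,
# and an attacker must compromise a full quorum, not a single device.
import secrets

PRIME = 2**127 - 1  # a Mersenne prime, used as the field modulus

def split_secret(secret: int, threshold: int, shares: int) -> list[tuple[int, int]]:
    """Split `secret` into `shares` pieces; any `threshold` of them recover it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(threshold - 1)]
    def f(x: int) -> int:
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, shares + 1)]

def recover_secret(points: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over the prime field."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

secret = secrets.randbelow(PRIME)
shares = split_secret(secret, threshold=3, shares=5)
assert recover_secret(shares[:3]) == secret   # any 3 of 5 shares suffice
assert recover_secret(shares[2:]) == secret
```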

Whatever the tooling—ZK, TEE, or anything else—it needs to serve deeper principles: data minimization, least necessary access, progressive trust, and selective disclosure. Privacy isn’t just about what you prove. It’s also about what you can withhold.

Is this then also a centralization problem?

Definitely—that’s the other side of TEEs. Back when I was VP of Developer Relations at Blackphone, we had a quarter billion in funding, and still couldn’t access custom chip work. Later, I consulted for HTC—then one of the world’s largest phone makers—and even they were limited in what they could do with chip designs.

They found a workaround by embedding a MicroPython interpreter inside the trust zone to run Trezor code and support SecP—but it only shipped on a few devices. Even Apple, with all its power, is constrained by treaty law. These aren’t just corporate secrets—they’re international agreements that restrict access to the chip’s lower layers.

That creates a new kind of gatekeeping. Apple and Google can effectively say, “You want secure digital identity on a phone? Then you do it our way.” I wouldn’t call it coercion in the strongest sense, but it’s absolutely a form of central control. We’ve replaced one kind of trust assumption with another, and not necessarily a better one.

You’ve seen decentralization get co-opted. What are the early warning signs something’s been captured?

I’ve jokingly referred to this as “Allen’s Impossibility Theorem”—kind of like Arrow’s Theorem in voting systems. Arrow showed there’s no perfect voting system because the values you’re optimizing for eventually start to conflict. I think the same applies to decentralization. Every architecture makes trade-offs, and if you’re not honest about those trade-offs, they quietly pull the system away from its values.

Bitcoin was the first real proof that decentralized consensus could work at scale. Earlier protocols needed supermajorities—two-thirds or more—to reach agreement. Bitcoin showed that a simple majority of hash power was enough, which was pretty amazing. But one of the pillars of Bitcoin is Bitcoin Core—a small group of developers who tightly constrain changes to the codebase. So tightly, in fact, that many argue Bitcoin shouldn’t even be re-implemented in another language. You need to use the exact same C code—bugs and all. That creates a kind of centrality. I’d call it a benevolent dictatorship, but it’s a chokepoint nonetheless.

I’ve seen the same dynamic play out in digital identity. Verifiable credentials, for example, usually come from a government or institution—centralized issuers. “You’re allowed to drive” becomes “you’re allowed to board a plane” or “enter a bar.” Suddenly, that one credential is doing a lot more than what it was intended for.

In India, for example, you often can’t rent a room or a car without providing your Aadhaar number. What started as an identity system has become a prerequisite for basic participation in society.

Can we instead decompose these credentials into smaller parts? Proof of age, proof of insurance, proof of a passed driving test—each coming from different sources, instead of all flowing from one authority.
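A hypothetical sketch of what that decomposition might look like in practice: three narrow claims from three different issuers, with a verifier requesting only what it needs. The data shapes and HMAC "signatures" here are placeholders, not a real verifiable-credential or DID implementation.

```python
# Illustrative sketch of decomposed credentials: three narrow claims from three
# different issuers, instead of one all-purpose identity document. The HMAC
# "signatures" and issuer keys are placeholders, not a real VC/DID stack.
import hashlib
import hmac
import json

ISSUER_KEYS = {                      # hypothetical issuer signing keys
    "dmv.example":      b"dmv-secret",
    "insurer.example":  b"insurer-secret",
    "registry.example": b"registry-secret",
}

def issue(issuer: str, claim: dict) -> dict:
    """Each issuer signs only the narrow claim it is authoritative for."""
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(ISSUER_KEYS[issuer], payload, hashlib.sha256).hexdigest()
    return {"issuer": issuer, "claim": claim, "sig": sig}

credentials = [
    issue("registry.example", {"over_18": True}),
    issue("dmv.example",      {"passed_driving_test": True}),
    issue("insurer.example",  {"insured": True}),
]

# A car-rental verifier asks only for the claims it actually needs,
# and no single authority sits behind all of them.
needed = {"over_18", "passed_driving_test", "insured"}
presented = [c for c in credentials if set(c["claim"]) & needed]
print(json.dumps(presented, indent=2))
```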

One promising approach is what Utah’s doing with its new digital identity law. They’re not issuing identities themselves. Instead, they’ve set requirements and allow third parties to apply for an endorsement. If the solution meets their criteria, the state accepts it. That creates more diversity in how identity is structured, and less reliance on a single issuer.

But these kinds of innovations are rare. What I often see instead are systems that prioritize shipping over principles. It’s spec-first, values-later. Governance gets captured by insiders. DNS, app stores, revocation, and compliance—all become chokepoints. Control without accountability.

And then there’s over-identification. I remember speaking with regulators in the Netherlands who said, “We have three levels of authentication—so let’s just require the highest for everything. Fingerprints, iris scans, the works.” But that kind of thinking can cause real harm.

The Netherlands was one of the most tolerant countries in Europe pre–World War II, but still had one of the highest Holocaust death rates. Around 75% of Jews were killed there, compared to 25% in France, a historically less tolerant country. Why? In large part, because the Netherlands had highly efficient identity systems. And that efficiency was weaponized.

You shouldn’t need to scan your fingerprint just to report a pothole on the road. When you ask people to over-authenticate for simple civic actions, you're turning public participation into a liability.

And we see new forms of this every day. Platforms like Facebook and Google shut off access without notice. You don’t just lose your profile—you lose your email, your ads, your calendar, your business. There’s no human to talk to. No appeal process. No recourse.

The most dangerous systems are the ones that fail quietly. When you start seeing patterns like opaque governance, centralized control, and no way out—that’s when alarm bells should go off. Because if your architecture doesn’t resist coercion or mitigate correlation, it’s not decentralized. It’s just centralization in disguise.

If you were building a privacy-first system today, what would be non-negotiable in its design?

First is data minimization: least privilege, least access, and the idea that access should be deniable. I should be able to say, “I’m not giving you this information.” In fact, there should be social penalties for asking for too much.

People tend to underestimate how dangerous stored, linkable data can be. Maybe you used selective disclosure to prove something once—say, you’re over 18 or you live at a certain address. But if the verifier stores that and links it with other disclosures later, your privacy erodes over time. Good systems make it impossible to reveal more than what’s needed.
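A small toy example of that linkability risk: if every disclosure carries the same stable identifier, any verifier or data broker that stores the records can join them later into a profile, while a fresh pseudonym per presentation breaks that join. This is a simplified illustration, not a description of any particular credential system.

```python
# Toy illustration of why stored, linkable disclosures erode privacy: the same
# stable identifier across verifiers lets records be joined later, while a
# fresh per-presentation pseudonym keeps each disclosure on its own.
import secrets
from collections import defaultdict

stable_id = "user-42"   # reused identifier (the failure mode)

def present(identifier: str, claim: str) -> dict:
    return {"subject": identifier, "claim": claim}

# Verifiers store what they saw.
bar_log    = [present(stable_id, "over_18")]
lender_log = [present(stable_id, "address: 12 Main St")]

# Anyone who later merges the logs rebuilds a profile the user never agreed to.
profile = defaultdict(list)
for record in bar_log + lender_log:
    profile[record["subject"]].append(record["claim"])
print(dict(profile))   # {'user-42': ['over_18', 'address: 12 Main St']}

# With a fresh pseudonym per presentation, the same merge links nothing.
unlinkable = [present(secrets.token_hex(8), "over_18"),
              present(secrets.token_hex(8), "address: 12 Main St")]
```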

The second is progressive trust. And even then, I’m not a fan of the word “trust”—because it implies “I trust you” or “I don’t.” Binary. But that’s not how human trust works.

When we met at the Internet Archive, we had a shared context. You could check who I am, read about my work, see if I’m responsive. That doesn’t mean you immediately hand over sensitive information—but it’s enough to take a small risk. That’s how trust forms: in layers, over time, through presence, participation, and repetition.

The third thing is scale. I get wary when someone says, “This is the solution for the world.”

That’s usually when things break. If you look at Elinor Ostrom’s Nobel-winning work, she says the same thing: You need the proper level of subsidiarity in these systems. Otherwise, you get bad effects, and the commons fail.

What if, instead, you built something for a small group? Say, 70 people in a World of Warcraft guild who need more trust to show up for a raid on time. That’s a completely different beast.

That’s the kind of design I care about. Build small. Build intentionally. And design systems that reflect how people actually live and relate to one another.

