Part 1 of this series left off with the question of whether we can verify identity without storing the proof in recoverable form.
Part 2 of this series left us with authentication working—passwords hashed, systems hardened, privileges separated. Users could log in from different terminals. Trust was local and centralized: one system, one administrator, one password file.
Then we connected the systems together.
Part 3 is the story of how we spent the 1980s and 90s trying to recreate centuries of social technology in mathematics, and discovered that bits aren’t wax, trust doesn’t scale, and humans will always route around friction like water around stone.
The Network Era
Networks didn’t create a new problem. They digitized an ancient one.
For millennia, humans faced the same challenge: How do you prove identity to strangers at a distance? The earliest documented solution dates to approximately 450 BC, when Nehemiah, an official serving King Artaxerxes I of Persia, needed to travel to Judea. The king gave him letters "to the governors beyond the river" requesting safe passage through their lands. This is the first recorded credential—a document from a trusted authority, verifying identity to strangers who had no other way to know the bearer.
The pattern repeated across civilizations. Ancient China’s guosuo system required travelers to carry permits through checkpoints along the Silk Road. The Roman Empire issued tractorium letters in the emperor’s name. Medieval Islamic caliphates used bara’a tax receipts as proof of legitimate status. The 18th-century traveler arriving in a distant city presented sealed letters of introduction: "Lord So-and-So vouches for this person." The merchant trusts Lord So-and-So, therefore trusts the traveler.
This is transitive trust—the foundation of identity-at-distance. Seals that couldn’t be forged, signatures that proved authorship, credential chains where trust propagated through known relationships. Humanity solved authentication across distance through social technology developed over 2,500 years.
What networks changed wasn’t the ability to log in from different places—time-sharing already allowed that. What changed was the need to authenticate across administrative boundaries. Not different terminals on the same system, but different systems with different administrators, different trust domains, different authority structures.
Before networks: You authenticate to a system that holds your credentials. One authority, one password file, one trust domain.
After networks: You authenticate to a system that doesn’t hold your credentials, or you need credentials that work across multiple systems with different authorities. Like showing an ID issued by one country to authorities in another country—they have to trust the issuer.
The question isn’t "can you log in from different places?" (yes, always could). The question is: "How do strangers verify each other across distances when perfect forgery costs nothing and communication is instant?"
Physical world solutions relied on properties that don’t transfer to digital: Documents that degrade with copying. Seals that visibly break when tampered with. Single instances that can’t be in two places. Human judgment about suspicious behavior.
Digital changes everything. Perfect copies cost nothing. Transmission is instant. Bits are indistinguishable from originals. No degradation, no wear, no physical constraints.
We spent the 1980s and 90s teaching computers the social technology humanity developed over centuries. We discovered that the medium matters more than we thought.
Networks and the Trust Distance Problem
The ARPANET Assumption
ARPANET, the network that became the internet, was built by researchers for researchers who knew each other. TCP/IP—the protocol suite that defines internet communication—is elegant, simple, and stateless. It’s also missing an authentication layer.
This wasn’t an oversight. It was a feature.
In the early 1980s, "the internet" meant dozens of research institutions and hundreds of users. Everyone sort of knew everyone. Academic norms enforced good behavior. Reputation mattered. It was a small town where trust was the default and violations were personal.
The protocols assumed this. Trust was embedded in design, not enforced by mechanism. Like designing a city with no locks because "we’re all friends here."
This works until strangers arrive.
Sun NFS: The Trust Disaster
In 1984, Sun Microsystems released NFS—Network File System. The goal was elegant: share files across networks as easily as across local disks. Remote directories should look exactly like local directories. Simple. Revolutionary. Useful.
The authentication model was... optimistic.
NFS trusted user IDs across systems. If you’re UID 501 on System A, you’re UID 501 on System B. No verification. No credentials. Just trust. The system assumed all connected machines were equally trustworthy, administered by equally responsible people, connected to equally secure networks.
To be fair, NFS was designed for trusted environments—a university department, a corporate lab where everyone knew each other. The problem came when it was deployed more broadly, across environments where that trust didn’t exist.
The result was obvious immediately: change your local UID, become anyone to remote servers.
Identity reduced to absurdity—a 16-bit number you controlled yourself. UID spoofing became trivial. Like travelers writing their own letters of introduction and everyone accepting them at face value.
Why did this happen? The design philosophy valued simplicity and performance. Stateless operation (like HTTP would later). No need for servers to track client state. No overhead for verification. Just trust that systems connecting to you are honest.
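To make the failure concrete, here is a toy sketch in Python (nothing like the real NFS wire protocol) of what this AUTH_UNIX-style trust amounts to: the request carries whatever UID the client chooses to assert, and the server's only "check" is comparing that number against the file's owner. The names and structures below are hypothetical.

```python
# Toy model of AUTH_UNIX-style trust: the server believes whatever UID
# the client claims, because there is no credential to verify.
from dataclasses import dataclass

@dataclass
class NfsRequest:
    uid: int          # asserted by the client, never verified
    gid: int
    path: str

def server_read(request: NfsRequest, file_owner_uid: int) -> str:
    # "Authentication" is just an equality check on a client-supplied number.
    if request.uid == file_owner_uid:
        return f"contents of {request.path}"
    raise PermissionError("access denied")

# An attacker doesn't need the victim's password, only their UID.
print(server_read(NfsRequest(uid=501, gid=20, path="/home/alice/notes"),
                  file_owner_uid=501))
```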
The consequences were widespread: data breaches, no accountability, anyone could be anyone. NFS became a textbook example of elegant design meeting terrible security.
But it revealed something fundamental: authentication worked when trust was local. Networks made trust distant, and distance broke everything.
The question wasn’t just technical anymore. It was architectural. How do systems verify each other’s claims? How do you trust remote authentication when you can’t verify the authenticator?
The Morris Worm: When Trust Shattered
November 2, 1988. Robert Tappan Morris—son of Robert Morris, a renowned cryptographer who had worked at Bell Labs and was by then chief scientist at the NSA’s National Computer Security Center—released a worm to measure the internet’s size. A bug caused it to spread uncontrollably. Within hours, it compromised over 6,000 computers. That was roughly 10% of the internet.
"There may be a virus loose on the internet." — Andy Sudduth of Harvard, 34 minutes after midnight, November 3, 1988
📺 For a first-hand account of the chaos, see Computerphile’s "The First Internet Worm (Morris Worm)" featuring Dr. Julian Onions, who was there when it happened.
The attack vectors were all trust-based. The worm exploited UNIX utilities—sendmail, finger, rsh, rexec—that assumed trusted networks. It stole password files that were often weakly hashed. It cracked weak passwords and spread via trusted connections.
The small town had become a big city, but everyone was still acting like everyone was friends.
The realization hit hard: trust assumptions failed catastrophically at scale. The network itself was an attack vector. You couldn’t assume anything about remote systems. You needed authentication that worked across hostile networks, with untrusted intermediaries, when you couldn’t trust the other endpoint.
You needed to verify identity when the only thing you knew for certain was that you couldn’t trust anything.
Recreating Letters of Introduction in Mathematics
The Morris Worm was a wake-up call, but the underlying problem had been recognized years earlier. How do you build systems that work across networks where you can’t trust the machines, the connections, or the people at the other end?
The answer, it turned out, was to go back to basics—to the oldest identity technology humans had ever developed. Letters of introduction. Sealed credentials. Trusted intermediaries who could vouch for strangers. The physical world had solved this problem centuries ago. The question was whether those solutions could be translated into mathematics.
Several groups were already working on it. But the most influential work came from a university that had a pressing practical need: MIT had thousands of students, hundreds of workstations scattered across campus, and a vision of computing where your environment followed you everywhere. They couldn’t wait for the industry to figure it out. They had to build it themselves.
Kerberos and Hesiod: Authentication Meets Directory
MIT’s Project Athena was building something ambitious: a distributed computing environment where students could sit down at any workstation on campus and have their entire environment—files, settings, everything—follow them. As a side note, when I first started working at MIT decades later, one of my team’s responsibilities was maintaining the Athena clusters across campus. NFS’s security nightmare made this terrifying. Anyone could be anyone. Files were unprotected. The question became urgent: How do we verify identity across untrusted systems?
But authentication was only half the problem. Even if you could prove who someone was, you still needed to know *what* they could access. Where was their home directory? What groups did they belong to? What printers could they use? This information had to live somewhere accessible to every machine on the network.
Project Athena solved both problems together, with two interlocking systems:

[Image: Kerberos protocol logo (source: Wikipedia)]
Kerberos handled authentication—proving you are who you claim to be. Named for the three-headed dog guarding the gates of Hades, it was developed through the late 1980s, with a prototype in production by September 1986 and its first public release (Version 4) in January 1989.
Hesiod handled directory lookup—finding information about users, groups, and services. Named for the Greek poet who cataloged the gods, it stored identity information as DNS records, allowing any machine to look up a user’s home directory, group memberships, or mail server.
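What a Hesiod-style lookup looks like from a client's point of view can be sketched in a few lines, assuming identity data published as DNS TXT records (the original deployment used the special HS DNS class). The domain name, record layout, and use of the third-party dnspython package here are illustrative assumptions, not MIT's actual configuration.

```python
# Hypothetical Hesiod-style lookup: identity data published as DNS TXT records.
# Requires the third-party dnspython package; domain and layout are illustrative.
import dns.resolver  # pip install dnspython

def hesiod_passwd(username: str, hesiod_domain: str = "ns.athena.example.edu") -> dict:
    answer = dns.resolver.resolve(f"{username}.passwd.{hesiod_domain}", "TXT")
    # A passwd record mirrors /etc/passwd: login:x:uid:gid:gecos:home:shell
    fields = answer[0].strings[0].decode().split(":")
    return {"login": fields[0], "uid": int(fields[2]),
            "home": fields[5], "shell": fields[6]}

# print(hesiod_passwd("jdoe"))  # would resolve against a real Hesiod zone
```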
They worked in concert: When you logged into an Athena workstation, Kerberos authenticated you to the Key Distribution Center. Once authenticated, the system consulted Hesiod to find your home directory location, then used Kerberos again to authenticate to the file server hosting your files. Single password, multiple services, your environment following you everywhere.
The Kerberos model was elegant. Three parties: User, Service, and Key Distribution Center (KDC). The user proves identity to the KDC once. The KDC issues a time-limited "ticket"—like a sealed letter of introduction. The user presents this ticket to services. Services verify the ticket came from the KDC.
The analogy maps cleanly: A trusted lord introduces you to merchants across the city. The merchants trust the lord’s seal. You only prove yourself to the lord once, then carry his introduction to everyone else.
The ticket is a digital letter. Encrypted by the KDC’s secret key—like a wax seal. Contains your identity and expiration time. Can’t be forged without knowing the KDC’s secret. Time-limited, like letters going stale. Single sign-on: authenticate once, access many services.
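A drastically simplified sketch of the ticket idea, using symmetric authenticated encryption from Python's cryptography package as the "wax seal." Real Kerberos adds session keys, authenticators, and mutual authentication; the names and lifetime below are assumptions for illustration only.

```python
# A drastically simplified ticket model. Real Kerberos adds session keys,
# authenticators, and mutual authentication. Requires the 'cryptography' package.
import json, time
from cryptography.fernet import Fernet

service_key = Fernet.generate_key()     # shared between the KDC and the file server

def kdc_issue_ticket(username: str, lifetime_s: int = 8 * 3600) -> bytes:
    ticket = {"principal": username, "expires": time.time() + lifetime_s}
    # Encrypting under the service's key is the digital wax seal:
    # only the KDC can create it, only the service can open it.
    return Fernet(service_key).encrypt(json.dumps(ticket).encode())

def service_verify_ticket(sealed: bytes) -> str:
    ticket = json.loads(Fernet(service_key).decrypt(sealed))
    if time.time() > ticket["expires"]:
        raise PermissionError("ticket expired")
    return ticket["principal"]

sealed = kdc_issue_ticket("alice")
print(service_verify_ticket(sealed))    # "alice", without her password ever reaching the service
```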
This combination—Kerberos for authentication, Hesiod for directory—worked beautifully within MIT’s campus. By November 1988, the majority of undergraduates had used the Athena workstations. The architecture proved that centralized identity management could work at scale, within a single administrative domain.
The Boundary Problem
But Athena’s success contained the seeds of its limitation. The system worked because everyone—every workstation, every server, every user—agreed to trust the same KDC. The KDC was the single source of truth for "who is allowed here."
What happens when you need to cross organizational boundaries?
Kerberos supports "cross-realm trust"—two Kerberos realms can agree to honor each other’s tickets. Realm A trusts Realm B, so users from B can access services in A. But establishing this trust requires administrators from both realms to coordinate: they must share a secret key between their KDCs. This works between, say, two departments at the same university. It becomes impractical between thousands of organizations that have no prior relationship.
The problems were both practical and philosophical:
Practically: Kerberos required synchronized clocks across all systems (hard in the 1980s), trusted infrastructure (if someone compromises the KDC, they compromise everything), and cooperative services (every service must implement Kerberos support). Cross-realm trust required manual configuration between every pair of realms that wanted to interoperate.
Philosophically: it required agreement on who to trust. Works within kingdoms. Not between kingdoms. You can’t establish cross-realm trust with a stranger—and on the internet, everyone is a stranger.
Kerberos established a pattern we’d see repeatedly: elegant solution, narrow deployment. Perfect for the problem it was designed to solve (campus-wide single sign-on), but unable to scale to the problem we actually faced (internet-wide identity).
Directory Services: Scaling the Phone Book
Athena’s Hesiod was clever—using DNS as a lightweight directory—but limited. It could store basic user information, but not complex relationships or rich attributes. As networks grew beyond single campuses, organizations needed more sophisticated ways to manage identity information.
The core problem: early networks used flat files. NetWare’s "bindery" was essentially a server-local database listing users and resources for that one machine. Simple. But as networks grew, administrators faced a nightmare: add a user to 50 servers, update 50 separate files. No relationships, no hierarchy, no enterprise-wide view.
Banyan VINES and StreetTalk (1984)
Banyan Systems shipped the first practical enterprise directory service in 1984. StreetTalk was revolutionary—a globally distributed, replicated database that could span continents. Users were named in a three-level hierarchy: user@group@organization. Log in once, access resources anywhere on the network.
The U.S. State Department adopted StreetTalk to link embassies worldwide. Oil companies, utilities, the Department of Defense—organizations that grasped the value of global identity management became early adopters. StreetTalk proved that directory services could work at enterprise scale.
But Banyan was a technology company, not a sales company. For a decade, Novell and Microsoft dismissed directory services as unnecessary complexity. By the time they caught on, Banyan had lost its window.
X.500: The International Standard (1988)
The ITU (International Telecommunication Union) and ISO formalized directory services in the X.500 standard, approved in 1988. X.500 envisioned a global directory—a phone book for the entire networked world. Hierarchical naming: country, organization, organizational unit, person. Distinguished Names like cn=John Smith, ou=Engineering, o=Acme Corp, c=US.
X.500 was comprehensive, elegant, and nearly impossible to deploy. It ran on the OSI protocol stack (not TCP/IP), required heavyweight servers, and involved complex protocols like DAP (Directory Access Protocol). Few organizations could justify the infrastructure.
But X.500’s concepts proved durable: hierarchical trees, distinguished names, schemas defining object types and attributes. These ideas would outlive the original implementation.
Novell Directory Services (1993)
Novell shipped NDS with NetWare 4.0 in 1993, replacing the flat bindery with a true directory service based on X.500 concepts. NDS organized networks into hierarchical trees. Users authenticated to the directory, not individual servers. Single sign-on for the enterprise.
For the mid-1990s, NDS was arguably the most sophisticated directory service available to typical organizations. It gave Novell a six-year head start over Microsoft’s Active Directory (which wouldn’t ship until Windows 2000). As a side note, one of my student worker jobs in college was to maintain some computers that used NDS.
LDAP: Making Directories Accessible (1993)
The breakthrough came from the University of Michigan. Tim Howes, Steve Kille, and Wengyik Yeong published RFC 1487 in July 1993, defining the Lightweight Directory Access Protocol.
LDAP was exactly what its name promised: a lightweight way to access X.500-style directories. It ran over TCP/IP instead of OSI. It simplified the complex DAP protocol into something implementable on ordinary computers. Desktop machines could finally query enterprise directories.
The University of Michigan released the first native LDAP server (SLAPD) in 1995. By 1997, LDAP v3 had become an Internet standard. Netscape, Novell, Oracle, and eventually Microsoft all adopted it. LDAP became the lingua franca of directory services—the protocol that made enterprise identity management practical.
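What "querying the directory" looks like in practice can be sketched with the third-party ldap3 library; the server name, bind DN, and attributes below are placeholders, not a real deployment.

```python
# A sketch of a directory lookup with the third-party ldap3 library.
# Server, credentials, base DN, and attributes are placeholders.
from ldap3 import Server, Connection, ALL

server = Server("ldap.example.com", get_info=ALL)
conn = Connection(server, user="cn=reader,dc=example,dc=com",
                  password="secret", auto_bind=True)

# "Who is jsmith, and where is their home directory?"
conn.search(
    search_base="dc=example,dc=com",
    search_filter="(uid=jsmith)",
    attributes=["cn", "homeDirectory", "memberOf"],
)
for entry in conn.entries:
    print(entry.cn, entry.homeDirectory)
```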
The Enterprise Identity Stack
While MIT was building Kerberos, Microsoft was solving the same problem differently. Windows NT (released in 1993) introduced NTLM—NT LAN Manager—a proprietary authentication protocol using challenge-response. When you logged into a Windows NT domain, the server sent a random challenge; your computer hashed your password with that challenge and sent back the result. The server, which knew your password hash, verified the response.
NTLM worked, but it had problems. The password hashes were vulnerable—an attacker who stole them could authenticate without knowing the actual password (the "pass-the-hash" attack, documented by 1997). No mutual authentication—you proved yourself to the server, but the server didn’t prove itself to you. And it was proprietary—Microsoft-only, limiting interoperability.
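The challenge-response idea itself is simple. Here is a minimal sketch using HMAC-SHA-256 (deliberately not the real NTLM algorithm, which uses older primitives) that also shows why pass-the-hash works: the stored hash is the only secret either side actually uses.

```python
# Generic challenge-response, simplified. Not the actual NTLM algorithm.
# Both sides know a hash of the password; the password never crosses the wire.
import hashlib, hmac, os

def password_hash(password: str) -> bytes:
    return hashlib.sha256(password.encode()).digest()   # stand-in for the stored hash

# Server side: issue a random challenge
challenge = os.urandom(16)

# Client side: prove knowledge of the hash by mixing it with the challenge
client_response = hmac.new(password_hash("hunter2"), challenge, hashlib.sha256).digest()

# Server side: recompute from the stored hash and compare
expected = hmac.new(password_hash("hunter2"), challenge, hashlib.sha256).digest()
print(hmac.compare_digest(client_response, expected))   # True

# The flaw described above: anyone holding the stored hash can compute the
# response, so stealing the hash is as good as stealing the password.
```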
By the late 1990s, Microsoft made a strategic decision: adopt Kerberos. Windows 2000’s Active Directory combined the best of both worlds: Kerberos for authentication (replacing NTLM as the default), LDAP for directory access, DNS for service location. Log into a Windows domain once, and you could access file servers, email, printers, applications—all without re-entering credentials. The directory knew who you were, what groups you belonged to, what you could access.
This was the fulfillment of the Athena vision, productized for the enterprise. It worked—spectacularly well—within organizational boundaries. Active Directory became the backbone of corporate identity, and remains so today.
(NTLM didn’t disappear—it remained as a fallback for legacy systems and workgroup authentication. Decades later, it’s still lurking in enterprise networks, still vulnerable, still a target for attackers.)
But the boundary problem persisted.
Active Directory domains can establish trusts with other domains, just like Kerberos realms. Within a single company, you can have multiple domains that trust each other—a "forest" of interconnected directories. Users in one domain can access resources in another.
Between companies? Much harder. You need formal agreements, shared secrets, coordinated administration. The trust is technically possible but organizationally impractical. You can federate a few close partners. You cannot federate thousands of strangers.
The Pattern Crystallizes
By the mid-1990s, enterprise identity had a clear architecture:
- Directory services (LDAP): Store identity information—who exists, what groups they belong to, what attributes they have
- Authentication (Kerberos): Verify identity—prove that someone is who they claim to be
- Authorization: Determine access—given a verified identity, what can they do?
These worked together within organizations. The directory stored the identity data. Kerberos verified users against that data. Applications checked the directory for permissions.
But all of this required everyone to be part of the same system. Same directory. Same KDC. Same administrative domain. Same trust boundary.
The internet was none of those things.
The deeper problem remained: both Kerberos and directory-based authentication relied on shared secrets. The KDC and services shared symmetric keys. Directories stored password hashes. You had to establish trust before you could verify identity—but how do you establish trust with a stranger?
The chicken-and-egg problem seemed insoluble. Until mathematics offered an escape.
The Cryptographic Revolution
Public Key Cryptography: The Mathematical Miracle
The invention happened in the 1970s. Whitfield Diffie and Martin Hellman published their key exchange protocol in 1976. Ron Rivest, Adi Shamir, and Leonard Adleman published RSA in 1977. Both were mathematical breakthroughs: you could prove knowledge of a secret without revealing the secret—and without the verifier needing to know it in advance.
But knowing something is possible and deploying it at scale are different problems.
By the late 1980s, computers were finally fast enough to make public key crypto practical. Better implementations, optimized algorithms, careful engineering. Still expensive, but feasible.
For identity, this changed everything.
Two keys: one public (anyone can have it), one private (only you have it). You can verify my identity using my public key. But only I can prove I’m me using my private key. The verifier doesn’t need my secret. They just need my public key, which I can broadcast to the world.
The physical analogy: a seal on a letter of introduction. Anyone can verify the seal is authentic—just look at the wax, check the crest matches. But only the person with the signet ring can create that seal.
Digital signatures work the same way, just with mathematics instead of wax. I sign a message with my private key. You verify it with my public key. If the verification succeeds, you know I signed it—nobody else could have, because nobody else has my private key.
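A minimal sketch of that sign-and-verify exchange, using Ed25519 keys from Python's cryptography package; the message and key handling are illustrative only.

```python
# Sign with the private key, verify with the public key.
# Requires the 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()      # stays with me
public_key = private_key.public_key()           # can be broadcast to the world

message = b"I, Alice, authorize this transfer."
signature = private_key.sign(message)

try:
    public_key.verify(signature, message)       # raises if forged or tampered with
    print("Signature valid: only the private-key holder could have produced it.")
except InvalidSignature:
    print("Forgery or tampering detected.")
```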
This solved the chicken-and-egg problem. No pre-shared secrets needed. No trusted third party required for the cryptographic verification itself. Strangers could verify each other’s identity through mathematics alone.
But a new question emerged: whose public key is this, really?
I can verify a signature matches a public key. But how do I know that public key belongs to the person I think it does? The math is perfect; the human problem remains.
This question—how to bind public keys to real-world identities—would become the central challenge of the next decade. But before we could answer it, the world changed.
The Web Changes Everything
From Research Network to Global Platform
In 1991, Tim Berners-Lee released the World Wide Web. What had been a network of universities and research labs was about to become something else entirely.
The timing matters. The enterprise identity systems we just discussed—Kerberos, LDAP, Active Directory—were designed for closed environments. Everyone in the same organization. Administrators who could configure every machine. Users who had been vetted and given accounts. Trust boundaries that matched organizational boundaries.
The web demolished those assumptions.
HTTP—HyperText Transfer Protocol—was designed for sharing documents. Simple, stateless, elegant. Every request independent. No memory of previous interactions. Authentication was an afterthought. Literally. "We’ll add that later."
At first, this didn’t matter. The early web was academics sharing research papers. Who cared about identity? The documents were public anyway.
Then came 1993: the Mosaic browser made the web accessible to ordinary people. Then 1994: the first commercial transactions. Pizza Hut took online orders. A Sting CD was sold. The first internet banking experiments began.
Suddenly, identity mattered enormously—but in a completely different way than before.
The New Identity Problem
Enterprise identity asked: "Is this person authorized to access our internal resources?"
Web commerce asked something different: "Is this stranger trustworthy enough to do business with?"
The first question assumed a closed community with pre-existing relationships. The second assumed no prior relationship at all. Complete strangers, meeting for the first time, needing to trust each other with money.
This was the cross-boundary problem that Kerberos couldn’t solve—but now multiplied by millions. Not just "how do MIT and Stanford trust each other" but "how does a consumer in Ohio trust a merchant in California they’ve never heard of?"
Two different answers emerged, reflecting two different philosophies about how trust should work in this new world.
PGP: The Cypherpunk Answer
In 1991—the same year as the web—Phil Zimmermann released PGP, "Pretty Good Privacy." It was encryption for everyone. Strong crypto that ordinary people could use. Privacy as human right, not privilege.
"When the United States Constitution was framed, the Founding Fathers saw no need to explicitly spell out the right to a private conversation. That would have been silly. 200 years ago, all conversations were private. If someone else was within earshot, you could just go out behind the barn and have your conversation there. The right to a private conversation was a natural right, not just in a philosophical sense, but in a law-of-physics sense, given the technology of the time." — Phil Zimmermann, "Why I Wrote PGP" (1991)
The U.S. government classified strong cryptography as a munition. Export required licenses. Zimmermann faced a federal investigation that lasted three years for making PGP available—when it spread internationally via the internet, the government argued he had violated the Arms Export Control Act. He published the source code as a book—protected by First Amendment—then it was scanned back in overseas. Malicious compliance as civil disobedience. The investigation was finally dropped in early 1996 without charges.
🎙️ For the full story of the Crypto Wars, listen to Darknet Diaries Episode 12: "Crypto Wars", featuring interviews with key figures who fought for the right to encryption.
PGP’s answer to "whose public key is this?" was the web of trust—decentralized credential verification without central authorities. I vouch for Alice. Alice vouches for Bob. Bob vouches for Carol. Trust propagates through social connections.
If I trust you and you trust them, maybe I trust them too. Transitive trust through cryptographic signatures. No king needed. No central authority. No single point of failure.
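A toy model makes both the appeal and the limitation visible: treat signatures as edges in a graph, and "do I trust this key?" becomes "is there a short enough chain of vouches from me to its owner?" Everything below (the names, the three-hop cutoff) is an illustrative assumption, not how any particular PGP client actually scores trust.

```python
# Toy web-of-trust model: signatures form a graph, and trust is a bounded
# search for a path of vouches from me to the key's owner.
from collections import deque

# who has signed whose key (purely illustrative)
signatures = {
    "me":    ["alice"],
    "alice": ["bob"],
    "bob":   ["carol"],
    "carol": ["dave"],
}

def trusted(start: str, target: str, max_hops: int = 3) -> bool:
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        person, hops = queue.popleft()
        if person == target:
            return True
        if hops == max_hops:
            continue                      # chain too long to keep extending
        for vouched in signatures.get(person, []):
            if vouched not in seen:
                seen.add(vouched)
                queue.append((vouched, hops + 1))
    return False

print(trusted("me", "carol"))  # True: three hops of vouching
print(trusted("me", "dave"))   # False: the chain is too long to trust
```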
This was revolutionary politically and technically. It democratized strong encryption. Challenged government crypto monopoly. Embodied the cypherpunk manifesto: privacy through mathematics, freedom through code.
It also didn’t scale.
Web of trust requires social connections. Trust chains get too long (friend of friend of friend of friend...). No clear revocation mechanism. Hard to determine trustworthiness of strangers. Works in communities, fails with randos on the internet.
Like physical letters of introduction: great within social circles, useless outside them.
For activism, journalism, dissidents—web of trust worked. For commerce with strangers—it was impractical. You couldn’t expect a consumer to build a trust chain to every online merchant.
PKI and SSL: The Commerce Answer
Commerce needed a different answer. When you connected to a bank’s website, you weren’t trying to verify a person through social connections. You were trying to verify an organization—a legal entity you’d never met. Is this really Bank of America, or someone pretending to be?
This was an institutional identity problem. Not "is this my friend Alice?" but "is this the legal entity registered as Bank of America Corporation?" The first is a social question answered by social networks. The second is an institutional question answered by... what?
The answer: Certificate Authorities. Public Key Infrastructure (PKI) with centralized authorities that verify organizational identity and issue digital credentials. X.509 certificates in standardized format. Like passports—government-recognized authorities everyone agrees to trust.
Here’s how it maps to identity:
- An organization wants to prove its identity online (say, a bank wanting customers to trust its website)
- A Certificate Authority verifies that identity—checks business registration, domain ownership, sometimes physical address and legal documents
- The CA issues a certificate binding the organization’s identity to its public key
- The CA signs this certificate with their own private key
- Anyone who trusts the CA can now verify: "Yes, this public key really belongs to Bank of America"
The certificate is a digital identity document. It contains a name (the organization), a public key (how to verify their signatures), an issuer (who vouches for them), and an expiration date.
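Those fields are easy to see for yourself. Here is a sketch of reading them with Python's cryptography package, assuming a certificate saved to a placeholder PEM file:

```python
# Reading the identity fields out of an X.509 certificate with the
# 'cryptography' package. "bank.pem" is a placeholder file, not a real cert.
from cryptography import x509

with open("bank.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Subject: ", cert.subject.rfc4514_string())     # who this certificate names
print("Issuer:  ", cert.issuer.rfc4514_string())      # which CA vouches for them
print("Expires: ", cert.not_valid_after)              # letters of introduction go stale
print("Key type:", type(cert.public_key()).__name__)  # the public key bound to the name
```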
Netscape built this into their browser with SSL (Secure Sockets Layer), developed starting in 1994 with SSL 2.0 released in 1995. When your browser connected to a secure website, it checked: Does this server have a valid certificate? Is it signed by a CA I trust? Does the name match where I think I’m connecting?
If yes: the padlock appeared. You’re talking to who you think you’re talking to. Safe to enter your credit card.
If no: scary warnings. Someone might be impersonating the site.
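Python's standard ssl module performs essentially the same checks a browser does (chain verification against trusted roots, plus hostname matching), so a few lines are enough to sketch what earns the padlock; the hostname below is a stand-in.

```python
# What the padlock check boils down to: verify the certificate chain against
# trusted roots and confirm the name matches the site you meant to visit.
import socket, ssl

hostname = "www.example.com"             # stand-in for your bank's domain
context = ssl.create_default_context()   # loads the platform's trusted root CAs

with socket.create_connection((hostname, 443)) as sock:
    # wrap_socket verifies the chain and the hostname;
    # it raises ssl.SSLCertVerificationError if either check fails.
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        cert = tls.getpeercert()
        print("Issued to:", dict(x[0] for x in cert["subject"]))
        print("Issued by:", dict(x[0] for x in cert["issuer"]))
        print("Padlock earned: chain and hostname both check out.")
```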
This is identity verification for the commercial web—institutional rather than personal, centralized rather than social. The CA vouches: "We verified this entity’s identity. This public key belongs to them." A letter of introduction from a trusted authority, exactly the pattern from Part 1, but with corporations and algorithms instead of lords and wax seals.
The Trust Hierarchy
PKI created trust hierarchies. Root CAs at the top (VeriSign, Thawte, later many others). Intermediate CAs below them. End entities at the bottom—websites, organizations. Chain of trust: verify the bottom by checking signatures all the way up to a trusted root.
This solved the scaling problem that defeated PGP’s web of trust. You don’t need to know someone who knows someone who knows the bank. You just need to trust the CA that verified them. Centralized enough to work globally. Distributed enough to not collapse under its own weight. Standardized enough to enable commerce.
But it recreated the trust regression problem in digital form.
Who authenticates the authenticators? How do we know VeriSign actually verified that identity properly? Who watches the CAs?
The answer: browsers ship with pre-trusted root certificates. A list of CAs that the browser vendor decided to trust. Your browser comes with 50-100 pre-trusted roots.
Who decided which roots to trust? Browser vendors. Netscape, Microsoft, later Mozilla, Google, Apple.
Who verifies them? ...We just trust them.
Eventually you hit bedrock. At some point, trust is axiomatic. You have to trust something. The question is: what’s worth trusting, and what happens when that trust is misplaced?
PKI bet on centralized authorities with economic and legal accountability. Not perfect, but pragmatic. Like government monopoly on identity documents—flawed system, but better than chaos.
We were recreating government monopoly on identity in digital space. Just with corporations instead of states. The identity question—"who is this entity?"—was being answered by a small number of commercial certificate authorities, whose business model depended on being trusted.
The Infrastructure Is Built
By the mid-1990s, the infrastructure for digital identity existed. We had the pieces:
For enterprises: Kerberos for authentication, LDAP directories for identity storage, the beginnings of single sign-on within organizational boundaries. Active Directory would soon bring it all together.
For individuals: PGP and the web of trust—decentralized, privacy-preserving, mathematically elegant. Perfect for activists, journalists, cypherpunks. Impractical for everyone else.
For commerce: PKI and certificate authorities—centralized, scalable, standardized. Banks could prove they were banks. Customers could trust the padlock.
Three solutions for three different problems. None of them solved the problem that was about to explode: how does a website know who you are?
The web had identity for servers (certificates). It had no identity for users. When you visited Amazon or eBay or your bank, they could prove who they were. But how did they know who you were?
The answer, improvised in haste and never replaced: passwords. Lots of passwords. A different password for every site. The most ancient form of authentication, replicated millions of times across millions of sites, with no coordination, no standards, no way out.
The infrastructure was built. The problems were about to begin.
Next: Part 4 - The Password Crisis and the Rise of Continuous Authentication.
Further Reading:
Academic Papers & Books
- Steiner, J., Neuman, C., & Schiller, J. (1988). Kerberos: An Authentication Service for Open Network Systems. USENIX Conference Proceedings.
- Diffie, W., & Hellman, M. (1976). New Directions in Cryptography. IEEE Transactions on Information Theory.
- Garfinkel, S. (1995). PGP: Pretty Good Privacy. O’Reilly Media.
- Spafford, E. (1989). The Internet Worm Program: An Analysis. Purdue Technical Report CSD-TR-823.
- Rescorla, E. (2001). SSL and TLS: Designing and Building Secure Systems. Addison-Wesley.
RFCs (Request for Comments)
Kerberos
- RFC 4120: The Kerberos Network Authentication Service (V5) — The current standard for Kerberos, obsoleting RFC 1510
LDAP & Directory Services
- RFC 1487: Lightweight Directory Access Protocol — The original 1993 LDAP specification by Howes, Kille, & Yeong
- RFC 2251: Lightweight Directory Access Protocol (v3) — The 1997 LDAPv3 standard (now obsoleted by RFC 4511)
- RFC 4511: LDAP: The Protocol — Current LDAP specification
SSL/TLS
- RFC 2246: The TLS Protocol Version 1.0 — First TLS standard (1999), based on Netscape’s SSL 3.0
PKI & Certificates
- RFC 5280: Internet X.509 Public Key Infrastructure Certificate and CRL Profile — The current PKIX certificate standard
ITU-T Standards
X.500 Series (Directory Services)
- ITU-T X.500: The Directory — Overview of concepts, models and services (first approved 1988)
- ITU-T X.509: Public-key and attribute certificate frameworks — The standard format for PKI certificates (first approved 1988)
Video & Audio Resources
- 📺 Computerphile: "The First Internet Worm (Morris Worm)" — First-hand account from Dr. Julian Onions
- 🎙️ Darknet Diaries: "Ep 12: Crypto Wars" — The battle for encryption rights in the 1990s
- 📺 Hidden Heroes: "The Crypto Wars: How Philip Zimmermann Fought for Our Right to Privacy" — Documentary on PGP’s creator
- 📺 MIT Project Athena: A Computer-Aided Teaching System (1984)