2025.12.10

A = 1

You should only authenticate once in your life

Facebook Messenger has an "unsend" feature. When you delete a message, it gives you two options: "Unsend for You" or "Unsend for Everyone." The default is "Unsend for You"—which leaves the message visible to everyone else while hiding it only from you. The useful option, "Unsend for Everyone," requires an extra confirmation popup.

This is either catastrophically bad UX design or something darker. Why would the default be the useless option? Why add friction to the feature that actually protects privacy?

Because Facebook wants to keep that data. Every time you unsend for everyone, they lose a data point for ad targeting, training AI, and responding to subpoenas. The friction reduces usage of the feature that actually removes data from their servers. It's privacy theater with a dark pattern on top.

Which brings us to authentication.

The 60-Year-Old Mistake

Passwords were invented in 1961 for MIT's CTSS timesharing system. They made sense then: terminals were physical, networks didn't exist, and the threat model was "prevent students from reading each other's files."

Sixty-four years later, we're still using the same model. Type a secret string, get access. Forget the string? Reset it via email. Lose access to email? Reset it via phone number. Lose the phone? Hope customer support believes you.

At every step, your security boundary is "whoever can convince a human or automated system to give them access." That's not authentication. That's social engineering with extra steps.

Current State: Catastrophically Broken

What We Have

  • Passwords reset via email
  • Email reset via phone number
  • Phone number controlled by carrier
  • Carrier employees can port your number
  • "Security questions" use public info
  • Customer support has backdoors
  • 2FA tied to device you can lose
  • Biometrics + password = vulnerable to both

What We Need

  • Authenticate once, cryptographically
  • Identity bound to hardware
  • No reset flows
  • No customer support backdoors
  • No corporate intermediaries
  • Social recovery via trusted humans
  • Proof of unique consciousness
  • Mathematics, not policy

The fundamental problem: authentication count. Every time you log in, you're re-proving your identity. Every time you reset a password, you're trusting a corporation to verify you. Every authentication event is an attack surface.

A = 1. You should authenticate exactly once when you create your identity. After that, cryptographic proofs derived from that single event handle everything.

Hardware-Bound Identity

Modern devices contain dedicated security chips (Apple's Secure Enclave, Google's StrongBox, the TPM in PCs) capable of unforgeable cryptographic proofs.

These chips can perform cryptographic operations where keys never leave the hardware. The device becomes the identity. An attacker would need physical access to the unlocked device, not just the ability to intercept network traffic or compromise software.

The Problem: Platform vendors reserve these features for internal services and don't expose them to third-party applications. Apple uses Secure Enclave for FaceID and payment processing but won't let you build apps that access it. Google uses StrongBox for device attestation but limits what you can do with it.

This isn't a technical limitation—it's a business model conflict. If you could build applications that bypass platform identity systems, users might not need Apple ID or Google accounts. The hardware exists and is capable. The vendors just won't give you the keys.

Photon works around these limitations by deriving device-specific keys from hardware identifiers. It's not ideal (hardware oracle access would be better), but it's the best you can do when platform vendors won't cooperate. The keys are deterministic: same device always generates the same keys, but only that specific device can generate them.
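The derivation pattern is easy to sketch. The snippet below is a minimal illustration, not Photon's actual code: it assumes some stable hardware identifier is readable (the machine ID here is a hypothetical placeholder) and stretches it into a 32-byte device key with PBKDF2 from Python's standard library.

```python
import hashlib

def derive_device_key(hardware_id: bytes, app_salt: bytes) -> bytes:
    """Deterministically stretch a stable hardware identifier into a
    32-byte key: the same device always yields the same key, and only
    a device that can read this identifier can derive it."""
    return hashlib.pbkdf2_hmac("sha256", hardware_id, app_salt, 600_000)

# Hypothetical stable identifier; a real client would read a
# platform-specific machine ID instead of a hardcoded string.
hw_id = b"example-machine-id"
key = derive_device_key(hw_id, b"photon-identity-v1")
assert key == derive_device_key(hw_id, b"photon-identity-v1")  # deterministic
```

The salt partitions key material per application, so two apps on the same device derive unrelated keys from the same identifier.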

Social Recovery

Traditional security questions ask things like "mother's maiden name" or "first pet's name"—information that's often publicly available or easily guessed. Password reset flows let anyone with access to your email or phone number claim your account. This isn't security. It's security theater.

Actual security uses the model humans have used for millennia: trusted relationships.

Your private key is split into encrypted shards and distributed across trusted friends. Lose all your devices? The system notifies your friends: "Alice is recovering her identity." Each friend's device generates a unique 3-word phrase. You contact them out-of-band (phone call, video chat, in person) and verify it's actually you requesting recovery. They read you the phrase, you enter it on your new device.

After the threshold is reached (typically 5 friends), your identity reconstructs cryptographically. No customer support. No corporate verification. No trusting that a phone number belongs to you. Your friends vouch for you because they actually know you.
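The split-and-reconstruct step is classic Shamir secret sharing: the key becomes a polynomial's constant term, each friend holds one point on the polynomial, and any threshold-sized subset of points recovers it. A minimal sketch over a prime field (illustrative parameters; a real implementation would encode the full key and authenticate each shard):

```python
import secrets

PRIME = 2**127 - 1  # field large enough for a 16-byte secret

def _eval_poly(coeffs, x):
    # Horner's method over GF(PRIME)
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % PRIME
    return acc

def split(secret, n, k):
    """Split `secret` into n shards; any k of them reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]

def reconstruct(shards):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    total = 0
    for i, (xi, yi) in enumerate(shards):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shards):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total

secret = 123456789
shards = split(secret, 9, 5)          # 9 friends, any 5 suffice
assert reconstruct(shards[:5]) == secret
```

Fewer than five shards reveal nothing about the secret, which is exactly why colluding friends, not a leaked database, become the attack surface.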

Your security boundary becomes: the number of trusted contacts who would need to collude to compromise your identity. That's a hell of a lot better than "whoever can intercept your SMS verification code."

The TPM Question

You might be wondering: why doesn't Photon run all of its cryptography inside the TPM?

Because the TPM only needs to do one thing: unseal a master secret. Everything else—key derivation, signing, verification—happens in the application using standard cryptography with that unsealed secret.

The TPM's job is literally:

  1. Store secret sealed to hardware
  2. Only unseal to legitimate application
  3. That's it

All the cryptographic heavy lifting (CLUTCH key ceremony, rolling-chain encryption, social key recovery) happens in software using that unsealed master secret. The TPM is just a hardware-backed key vault with attestation, not an active cryptographic processor.
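That division of labor is easy to picture in code. The sketch below is illustrative, not Photon's API: it treats the unsealed master secret as input to HKDF-Expand (RFC 5869) and derives independent purpose-specific keys from it entirely in software.

```python
import hmac
import hashlib

def hkdf_expand(master: bytes, info: bytes, length: int = 32) -> bytes:
    """RFC 5869 HKDF-Expand: derive `length` bytes bound to `info`."""
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(master, t + info + bytes([counter]),
                     hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

# Stand-in for the secret the TPM unseals to the application;
# never hardcode a real secret like this.
master = b"\x00" * 32
signing_key = hkdf_expand(master, b"photon/signing")
encryption_key = hkdf_expand(master, b"photon/encryption")
assert signing_key != encryption_key  # independent keys, one sealed secret
```

The `info` label cryptographically separates each derived key, so compromising one purpose-specific key tells an attacker nothing about the others, and the TPM never has to know any of this is happening.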

This is simpler, faster, and more flexible than trying to make the TPM do everything. You're not limited by TPM performance or supported operations. The hardware proves the keys are real; the software does the work.

Why Platform Vendors Won't Help

Apple, Google, and Microsoft have the technology to enable passless authentication. Their devices contain hardware security modules that can generate unforgeable cryptographic proofs. They use these features internally for device attestation, DRM, payment processing, and their own identity systems.

But they won't expose these capabilities to third-party developers because it would enable applications that bypass platform identity systems. If you could build apps where users authenticate once, cryptographically, without needing Apple ID or Google accounts, you'd eliminate the corporate intermediary.

That's not a bug. That's the entire point.

The hardware exists. The mathematics work. The only barrier is business model incompatibility. Platform vendors make money by being the trusted intermediary. Letting you build systems that eliminate trusted intermediaries conflicts with that model.

What A = 1 Actually Means

Authentication count equals one. You authenticate exactly once when you create your identity. After that:

Every subsequent access is a cryptographic proof derived from that initial authentication, not a new authentication event. The system proves continuous possession of the device and the identity bound to it.
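One way to picture "proof derived from the initial event" is a keyed hash chain; this is a simplified stand-in for rolling-chain encryption, which the post doesn't specify in detail. Each new proof commits to the previous one, so only the holder of the device key can extend the chain, and a verifier who saw link N can check link N+1 without any fresh login.

```python
import hmac
import hashlib

def extend_chain(device_key: bytes, prev_link: bytes, payload: bytes) -> bytes:
    """MAC the previous link plus the new payload: the chain can only
    be extended by whoever holds the device-bound key."""
    return hmac.new(device_key, prev_link + payload, hashlib.sha256).digest()

device_key = b"\x01" * 32  # placeholder for a hardware-derived key
link = hashlib.sha256(b"registration").digest()  # the single A = 1 event
for msg in [b"hello", b"world"]:
    link = extend_chain(device_key, link, msg)
```

Replay is cheap for a verifier: recompute the chain from the registration link and confirm it lands on the same value.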

This isn't theoretical. Photon implements this today. CLUTCH key ceremony establishes shared secrets. Rolling-chain encryption proves continuous participation. Social key recovery handles device loss. Hardware-derived keys tie identity to physical devices.

The pieces exist. The mathematics work. We just need to stop accepting 60-year-old mistakes as inevitable.

Try It Yourself

Photon is available now. Linux, Windows, macOS, Android. Install it, register a handle, message someone. Experience what authentication should have been from the beginning.

No phone number required. No email address. No password reset flow. Just cryptography and trusted humans.

Get Photon →