hodlbod (npub1jlr…ynqn)
2025-03-05 18:11:08


For the last several months I've been working to develop patterns for using keys with cross-platform applications like Flotilla in a way that doesn't scare people off. This is crucial for achieving my stated goal of creating software that serves the kind of people who do not know what a public key is.

There has been plenty of work in this area by other people which has brought this goal slightly nearer for me. In particular, I think `ncryptsec` for backups and good mobile signers like Amber are the cleanest solutions out there so far, and multi-sig key storage protocols like bifrost and promenade promise to substantially strengthen security without increasing the burden of new concepts on users.

Unfortunately, none of these solutions remove the need for users to learn what keys are. Users are going to have to be able to use keys, and in order to do that they will need to develop a mental model for what they are and why they matter. If they don't understand what makes keys important, they won't understand why traditional custodial account management is a problem.
To that end, this blog post is my best attempt (so far) at explaining Why Keys Matter.

So you've decided to join nostr! Some wide-eyed fanatic has convinced you that the "sun shines every day on the birds and the bees and the cigarette trees" in a magical land of decentralized, censorship-resistant freedom of speech - and it's waiting just over the next hill.

But your experience has not been all you hoped. Before you've even had a chance to upload your AI-generated cyberpunk avatar or make up exploit codenames for your pseudonym's bio, you've been confronted with a new concept that has left you completely nonplussed.

It doesn't help that this new idea might be called by any number of strange names. You may have been asked to "paste your nsec", "generate a private key", "enter your seed words", "connect with a bunker", "sign in with extension", or even "generate entropy". Sorry about that.

All these terms are really referring to one concept under many different names: that of "cryptographic identity".

Now, you may have noticed that I just introduced yet another new term which explains exactly nothing. You're absolutely correct. And now I'm going to proceed to ignore your complaints and talk about something completely different. But bear with me, because the juice is worth the squeeze.

# Identity

What is identity? There are many philosophical, political, or technical answers to this question, but for our purposes it's probably best to think of it this way:

> Identity is the essence of a thing. Identity separates one thing from all others, and is itself indivisible.

This definition has three parts:

- Identity is "essential": a thing can change, but its identity cannot. I might re-paint my house, replace its components, sell it, or even burn it down, but its identity as something that can be referred to - "this house" - is durable, even outside the boundaries of its own physical existence.
- Identity is a unit: you can't break an identity into multiple parts. A thing might be _composed_ of multiple parts, but that's only incidental to the identity of a thing, which is a _concept_, not a material thing.
- Identity is distinct: identity is what separates one thing from all others - the concept of an apple can't be mixed with that of an orange; the two ideas are distinct. In the same way, a single concrete apple is distinct in identity from another - even if the component parts of the apple decompose into compost used to grow more apples.

Identity is not a physical thing, but a metaphysical thing. Or, in simpler terms, identity is a "concept".

I (or someone more qualified) could at this point launch into a Scholastic tangent on what "is" is, but that is, fortunately, not necessary here. The kind of identities I want to focus on here are not our _actual_ identities as people, but entirely _fictional_ identities that we use to extend our agency into the digital world.

Think of it this way - your bank login does not represent _you_ as a complete person. It only represents the _access granted to you_ by the bank. This access is in fact an _entirely new identity_ that has been associated with you, and is limited in what it's useful for.

Other examples of fictional identities include:

- The country you live in
- Your social media persona
- Your mortgage
- Geographical coordinates
- A moment in time
- A chess piece

Some of these identities are inert, for example points in space and time. Other identities have agency and so are able to act in the world - even as fictional concepts. In order to do this, they must "authenticate" themselves (which means "to prove they are real"), and act within a system of established rules.

For example, your D&D character exists only within the collective fiction of your D&D group, and can do anything the rules say. Its identity is authenticated simply by your claim as a member of the group that your character in fact exists. Similarly, a lawyer must prove they are a member of the Bar Association before they are allowed to practice law within that collective fiction.

"Cryptographic identity" is simply another way of authenticating a fictional identity within a given system. As we'll see, it has some interesting attributes that set it apart from things like a library card or your latitude and longitude. Before we get there though, let's look in more detail at how identities are authenticated.

# Certificates

Merriam-Webster defines the verb "certify" as meaning "to attest authoritatively". A "certificate" is just a fancy way of saying "because I said so". Certificates are issued by a "certificate authority", someone who has the authority to "say so". Examples include your boss, your mom, or the Pope.

This method of authentication is how almost every institution authenticates the people who associate with it. Colleges issue student ID cards, governments issue passports, and websites allow you to "register an account".

In every case mentioned above, the "authority" creates a closed system in which a document (aka a "certificate") is issued which serves as a claim to a given identity. When someone wants to access some privileged service, location, or information, they present their certificate. The authority then validates it and grants or denies access. In the case of an international airport, the certificate is a little book printed with fancy inks. In the case of a login page, the certificate is a username and password combination.

This pattern for authentication is ubiquitous, and has some very important implications.

First of all, certified authentication implies that the issuer of the certificate has the right to _exclusive control_ of any identity it issues. This identity can be revoked at any time, or its permissions may change. Your social credit score may drop arbitrarily, or money might disappear from your account. When dealing with certificate authorities, you have no inherent rights.

Second, certified authentication depends on the certificate authority continuing to exist. If you store your stuff at a storage facility but the company running it goes out of business, your stuff might disappear along with it.

Usually, authentication via certificate authority works pretty well, since an appeal can always be made to a higher authority (nature, God, the government, etc). Authorities also can't generally dictate their terms with impunity without losing their customers, alienating their constituents, or provoking revolt. But it's also true that certification by authority creates an incentive structure that frequently leads to abuse - arbitrary deplatforming is increasingly common, and the bigger the certificate authority, the less recourse the certificate holder (or "subject") has.

Certificates also put the issuer in a position to intermediate relationships that wouldn't otherwise be subject to their authority. This might take the form of selling user attention to advertisers, taking a cut of financial transactions, or selling surveillance data to third parties.

Proliferation of certificate authorities is not a solution to these problems. Websites and apps frequently offer multiple "social sign-in" options, allowing their users to choose which certificate authority to appeal to. But this only piles more value into the social platform that issues the certificate - not only can Google shut down your email inbox, they can revoke your ability to log in to every website you used their identity provider to get into.

In every case, certificate issuance results in an asymmetrical power dynamic, where the issuer is able to exert significant control over the certificate holder, even in areas unrelated to the original pretext for the relationship between parties.

# Self-Certification

But what if we could reverse this power dynamic? What if individuals could issue their own certificates and force institutions to accept them?

*[Image: Ron Swanson - "I can do what I want."]*

Ron Swanson's counterexample notwithstanding, there's a reason I can't simply write myself a parking permit and slip it under the windshield wiper. Questions about voluntary submission to legitimate authorities aside, the fact is that we don't have the power to act with impunity - just like any other certificate authority, we have to prove our claims either by the exercise of raw power or by appeal to a higher authority.

So the question becomes: which higher authority can we appeal to in order to issue our own certificates within a given system of identity?

The obvious answer here is to go straight to the top and ask God himself to back our claim to self-sovereignty. However, that's not how he normally works - there's a reason they call direct acts of God "miracles". In fact, Romans 13:1 explicitly says that "the authorities that exist have been appointed by God". God has structured the universe in such a way that we must appeal to the deputies he has put in place to govern various parts of the world.

Another tempting appeal might be to nature - i.e. the material world. This is the realm in which we most frequently have the experience of "self-authenticating" identities. For example, a gold coin can be authenticated by biting it or by burning it with acid. If it quacks like a duck, walks like a duck, and looks like a duck, then it probably is a duck.

In most cases however, the ability to authenticate using physical claims depends on physical access, and so appeals to physical reality have major limitations when it comes to the digital world. Captchas, selfies and other similar tricks are often used to bridge the physical world into the digital, but these are increasingly easy to forge, and hard to verify.

There are exceptions to this rule - an example of self-certification that makes its appeal to the physical world is that of a signature. Signatures are hard to forge - an incredible amount of data is encoded in physical signatures, from strength, to illnesses, to upbringing, to [personality](https://en.wikipedia.org/wiki/Graphology). These can even be scanned and used within the digital world as well. Even today, most contracts are sealed with some simulacrum of a physical signature. Of course, this custom is quickly becoming a mere historical curiosity, since the very act of digitizing a signature makes it trivially forgeable.

So: transcendent reality is too remote to substantiate our claims, and the material world is too limited to work within the world of information. There is another aspect of reality remaining that we might appeal to: information itself.

Physical signatures authenticate physical identities by encoding unique physical data into an easily recognizable artifact. To transpose this idea to the realm of information, a "digital signature" might authenticate "digital identities" by encoding unique "digital data" into an easily recognizable artifact.

Unfortunately, in the digital world we have the additional challenge that the artifact itself can be copied, undermining any claim to legitimacy. We need something that can be easily verified _and unforgeable_.

# Digital Signatures

In fact such a thing does exist, but calling it a "digital signature" obscures more than it reveals. We might just as well call the thing we're looking for a "digital fingerprint", or a "digital electroencephalogram". Just keep that in mind as we work our way towards defining the term - we are not looking for something that _looks like_ a physical signature, but for something that _does the same thing as_ a physical signature, in that it allows us to issue ourselves a credential that must be accepted by others by encoding privileged information into a recognizable, unforgeable artifact.

With that, let's get into the weeds.

An important idea in computer science is that of a "function". A function is a sort of information machine that converts data from one form to another. One example is the idea of "incrementing" a number. If you increment 1, you get 2. If you increment 2, you get 3. Incrementing can be reversed, by creating a complementary function that instead subtracts 1 from a number.

A "one-way function" is a function that can't be reversed. A good example of a one-way function is integer rounding. If you round a number and get `5`, what number did you begin with? It's impossible to know - 5.1, 4.81, 5.332794, in fact an infinite number of numbers can be rounded to the number `5`. These numbers can also be infinitely long - for example rounding PI to the nearest integer results in the number `3`.

A real-life example of a useful one-way function is `sha256`. This function is a member of a family of one-way functions called "hash functions". You can feed as much data as you like into `sha256`, and you will always get 256 bits of information out. Hash functions are especially useful because collisions between outputs are very rare - even if you change a single bit in a huge pile of data, you're almost certainly going to get a different output.
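A quick sketch of this using Python's standard `hashlib` module:

```python
import hashlib

# Any amount of input always produces a 256-bit (32-byte) digest...
a = hashlib.sha256(b"a huge pile of data").hexdigest()
# ...and changing even one character produces a completely different one.
b = hashlib.sha256(b"a huge pile of datA").hexdigest()

print(a)  # 64 hex characters = 256 bits
print(b)
assert a != b
```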

Taking this a step further, there is a whole family of cryptographic one-way "trapdoor" functions that act similarly to hash functions, but which maintain a specific mathematical relationship between the input and the output which allows the input/output pair to be used in a variety of useful applications. For example, in Elliptic Curve Cryptography, scalar multiplication on an elliptic curve is used to derive the output.
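Elliptic curve math is too much for this post, but modular exponentiation has the same one-way flavor and makes for a reasonable toy stand-in: easy to compute forward, infeasible to reverse (that reversal is the "discrete logarithm" problem). The numbers below are chosen purely for illustration.

```python
# Toy stand-in for a trapdoor-style one-way function: modular exponentiation.
p = 2**127 - 1  # a prime; real systems use carefully chosen parameters
g = 5           # a base value, picked only for illustration

secret = 82_749_283_929_834
public = pow(g, secret, p)  # fast in this direction

print(public)
# Recovering `secret` from `public`, `g`, and `p` has no known efficient solution.
```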

"Ok", you say, "that's all completely clear and lucidly explained" (thank you). "But what goes _into_ the function?" You might expect that because of our analogy to physical signatures we would have to gather an incredible amount of digital information to cram into our cryptographic trapdoor function, mashing together bank statements, a record of our heartbeat, brain waves and cellular respiration. Well, we _could_ do it that way (maybe), but there's actually a _much_ simpler solution.

Let's play a quick game. What number am I thinking of? Wrong, it's 82,749,283,929,834. Good guess though.

The reason we use signatures to authenticate our identity in the physical world is not because they're backed by a lot of implicit physical information, but because they're hard to forge and easy to validate. Even so, there is a lot of variation in a single person's signature, even from one moment to the next.

Trapdoor functions solve the validation problem - it's trivially simple to compare one 256-bit number to another. And randomness solves the problem of forgeability.

Now, randomness (A.K.A. "entropy") is actually kind of hard to generate. Random numbers that don't have enough "noise" in them are known as "pseudo-random numbers", and are weirdly easy to guess. This is why Cloudflare uses a video stream of their [giant wall of lava lamps](https://blog.cloudflare.com/randomness-101-lavarand-in-production/) to feed the random number generator that powers their CDN. For our purposes though, we can just imagine that our random numbers come from rolling a bunch of dice.
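In Python, the dice-rolling is handled by the `secrets` module, which draws from the operating system's entropy pool:

```python
import secrets

# 256 bits of randomness from the OS entropy pool -- suitable for key material,
# unlike the `random` module, which is a predictable pseudo-random generator.
candidate = secrets.randbits(256)
print(hex(candidate))
```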

To recap, we can get a digital equivalent of a physical signature (or fingerprint, etc) by 1. coming up with a random number, and 2. feeding it into our chosen trapdoor function. The random number is called the "private" part. The output of the trapdoor function is called the "public" part. These two halves are often called "keys", hence the terms "public key" and "private key".
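As a concrete sketch, here is that two-step recipe using the third-party `cryptography` package and the secp256k1 curve (the library and curve are my choices for illustration, not something this post depends on):

```python
# pip install cryptography
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import ec

# Step 1: the "private" part -- a number derived from OS-supplied randomness.
private_key = ec.generate_private_key(ec.SECP256K1())

# Step 2: the "public" part -- the output of the curve's one-way math.
public_key = private_key.public_key()
pub_bytes = public_key.public_bytes(
    serialization.Encoding.X962,
    serialization.PublicFormat.CompressedPoint,
)
print(pub_bytes.hex())  # safe to share; the private key never leaves your device
```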

And now we come full circle - remember about 37 years ago when I introduced the term "cryptographic identity"? Well, we've finally arrived at the point where I explain what that actually is.

A "cryptographic identity" is _identified_ by a public key, and _authenticated_ by the ability to prove that you know the private key.

Notice that I didn't say "authenticated by the private key". If you had to reveal the private key in order to prove you know it, you could only authenticate a public key once without losing exclusive control of the key. But cryptographic identities can be authenticated any number of times because the certification is an _algorithm_ that only someone who knows the private key can execute.

This is the super power that trapdoor functions have that hash functions don't. Within certain cryptosystems, it is possible to mix additional data with your private key to get yet another number in such a way that someone else who only knows the public key can _prove_ that you know the private key.

For example, if my secret number is `12`, and someone tells me the number `37`, I can "combine" the two by adding them together and returning the number `49`. This "proves" that my secret number is `12`. Of course, addition is not a trapdoor function, so it's trivially easy to reverse, which is why cryptography is its own field of knowledge.
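In a real cryptosystem, the "combining" step is a signing algorithm. Here is a hedged sketch using the same `cryptography` package (ECDSA is used because it is convenient to demonstrate; nostr itself uses BIP-340 Schnorr signatures):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

message = b"a challenge chosen by the verifier"

# Only the holder of the private key can produce this value...
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# ...but anyone who knows only the public key can check it.
try:
    public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
    print("proof accepted: the signer knows the private key")
except InvalidSignature:
    print("proof rejected")
```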

# What's it for?

If I haven't completely lost you yet, you might be wondering why this matters. Who cares if I can prove that I made up a random number?

To answer this, let's consider a simple example: that of public social media posts.

Most social media platforms function by issuing credentials and verifying them based on their internal database. When you log in to your Twitter (ok, fine, X) account, you provide X with a phone number (or email) and password. X compares these records to the ones stored in the database when you created your account, and if they match they let you "log in" by issuing yet another credential, called a "session key".

Next, when you "say" something on X, you pass along your session key and your tweet to X's servers. They check that the session key is legit, and if it is they associate your tweet with your account's identity. Later, when someone wants to see the tweet, X vouches for the fact that you created it by saying "trust me" and displaying your name next to the tweet.

In other words, X creates and controls your identity, but they let you use it as long as you can prove that you know the secret that you agreed on when you registered (by giving it to them every time).

Now pretend that X gets bought by someone _even more evil_ than Elon Musk (if such a thing can be imagined). The new owner now has the ability to control _your_ identity, potentially making it say things that you didn't actually say. Someone could be completely banned from the platform, but their account could be made to continue saying whatever the owner of the platform wanted.

In reality, such a breach of trust would quickly result in a complete loss of credibility for the platform, which is why this kind of thing doesn't happen (at least, not that we know of).

But there are other ways of exploiting this system, most notably by censoring speech. As often happens, platforms are able to confiscate user identities, leaving the tenant no recourse except to appeal to the platform itself (or the government, but that doesn't seem to happen for some reason - probably due to some legalese in social platforms' terms of use). The user has to start completely from scratch, either on the same platform or another.

Now suppose that when you signed up for X, instead of simply telling X your password, you made up a random number and provided a cryptographic proof to X along with your public key. When you're ready to tweet (there's no need to issue a session key, or even to store your public key in their database), you would again prove your ownership of that key with a new piece of data. X could then publish that tweet or not, along with the same proof you provided that it really came from you.

What X _can't_ do in this system is pretend you said something you didn't, because they _don't know your private key_.

X also wouldn't be able to deplatform you as effectively. While they could choose to ban you from their website and refuse to serve your tweets, they don't control your identity. There's nothing they can do to prevent you from re-using it on another platform. Plus, if the system was set up in such a way that other users followed your key instead of an ID made up by X, you could switch platforms and _keep your followers_. In the same way, it would also be possible to keep a copy of all your tweets in your own database, since their authenticity is determined by _your_ digital signature, not X's "because I say so".
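To make this concrete, here is a rough sketch of how nostr itself identifies a post before signing it (per NIP-01, with details simplified). The signing step would use a BIP-340 Schnorr implementation, which I've left as a comment rather than pull in another library:

```python
import hashlib
import json
import time

pubkey_hex = "ab" * 32  # placeholder: your 32-byte public key, hex-encoded

event = {
    "pubkey": pubkey_hex,
    "created_at": int(time.time()),
    "kind": 1,  # kind 1 = short text note
    "tags": [],
    "content": "hello from a self-issued identity",
}

# NIP-01: the event id is the sha256 of this canonical serialization.
serialized = json.dumps(
    [0, event["pubkey"], event["created_at"], event["kind"], event["tags"], event["content"]],
    separators=(",", ":"),
    ensure_ascii=False,
)
event["id"] = hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# event["sig"] would be a BIP-340 Schnorr signature over event["id"],
# produced with the private key. Any relay or client can re-check both.
print(event["id"])
```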

This new power is not just limited to social media either. Here are some other examples of ways that self-issued cryptographic identities transform the power dynamic inherent in digital platforms:

- Banks sometimes freeze accounts or confiscate funds. If your money was stored in a system based on self-issued cryptographic keys rather than custodians, banks would not be able to keep you from accessing or moving your funds. This system exists, and it's called [bitcoin](https://bitcoin.rocks/).
- Identity theft happens when your identifying information is stolen and used to take out a loan in your name, without your consent. The reason this is so common is that your credentials are not cryptographic - your name, address, and social security number can only be authenticated by being shared, and they are shared so often and with so many counterparties that they frequently end up in data breaches. If credit checks were authenticated by self-issued cryptographic keys, identity theft would cease to exist (unless your private key itself got stolen).
- Cryptographic keys allow credential issuers to protect their subjects' privacy better too. Instead of showing your ID (including your home address, birth date, height, weight, etc), the DMV could sign a message asserting that the holder of a given public key is indeed over 21. The liquor store could then validate that claim, and your ownership of the named key, without knowing anything more about you (see the sketch after this list). [Zero-knowledge](https://en.wikipedia.org/wiki/Non-interactive_zero-knowledge_proof) proofs take this a step further.
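Here is a hedged sketch of that last example, reusing the earlier ECDSA setup (real credential systems use dedicated formats, but the shape of the interaction is the same):

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def pub_hex(key) -> str:
    """Hex-encode a key's public half in compressed form."""
    return key.public_key().public_bytes(
        serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint
    ).hex()

dmv_key = ec.generate_private_key(ec.SECP256K1())       # the issuer
customer_key = ec.generate_private_key(ec.SECP256K1())  # the subject

# The attestation discloses only the claim and the subject's public key.
claim = f"the holder of key {pub_hex(customer_key)} is over 21".encode()
attestation = dmv_key.sign(claim, ec.ECDSA(hashes.SHA256()))

# The liquor store checks two things:
# 1. the claim really came from the DMV, whose public key it already trusts...
dmv_key.public_key().verify(attestation, claim, ec.ECDSA(hashes.SHA256()))
# 2. ...and the customer can prove ownership of the named key by signing a
#    fresh challenge, exactly as in the earlier sketch.
challenge = b"liquor-store-challenge"
proof = customer_key.sign(challenge, ec.ECDSA(hashes.SHA256()))
customer_key.public_key().verify(proof, challenge, ec.ECDSA(hashes.SHA256()))
print("age verified without revealing anything else")
```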

In each of these cases, the interests of the property owner, loan seeker, or customer are elevated over the interests of those who might seek to control their assets, exploit their hard work, or surveil their activity. Just as with personal privacy, freedom of speech, and Second Amendment rights, the individual case is rarely decisive, but in the aggregate realigned incentives can tip the scale in favor of freedom.

# Objections

Now, there are some drawbacks to digital signatures. Systems that rely on digital signatures are frequently less forgiving of errors than their custodial counterparts, and many of their strengths have corresponding weaknesses. Part of this is because people haven't yet developed an intuition for how to use cryptographic identities, and the tools for managing them are still being designed. Other aspects can be mitigated through judicious use of keys fit to the problems they are being used to solve.

Below I'll articulate some of these concerns, and explore ways in which they might be mitigated over time.

## Key Storage

Keeping secrets is hard. "A lie can travel halfway around the world before the truth can get its boots on", and the same goes for gossip. Key storage has become increasingly important as more of our lives move online, to the extent that password managers have become almost a requirement for keeping track of our digital lives. But even with good password management, credentials frequently end up for sale on the dark web as a consequence of poorly secured infrastructure.

Apart from the fact that all of this is an argument _for_ cryptographic identities (since keys are shared with far fewer parties), it's also true that the danger of losing a cryptographic key is severe, especially if that key is used in multiple places. Instead of hackers stealing your Facebook password, they might end up with access to all your other social media accounts too!

Keys should be treated with the utmost care. Using password managers is a good start, but very valuable keys should be stored even more securely - for example in a [hardware signing device](https://nostrsigningdevice.com/). This is a hassle, and something additional to learn, but is an indispensable part of taking advantage of the benefits associated with cryptographic identity.

There are ways to lessen the impact of lost or stolen secrets, however. Lots of different techniques exist for structuring key systems in such a way that keys can be protected, invalidated, or limited. Here are a few:

- [Hierarchical Deterministic Keys](https://www.ietf.org/archive/id/draft-dijkhuis-cfrg-hdkeys-02.html) allow for the creation of a single root key from which multiple child keys can be generated. These keys are hard to link to the parent, which provides additional privacy, but this link can also be proven when necessary. One limitation is that the identity system has to be designed with HD keys in mind.
- [Key Rotation](https://crypto.stackexchange.com/questions/41796/whats-the-purpose-of-key-rotation) allows keys to become expendable. Additional credentials might be attached to a key, allowing the holder to prove they have the right to rotate the key. Social attestations can help with the process as well if the key is embedded in a web of trust.
- Remote Signing is a technique for storing a key on one device, but using it on another. This might take the form of signing using a hardware wallet and transferring an SD card to your computer for broadcasting, or using a mobile app like [Amber](https://github.com/greenart7c3/Amber) to manage sessions with different applications.
- [Key](https://github.com/coracle-social/promenade/tree/master) [sharding](https://www.frostr.org/) takes this to another level by breaking a single key into multiple pieces and storing them separately. A coordinator can then be used to collaboratively sign messages without sharing key material. This dramatically reduces the ability of an attacker to steal a complete key (a toy illustration of splitting a secret follows this list).
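The linked protocols do genuine threshold signing, which is well beyond a blog post. But as a toy illustration of the core idea - splitting a secret so that no single piece reveals anything - here is a 2-of-2 XOR split:

```python
import secrets

key = secrets.token_bytes(32)  # the secret we want to protect

# share_a is pure randomness; share_b is the key XOR'd with it.
share_a = secrets.token_bytes(32)
share_b = bytes(k ^ a for k, a in zip(key, share_a))

# Either share alone is indistinguishable from random noise;
# only the combination reconstructs the original key.
recovered = bytes(a ^ b for a, b in zip(share_a, share_b))
assert recovered == key
```

Schemes like FROST go much further, allowing a threshold of shares (say 2 of 3) to produce signatures without ever reassembling the key in one place.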

## Multi-Factor Authentication

One method for helping users secure their accounts that is becoming increasingly common is "multi-factor authentication". Instead of just providing your email and password, platforms send a one-time use code to your phone number or email, or use "time-based one time passwords" which are stored in a password manager or on a hardware device.
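Those time-based codes are themselves a small one-way-function trick: an HMAC over the current time slice, truncated to a few digits. A minimal sketch of the derivation (following RFC 6238, with a made-up example secret):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Derive the current time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval  # which time slice we're in
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F              # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example base32 secret, not a real one
```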

Again, MFA is a solution to a problem inherent in account-based authentication which would not be nearly so prevalent in a cryptographic identity system. Still, theft of keys does happen, and so MFA would be an important improvement - if not for an extra layer of authentication, then as a basis for key rotation.

In a sense, MFA is already being researched - key sharding is one way of creating multiple credentials from a single key. However, this doesn't address the issue of key rotation, especially when an identity is tied to the public key that corresponds to a given private key. There are two possible solutions to this problem:

- Introduce a naming system. This would allow identities to use a durable name, assigning it to different keys over time. The downside is that this would require the introduction of either centralized naming authorities (back to the old model), or a blockchain in order to solve [Zooko's trilemma](https://en.wikipedia.org/wiki/Zooko%27s_triangle).
- Establish a chain of keys. This would require a given key to name a successor key in advance and self-invalidate, or some other process like social recovery to invalidate an old key and assign the identity to a new one. This would also significantly increase the complexity of validating messages and associating them with a given identity (a rough sketch of what a rotation statement might look like follows this list).
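To make the second option concrete, here is a hedged sketch of what a rotation statement could look like (the message format is made up for illustration; any real scheme would need to standardize it):

```python
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

def pub_hex(key) -> str:
    return key.public_key().public_bytes(
        serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint
    ).hex()

old_key = ec.generate_private_key(ec.SECP256K1())
new_key = ec.generate_private_key(ec.SECP256K1())

# The old key signs a statement naming its successor. Anyone already following
# the old public key can verify this and re-point the identity at the new one.
statement = f"key {pub_hex(old_key)} is superseded by key {pub_hex(new_key)}".encode()
rotation_proof = old_key.sign(statement, ec.ECDSA(hashes.SHA256()))

old_key.public_key().verify(rotation_proof, statement, ec.ECDSA(hashes.SHA256()))
print("rotation statement verified")
```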

Both solutions are workable, but introduce a lot of complexity that could cause more trouble than it's worth, depending on the identity system we're talking about.

## Surveillance

One of the nice qualities that systems based on cryptographic identities have is that digitally signed data can be passed through any number of untrusted systems and emerge intact. This ability to resist tampering makes it possible to broadcast signed data more widely than would otherwise be the case in a system that relies on a custodian to authenticate information.

The downside of this is that more untrusted systems have access to data. And if information is broadcast publicly, anyone can get access to it.

This problem is compounded by re-use of cryptographic identities across multiple contexts. A benefit of self-issued credentials is that it becomes possible to bring everything attached to your identity with you, including social context and attached credentials. This is convenient and can be quite powerful, but it also means that more context is attached to your activity, making it easier to infer information about you for advertising or surveillance purposes. This is dangerously close to the dystopian ideal of a "Digital ID".

The best way to deal with this risk is to consider identity re-use an option to be used when desirable, but to default to creating a new key for every identity you create. This is no worse than the status quo, and it makes room for the ability to link identities when desired.

Another possible approach to this problem is to avoid broadcasting signed data when possible. This could be done by obscuring your cryptographic identity when data is served from a database, or by encrypting your signed data in order to selectively share it with named counterparties.

Still, this is a real risk, and should be kept in mind when designing and using systems based on cryptographic identity. If you'd like to read more about this, please see [this blog post](https://habla.news/u/hodlbod@coracle.social/1687802006398).

# Making Keys Usable

You might be tempted to look at that list of trade-offs and get the sense that cryptographic identity is not for mere mortals. Key management is hard, and footguns abound - but there is a way forward. With [nostr](https://nostr.com/), some new things are happening in the world of key management that have never really happened before.

Plenty of work over the last 30 years has gone into making key management tractable, but none of it has really been widely adopted. The reason for this is simple: network effects.

Many of these older key systems only applied the thinnest veneer of humanity over keys. But an identity is much richer than a mere label. Having a real name, social connections, and a corpus of work to attach to a key creates a system of keys that _humans care about_.

By bootstrapping key management within a social context, nostr ensures that the payoff of key management is worth the learning curve. Not only is social engagement a strong incentive to get off the ground, people already on the network are eager to help you get past any roadblocks you might face.

So if I could offer an action item: give nostr a try today. Whether you're in it for the people and their values, or you just want to experiment with cryptographic identity, nostr is a great place to start. For a quick introduction and to securely generate keys, visit [njump.me](https://njump.me/).

Thanks for taking the time to read this post. I hope it's been helpful, and I can't wait to see you on nostr!

Author public key: npub1jlrs53pkdfjnts29kveljul2sm0actt6n8dxrrzqcersttvcuv3qdjynqn