EvoLensArt on Nostr
In an age where we have more access to information than any generation in human history, you’d think we’d be getting better at separating fact from fiction.
Instead, we seem to be drowning in a sea of confusion, where half-truths and manipulated narratives gain traction faster than facts can keep up. The problem isn’t just the overwhelming quantity of information—it’s how that information is packaged, filtered, and consumed through the lens of our own cognitive biases. We’re no longer just reading the news; we’re being fed the version that best aligns with what we already believe.
Take, for example, the infamous claim that “Haitian immigrants are eating cats.” Sure, this sounds absurd at first glance, but the claim persists because, as with most rumors, it’s directionally correct. There may not be hordes of Haitians devouring household pets, but if someone, somewhere, happens to eat a cat, suddenly the rumor feels justified. The specifics don’t matter much. It’s not about Haitians eating cats—just the idea that someone is. This is how half-baked stories morph into accepted truth: they have just enough plausibility to get a foothold and survive, like intellectual parasites feeding on our preconceptions.
This is where Jonathan Haidt’s insight about how we process information comes into play. When we encounter evidence that supports our beliefs, we ask ourselves a low-bar question: “Can I believe this?” In other words, is there any flimsy justification I can latch onto to confirm what I already think? And if the evidence fits, however loosely, we embrace it. But when that same evidence contradicts our beliefs, we go into defense mode and ask a much tougher question: “Do I have to believe this?” Now, we’re searching for any reason, however small, to dismiss or deny what we’re seeing.
This kind of thinking leads to two groups of people looking at the same information but coming to completely opposite conclusions. One group focuses on specifics—“It wasn’t a Haitian eating the cat, so this story is a lie.” The other group, meanwhile, is more than happy to generalize: “Maybe it wasn’t a Haitian, but someone’s eating cats, so the story is true enough.” In both cases, facts are playing second fiddle to biases, and the truth becomes a casualty of our own mental shortcuts.
If this is how people react to even legitimate information, imagine how much worse it gets when the evidence itself is deliberately manipulated. Take the 2017 “fine people” hoax surrounding Donald Trump’s comments on Charlottesville. Trump did, in fact, condemn neo-Nazis and white supremacists, but a selectively edited clip of him saying there were “very fine people on both sides” was stripped of context and played on a loop by the mainstream media. That sliver of a sound bite became gospel for those who already saw Trump as a villain. The context? Irrelevant. People who wanted to believe Trump was soft on neo-Nazis got exactly the confirmation they were looking for. Even now, seven years later, in 2024, you’ll still find people who genuinely believe Trump refused to call out white supremacists.
This is the dark art of selective editing. The facts themselves haven’t changed, but by cutting out key details, the entire narrative shifts. For some, this manipulated version of events is enough to keep the “Can I believe this?” train rolling full steam ahead. Meanwhile, those skeptical of the media look at the unedited footage, note that Trump did indeed disavow the neo-Nazis, and ask themselves, “Do I have to believe this?” For them, the deception is clear, and they find ways to reject the entire mainstream narrative. Again, two groups walk away with two very different realities based on the same raw information—filtered, of course, through their own biases.
But this isn’t just about Trump or immigrants. It’s about the broader problem of how we digest information in a world that has become saturated with data yet starved of clarity. We now live in an age where evidence itself can be suspect—thanks to deceptively edited videos, out-of-context quotes, and the rise of AI-generated content that can fake reality with frightening precision. If we already struggle to separate fact from fiction when the evidence is legitimate, what happens when the very evidence we rely on is manufactured or distorted?
We’re outsourcing our thinking, trusting intermediaries—whether they be journalists, politicians, or algorithms—to make sense of the world for us. And these intermediaries are anything but neutral. They bring their own biases, errors, and occasionally, deceptions to the table. When we outsource our critical thinking, we inherit not just the conclusions but the cognitive shortcuts and errors that come with them. The result is a kind of selective outrage, where the “truth” is little more than an echo of our preconceived beliefs, amplified by a feedback loop of misinformation.
The real danger is that once an idea takes root, it’s almost impossible to dislodge, no matter how many facts you throw at it. Whether it’s a story about cat-eating immigrants or Trump’s alleged defense of neo-Nazis, these narratives persist because they’ve been carefully molded to fit into the emotional and cognitive framework we use to interpret the world. They’re not falsifiable in any meaningful way—because when the goalposts keep moving, you can’t ever pin them down.
So here we are, in a world where information is infinite but understanding is finite. We navigate this ocean of data by leaning on others, trusting that they’ve done the hard work of making sense of it all for us. But those intermediaries can mislead us, intentionally or not. And once misled, it’s hard to find our way back to reality. The line between fact and perception blurs, and we become trapped in our own echo chambers, comforted by half-truths that feel good, even when they’re far from the truth.
In the end, it’s not just about what we believe—it’s about how we come to believe it. And in a world where truth is up for grabs, the only thing more dangerous than being wrong is thinking you’re always right.
The COVID pandemic provided a masterclass in how the flow of information can be manipulated, suppressed, and weaponized depending on whose interests are at stake. The lab leak theory is a perfect example. Early on, suggesting that COVID might have originated in a lab—specifically in Wuhan—wasn’t just dismissed; it was vilified. Anyone who brought it up was labeled a conspiracy theorist, a xenophobe, or an outright racist. Platforms censored people, media outlets sneered at the idea, and it was effectively scrubbed from polite discourse.
Fast forward a couple of years, and what do we have? A growing consensus among scientists and officials that the lab leak theory is, in fact, one of the most plausible explanations for COVID’s origins. This whiplash—where a once “crazy” theory is now taken seriously—exposes how information is managed not based on its truth, but on its political utility at the time. It’s a case study in how, when certain ideas become politically or socially inconvenient, they’re not debated—they’re silenced.
The same thing happened with ivermectin. Here’s a drug that’s been used on humans for decades, a Nobel Prize-winning medicine, no less, known for its safety and effectiveness in treating parasitic infections. Yet, when people started talking about its potential to treat COVID, the media ran with a condescending narrative, dismissing it as “horse dewormer,” as though people were injecting themselves with straight livestock meds in some backwoods pharmacy. Yes, ivermectin is used for animals, but it’s also an FDA-approved drug for human use. The dismissiveness wasn’t about facts—it was about delegitimizing an alternative treatment that, crucially, wasn’t under patent. It wasn’t profitable for the pharmaceutical giants that were working on patented vaccines and treatments.
So, why was there such a harsh crackdown on discussions around ivermectin? Again, follow the money. Unlike the vaccines, ivermectin was dirt cheap, widely available, and off-patent. The idea that something inexpensive and readily accessible might be effective against COVID presented a massive financial threat to Big Pharma, which had billions tied up in the development, production, and distribution of new treatments. Combine that with governments and tech companies getting involved in controlling the narrative, and you have a situation where an established, safe drug is reduced to a punchline. The heavy hand of the state wasn’t just regulating public health—it was regulating the conversation itself.
What this all highlights is the sheer danger of centralized control over information, particularly in the digital age. The power not only to spread misinformation but to silence dissent has never been greater. Social media platforms, under pressure from governments and corporations, can decide what’s acceptable discourse and what’s “misinformation” with the flick of a switch. Overnight, entire narratives are erased, not because they’re false, but because they’re inconvenient. The COVID pandemic was a grim revelation for many of us who might’ve once trusted these institutions, but now see how easily they can abuse their power.
This goes beyond just media and medicine—it’s about the control of thought itself. When you have Big Tech, Big Pharma, and Big Government working in concert to manage what people are allowed to think and say, that’s not just dangerous—it’s dystopian. You don’t have to be a right-wing conspiracy theorist to feel deeply uncomfortable about the way power is concentrated in the hands of a few, and how those few can shape reality for the rest of us.
Like many of you, I come from an emotional and intellectual space that’s more liberal-leaning. I believe in public health, in collective responsibility, in protecting the vulnerable. But after witnessing how COVID narratives were manipulated—how legitimate questions were squashed, how cheap, effective treatments were mocked and marginalized—I can no longer ignore the inherent dangers of centralized power. Whether it’s the government, corporations, or tech giants, the more concentrated that power becomes, the more vulnerable we are to its abuses.
This isn’t a left or right issue. It’s about control—control of information, control of dialogue, and ultimately, control of what we’re allowed to think. And when you realize how easily that control can be wielded, it’s hard not to feel cynical. It’s hard not to question whether what you’re being told is true, or just another layer of convenient lies wrapped in the flag of public safety.
But let’s not sink into cynicism!
The fact that we’ve recognized the problem is the first step toward a solution, and there **are** solutions!
Enter NOSTR—Notes and Other Stuff Transmitted by Relays. It’s a mouthful, but the concept behind it is simple and, more importantly, revolutionary: decentralized, censorship-resistant communication. A system where control doesn’t sit in the hands of a few powerful entities but is distributed, open, and unstoppable.
The beauty of NOSTR lies in its architecture.
Instead of relying on centralized servers, where Big Tech or governments can swoop in and control the flow of information, it uses relays. Think of these as independent nodes, any one of which can carry your message. No single relay is in charge, so even if one goes down or gets blocked, your message can still find its way through other relays. It’s the digital equivalent of the Hydra—cut off one head, and two more pop up. The system is resilient by design.
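To make the relay model concrete, here’s a minimal sketch in Python of what “publishing” means on NOSTR: the client sends the same signed event to several relays, so no single relay can suppress it. The relay URLs and the `websocket-client` dependency are illustrative choices for this sketch, not part of the protocol itself.

```python
# Minimal relay fan-out sketch. Assumes: pip install websocket-client
# The relay URLs below are placeholders, not real endpoints.
import json
import websocket

RELAYS = [
    "wss://relay.example-one.com",
    "wss://relay.example-two.com",
    "wss://relay.example-three.com",
]

def broadcast(signed_event: dict) -> None:
    """Send one signed event to every relay we know about."""
    for url in RELAYS:
        try:
            ws = websocket.create_connection(url, timeout=5)
            # NIP-01 wire format: a JSON array tagged "EVENT".
            ws.send(json.dumps(["EVENT", signed_event]))
            ws.close()
        except Exception:
            # A dead or censoring relay costs us nothing; the event
            # still propagates through every relay that accepted it.
            continue
```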
Here’s the kicker: it’s not just about censorship resistance. It’s about taking back control over your own communication. With NOSTR, you own your identity and your data. You’re not at the mercy of some algorithm that decides whether your content gets throttled, shadow-banned, or erased. It’s a communication protocol that can’t be stopped or manipulated by outside forces, which means you’re finally free to speak your mind without fear of being silenced by the gatekeepers of the digital world.
Remember when platforms like Twitter or Facebook started de-platforming voices they didn’t agree with, or when fact-checkers suddenly became the arbiters of truth? Those days don’t exist in the world of NOSTR. It doesn’t matter what side of the political spectrum you fall on; the core principle here is simple: free, unstoppable communication. No more worries about having your account shut down for sharing a controversial opinion or an inconvenient truth. If someone wants to block you? No problem—just use another relay.
It’s not just a pipe dream, either. We’ve already seen how decentralized systems like Bitcoin have challenged the traditional financial world by eliminating middlemen and putting control back into the hands of individuals. NOSTR does the same thing, but for communication. It takes the power out of the hands of Silicon Valley and returns it to the people who actually need it: all of us.
This is the antidote to the censorship that we’ve seen escalate over the last few years. Whether it was the suppression of the lab leak theory, the mocking of alternative COVID treatments, or the broader crackdown on anyone stepping outside the approved narrative, NOSTR provides the infrastructure to ensure that these ideas still have a place to be heard.
In an era where information is power, decentralized communication protocols like NOSTR give that power back to the individual. It’s a solution that not only combats censorship but also provides a blueprint for a more open, transparent future, where ideas—both popular and unpopular—can compete on a level playing field.
Here’s the crucial part: NOSTR isn’t a product, it’s a protocol. It’s not a platform controlled by some mega-corporation or subject to the whims of a CEO, and it’s not tied to any particular app or service. It’s just a set of rules that allows anyone to build their own app, service, or interface that interacts with the broader system.
This is why it’s such a game-changer—it’s a foundation, not a walled garden.
Anyone, anywhere, can develop an application that taps into this decentralized network of communication, and each app can look and feel totally different, depending on who builds it. But the core principle stays the same: unstoppable, censorship-resistant communication.
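For the technically curious, that “set of rules” is strikingly small. At its core (NIP-01, the base spec), everything is a signed JSON “event” whose ID is a SHA-256 hash of its own contents. Here’s a sketch with placeholder values:

```python
# The core data structure standardized by NIP-01: a signed JSON "event".
# All field values below are illustrative placeholders.
import hashlib
import json
import time

def event_id(pubkey: str, created_at: int, kind: int,
             tags: list, content: str) -> str:
    # NIP-01: the id is the SHA-256 of this exact serialized array.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"), ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode()).hexdigest()

pubkey = "a" * 64                # placeholder 32-byte hex public key
created_at = int(time.time())
content = "hello, relays"

note = {
    "pubkey": pubkey,
    "created_at": created_at,
    "kind": 1,                   # kind 1 = a short text note
    "tags": [],
    "content": content,
    "id": event_id(pubkey, created_at, 1, [], content),
    # "sig": a Schnorr signature over the id, added by the signing client
}
```

Any app that produces and verifies events like this one is a NOSTR app. That’s the whole walled-garden-free foundation.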
Now, let’s get into the identity aspect, because this is where NOSTR really pulls away from the centralized platforms we’re used to:
Most social media platforms own your identity. You sign up with an email or phone number, and from there, your identity and everything tied to it—your friends, followers, posts, interactions—are locked into that one platform.
If you get banned, shadow-banned, or just want to leave? Tough. You’re stuck starting from scratch somewhere else.
NOSTR throws that model out the window. Your identity is tied to public and private keys—not some account managed by Twitter or Facebook. Think of it like having your own digital passport, one that no platform or company can take away from you. Your public key acts as your username or ID, while your private key is your password, securing your identity.
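As a sketch of what that “passport” looks like in practice, here’s key generation and signing using the `coincurve` package (a libsecp256k1 wrapper; NOSTR uses BIP-340 Schnorr signatures). Treat the exact API calls here as assumptions of this example, not gospel:

```python
# Identity-as-keypair sketch. Assumes: pip install coincurve
# (the sign_schnorr call is an assumption about its API; check your version).
from coincurve import PrivateKey

# 32 random bytes, generated locally. No signup, no email, no phone.
priv = PrivateKey()

# The x-only public key is your public identity (your "username").
pubkey_hex = priv.public_key.format(compressed=True)[1:].hex()

# Signing an event's 32-byte id proves the note came from you; without
# the private key, nobody can forge it.
fake_event_id = bytes.fromhex("aa" * 32)   # placeholder id
signature = priv.sign_schnorr(fake_event_id)

print(pubkey_hex, signature.hex())
```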
This means you control your identity, not the platform you’re on. If you don’t like one app or interface, you can pick up your entire social graph—your followers, your posts, your interactions—and move it to another app without losing anything. You take your identity with you wherever you go, untethered from any one company or platform.
And this isn’t just about portability—it’s about resilience. Your identity and your social graph can’t be deleted by some tech overlord. If you get booted off one relay or app, it doesn’t matter. You still exist in the NOSTR network because you control the keys to your identity. It’s a system that guarantees your ability to communicate remains intact, no matter what roadblocks someone tries to throw in your way.
In essence, this isn’t just about speaking freely—it’s about owning your digital self. With NOSTR, you’re not at the mercy of any one platform’s policies, and you’re not trapped in a system where your data and identity can be erased at the push of a button. It’s robust, decentralized, and designed for a future where individuals, not corporations, own their digital lives.
The persistence of your identity in a system like NOSTR isn’t just a technical detail—it’s a transformative tool for taking control of the way you interact with information. When you’re locked into centralized platforms like Facebook or Twitter, you’re at the mercy of their algorithms. These algorithms are driven by perverse incentives—often prioritizing engagement (and, by extension, profit) over truth. That means they’re engineered to feed you content that provokes the strongest emotional reactions, not necessarily the most accurate or nuanced information. Outrage sells; truth, unfortunately, tends to sit quietly in the back.
But with a protocol like NOSTR, where your identity is yours to carry from one app to the next, you control how you filter information. You can choose your own algorithms or create your own filters based on what you actually want to see—whether that’s verified, nuanced reporting, or even just a broader range of opinions that challenge your perspective. You’re no longer subject to what some opaque corporate algorithm decides is “relevant” or “trending.”
In the current media landscape, the platforms you use decide what’s important for you to see, driving echo chambers and feeding you the content that keeps you scrolling and clicking. But with NOSTR, the power shifts back to you. You can create or adopt algorithms based on trust, transparency, and diversity of thought, rather than algorithms designed to harvest your attention. Instead of having your reality mediated by Facebook’s or Twitter’s content farms, you curate your own information landscape, choosing the sources, voices, and perspectives that you want to engage with.
This isn’t just a subtle shift. It’s foundational. It allows for a more honest, individualized approach to truth. Rather than letting Zuckerberg or some shadowy content moderator decide which “facts” are fit for your consumption, you become an active participant in shaping your own information diet. Want to see more diverse viewpoints? You can set up filters that prioritize those voices. Want verified experts? You can configure your experience to highlight content from trusted sources without burying them under a pile of conspiracy theories or viral clickbait.
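At the protocol level, “setting up your own filter” is not a metaphor. A NIP-01 subscription is literally a JSON filter that you write yourself and send to relays. A sketch, with placeholder pubkeys:

```python
# "Choose your own filter" at the protocol level: a NIP-01 subscription.
# The author keys are placeholders for sources YOU picked.
import json

trusted_authors = [
    "<64-char-hex-pubkey-1>",
    "<64-char-hex-pubkey-2>",
]

subscription = ["REQ", "my-curated-feed", {
    "authors": trusted_authors,  # only events signed by these keys
    "kinds": [1],                # kind 1 = short text notes
    "limit": 100,
}]

# This JSON array is sent to each relay you use; matching events stream
# back directly, with no third party re-ranking them along the way.
wire_message = json.dumps(subscription)
```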
The persistence of your identity in NOSTR allows you to move through different applications, preserving your network, your preferences, and the integrity of your interactions. And since no single company or platform owns you, they can’t manipulate what you see, who you follow, or what content is pushed to the top of your feed. You’re no longer a passive consumer of information being served to you by algorithms that prioritize profit. Instead, you become the architect of your own reality—able to engage with information in a way that’s thoughtful, self-directed, and resistant to manipulation.
In a world where truth is often a casualty of the attention economy, systems like NOSTR offer a way to reclaim not just free speech, but free thought. No longer a mere cog in someone else’s engagement machine, you are actively participating in the pursuit of truth, on your own terms.
The shift from passive consumption to active curation—where you control the flow of information—changes the entire game.
With NOSTR, you’re not just scrolling through a feed dictated by an algorithm designed to maximize profit by feeding you outrage or distraction. You are the architect of your own digital environment. You can prioritize content from trusted sources, craft your own filters, and choose algorithms that emphasize depth over clickbait. You regain control over how you engage with ideas, filtering out the noise and allowing truth to take center stage.
But it doesn’t stop there. This paradigm shift gets even more profound when you introduce Zaps into the equation.
Unlike traditional social media metrics—likes, retweets, or upvotes—which are essentially free and therefore hollow, Zaps carry real value. A Zap is not just an empty gesture; it’s a micro-transaction using Bitcoin, the world’s first decentralized digital currency.
Now, I know for some, Bitcoin can seem niche or counterintuitive, so let’s break it down:
Bitcoin has a fixed supply—there will only ever be 21 million Bitcoin. That’s it. Its scarcity makes it valuable, like digital gold.
(Much like a dollar is divided into 100 cents, each Bitcoin can be divided into 100 million “satoshis”, or “sats”.)
So when someone sends you a Zap, they’re not just clicking a button—they’re sending you satoshis, actual money tied to a finite resource.
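The arithmetic, for concreteness (these numbers are fixed by Bitcoin’s consensus rules, not set by any company):

```python
SATS_PER_BTC = 100_000_000      # 1 bitcoin = 100 million satoshis
MAX_SUPPLY_BTC = 21_000_000     # hard cap, enforced by consensus rules

# Every sat that will ever exist: 2.1 quadrillion.
total_sats = MAX_SUPPLY_BTC * SATS_PER_BTC
print(f"{total_sats:,}")        # 2,100,000,000,000,000

def sats_to_btc(sats: int) -> float:
    """Convert a zap amount in sats to bitcoin."""
    return sats / SATS_PER_BTC

print(f"{sats_to_btc(2_500):.6f}")  # a modest zap: 0.000025 BTC
```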
This means Zaps are fundamentally different from likes or upvotes. Sending a Zap requires spending something of real value. Unlike the likes you hand out on Facebook by the dozens, a Zap forces the sender to put some real skin in the game. That’s why it’s so much harder to game this system. Yes, bots can still exist—but every time a bot Zaps content, it’s spending satoshis, burning through real currency. Spamming Zaps costs real money, making it far more difficult (and expensive) to manipulate the system.
Now, imagine a world where Zaps become part of the algorithm itself. Instead of sorting content by shallow engagement—measured in meaningless clicks—you can rank it by the value people are attaching to it in the form of Zaps. You could filter your content feed by how much real currency has been exchanged in support of different posts, giving you a more accurate signal of what’s truly valuable, insightful, or meaningful.
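A feed ranked this way could be as simple as the sketch below. The `zap_total_sats` field is hypothetical, standing in for whatever tally a client aggregates from zap receipts:

```python
# Sketch of a zap-weighted feed; field names are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    content: str
    zap_total_sats: int   # sum of all zaps this post has received

def rank_by_value(posts: list[Post]) -> list[Post]:
    # Sort by money actually staked on the content, not raw click counts.
    return sorted(posts, key=lambda p: p.zap_total_sats, reverse=True)

feed = rank_by_value([
    Post("alice", "long-form analysis", zap_total_sats=21_000),
    Post("bob", "viral clickbait", zap_total_sats=12),
])
print([p.author for p in feed])   # ['alice', 'bob']
```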
It’s a shift from attention-driven metrics to value-driven ones. In this new system, creators aren’t incentivized to churn out low-quality content for the sake of clicks and likes. Instead, they focus on producing work that’s meaningful enough for people to Zap—because a Zap reflects genuine, monetary support. It transforms the economy of content from one built on mindless engagement to one based on authenticity and real value.
And this entire system—this exchange of value—happens over the Lightning Network.
For those unfamiliar, the Lightning Network is a second-layer solution built on top of Bitcoin, designed to enable nearly instantaneous, low-cost transactions.
When you send a Zap, it’s processed through the Lightning Network, and the transaction is final—there are no middlemen skimming fees off the top, no centralized payment processors taking a cut. It’s peer-to-peer, direct and immediate.
Contrast this with the current systems we rely on—banks, payment processors, ad networks—all of which take their pound of flesh at every turn. There are fees, delays, and often a host of third parties between you and your money.
With the Lightning Network, that friction disappears. You can Zap someone from across the globe in seconds, with the transaction finalized and the sats transferred instantly. No waiting, no approvals, no gatekeepers.
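Under the hood, a zap starts as just another signed NOSTR event. Here’s a sketch of a NIP-57 “zap request” (kind 9734) with placeholder keys and relays; the invoice and the actual payment happen over Lightning after the recipient’s LNURL server receives this:

```python
# NIP-57 zap request sketch; keys and relay URL are placeholders.
import json
import time

recipient = "b" * 64                 # placeholder hex pubkey
amount_msats = 21_000 * 1_000        # 21,000 sats, in millisatoshis

zap_request = {
    "kind": 9734,                    # NIP-57 zap request
    "created_at": int(time.time()),
    "content": "great post!",        # optional public comment
    "tags": [
        ["p", recipient],                        # who gets paid
        ["amount", str(amount_msats)],           # how much
        ["relays", "wss://relay.example.com"],   # where the receipt goes
    ],
}

# After signing, this event is handed to the recipient's Lightning-address
# (LNURL) server, which returns a bolt11 invoice. Paying that invoice is
# what produces the public "zap receipt" (kind 9735) on the listed relays.
print(json.dumps(zap_request, indent=2))
```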
And here’s the most amazing thing about all of this: it’s not a dream. It’s not some far-off goal. It’s a practical reality. This system already exists today.
NOSTR, Bitcoin, Zaps—this isn’t the future we’re waiting for. The future is here. It’s just not evenly distributed yet. The tools to break free from the grip of centralized platforms and take back control of communication, content, and value are already in our hands.
The genie is out of the bottle. The toothpaste is out of the tube. These are more than technologies—they are ideas. And here’s the thing about ideas: you can’t kill them. Bitcoin and NOSTR aren’t just systems—they represent a new way of thinking about ownership, value, and freedom. And once an idea takes hold, it’s unstoppable.