What is Nostr?
SimplifiedPrivacy.com
npub14sl…t5d6
2024-08-24 00:06:09

Who decides what to censor as spam?

Nostr and Lens solve the "spam and scam" problem by letting the client decide. For example, Amethyst for Android hides posts from accounts that others report as scams. These "others" are defined by the people you follow, which essentially turns moderation into a community vote where large influencers can silence you.
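To make the Amethyst-style behavior concrete, here is a minimal sketch of that web-of-trust filter: hide a post when its author has been reported by someone the viewer follows. All names and data structures here are hypothetical illustrations, not the actual client's code.

```python
# Web-of-trust spam filter sketch (hypothetical, not Amethyst's real code):
# a post is hidden when its author was reported as spam/scam by anyone
# the viewer follows.

def visible_posts(posts, following, reports):
    """posts: list of (author, text) tuples.
    following: set of accounts the viewer follows.
    reports: dict mapping a reporter to the set of accounts they reported."""
    hidden = set()
    for reporter in following:
        # Only reports from followed accounts count toward hiding.
        hidden |= reports.get(reporter, set())
    return [(author, text) for author, text in posts if author not in hidden]

posts = [("alice", "gm"), ("mallory", "free sats, click here")]
following = {"bob"}
reports = {"bob": {"mallory"}}  # bob reported mallory as a scam
print(visible_posts(posts, following, reports))  # -> [('alice', 'gm')]
```

Note the dynamic this creates: whoever is widely followed effectively controls the blocklist, which is exactly the "community vote of large influencers" problem described above.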

On Lens, once you're labeled spam, your replies are buried under the "show more" section of comments. This is a huge turn-off for new users with no followers, who are treated like second-class citizens.

Farcaster solves it in a similar way, except the official team does the labeling, and because their client is so large and influential, their list is often distributed to other clients. This is absolutely horrible and far too centralized. While it's true that posts to your followers would still show up, your comments are effectively silenced.

Session has zero censorship for mass DMs the way I use it, even under outright sanctions. The nodes don't even know I'm the sender, and I'm assigned new receivers if they drop me. That's why I like it. But the market likes SimpleX more because it rotates encryption keys, so it's tough to get new followers. Can't fight the market.

Bastyon solves the problem with a community vote on outright illegal content, such as child porn and narcotics sales, to get it off the nodes. The voters are picked based on their total upvotes, called "reputation". I disagree with this approach: if we're going to vote, it should be the nodes hosting the content (as Arweave does)...

Files on Arweave have an unofficial vote, in that nodes can opt out of storing them. If all the miners chosen for a block opt out, there's no financial penalty for dropping the content. But if a miner has the content and others don't, it has a financial advantage over competitors in mining that block. This approach is good for websites, but for a social network with permissionless replies, it's far too passive.

Therefore:
I disagree with all these solutions.

In my view, the best way to handle spam in a permissionless system is to let the original poster decide which replies are spam. The end user can then toggle "criticism and spam" on or off for the replies. After all, if you're following someone, you trust their judgment on the subject they're speaking about. And this decentralizes the decision to each individual poster.
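The proposal above can be sketched in a few lines: the original poster maintains a set of reply IDs they've marked as spam, and each reader's client applies their own toggle. This is an illustrative sketch of the idea, not an existing protocol; all names are hypothetical.

```python
# Sketch of OP-moderated replies with a per-reader toggle (hypothetical):
# the original poster labels replies as spam; the reader decides whether
# to see them anyway.

def replies_to_show(replies, op_spam_labels, show_spam):
    """replies: list of (reply_id, text) tuples.
    op_spam_labels: set of reply_ids the original poster marked as spam.
    show_spam: the reader's "criticism and spam" toggle."""
    if show_spam:
        return replies  # reader opted in to see everything
    return [(rid, text) for rid, text in replies if rid not in op_spam_labels]

replies = [(1, "great point"), (2, "buy my token"), (3, "I disagree because...")]
op_spam_labels = {2}  # the original poster flagged reply 2
print(replies_to_show(replies, op_spam_labels, show_spam=False))
# -> [(1, 'great point'), (3, 'I disagree because...')]
```

Nothing is deleted from the network: the labels only affect default display, and any reader can flip the toggle, which is what keeps the scheme permissionless.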

Now I do the ironic thing, and turn it over to my replies. Do you think this approach is right?
Author Public Key
npub14slk4lshtylkrqg9z0dvng09gn58h88frvnax7uga3v0h25szj4qzjt5d6