mikedilger / npub1acg…p35c
2024-04-26 20:16:23


There have been a lot of ideas about dealing with unwanted content on nostr. I'm going to try to break it down in this post.

Part 1: Keeping unwanted content off of relays

This is done for two reasons. The first is legal: you could get in trouble for hosting illegal content. The second is to curate a set of content that is within some bounds of acceptability: perhaps flooding is not allowed, or spam posts about shitcoins are not allowed, maybe even mean posts are not allowed. It's up to the relay operator.

Early on, people talked about Proof of Work, which was meant to limit how fast a flooder or spammer could saturate your relay with junk, and therefore how much junk a moderator would have to look through. I don't know of any relay that went in this direction, and I don't think it's a great solution.
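For context, nostr's proof of work (NIP-13) is defined as the number of leading zero bits in the event id, which the poster grinds by varying a nonce tag. A minimal sketch of the relay-side difficulty check; the threshold here is a made-up policy knob, not anything from this post:

```python
# Sketch of a NIP-13 difficulty check: an event's "work" is the number
# of leading zero bits in its 32-byte hex event id.

def leading_zero_bits(event_id_hex: str) -> int:
    value = int(event_id_hex, 16)
    if value == 0:
        return 256
    return 256 - value.bit_length()

MIN_DIFFICULTY = 20  # hypothetical relay policy, not from the post

def accept_event(event_id_hex: str) -> bool:
    return leading_zero_bits(event_id_hex) >= MIN_DIFFICULTY

print(leading_zero_bits("000" + "f" * 61))  # 12: not enough work here
```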

Then we saw paid relays, which only accept posts from their customers. This is a very effective solution. Customers can still break the rules, but it is a smaller set of people who can do so, and there are consequences.

But the downside of paid relays is that they cannot be used as inboxes. Ideally a relay would also work as an inbox for notes tagging any of its paid customers. Unfortunately, in that case those responses can be floods, spam, or other unwanted content, so the same problem comes back around.
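To make the problem concrete, here is a sketch of what that relaxed write policy might look like. The event shape follows NIP-01; the customer registry and the p-tag rule are assumptions for illustration, and the p-tag branch is exactly where spam rides back in:

```python
# Sketch of a paid-relay write policy extended to act as an inbox:
# accept events authored by a paying customer, or events tagging one.

customers = {"pubkey_alice", "pubkey_bob"}  # hypothetical paying pubkeys

def accept(event: dict) -> bool:
    if event["pubkey"] in customers:
        return True  # authored by a customer
    # Inbox behavior: admit notes that p-tag a customer. This is the
    # hole through which floods and spam come back around.
    for tag in event.get("tags", []):
        if len(tag) >= 2 and tag[0] == "p" and tag[1] in customers:
            return True
    return False

reply = {"pubkey": "pubkey_mallory", "tags": [["p", "pubkey_alice"]]}
print(accept(reply))  # True: it tags a customer, wanted or not
```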

In the end, I think that in order to support people getting messages from anybody, relays would need to inspect content and make judgements about it. And this is going to need to be automated. Almost all email servers do spam filtering using Bayesian filters. We should probably be doing the same or similar. Maybe AI can play a role.
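As a rough illustration of the Bayesian approach applied to note text rather than email bodies, here is a toy naive-Bayes scorer. The training data and tokenization are placeholders, nothing like a production filter:

```python
# Toy naive-Bayes spam scorer over note text, in the spirit of email
# filters. Counts, smoothing, and tokenization are all simplistic.
import math
from collections import Counter

spam_counts, ham_counts = Counter(), Counter()
spam_notes = ham_notes = 0

def train(text: str, is_spam: bool) -> None:
    global spam_notes, ham_notes
    counts = spam_counts if is_spam else ham_counts
    for token in text.lower().split():
        counts[token] += 1
    if is_spam:
        spam_notes += 1
    else:
        ham_notes += 1

def spam_log_odds(text: str) -> float:
    # Laplace-smoothed log odds that the note is spam; > 0 leans spam.
    score = math.log((spam_notes + 1) / (ham_notes + 1))
    spam_total = sum(spam_counts.values())
    ham_total = sum(ham_counts.values())
    for token in text.lower().split():
        p_spam = (spam_counts[token] + 1) / (spam_total + 2)
        p_ham = (ham_counts[token] + 1) / (ham_total + 2)
        score += math.log(p_spam / p_ham)
    return score

train("buy this shitcoin now", True)
train("gm nostr friends", False)
print(spam_log_odds("shitcoin giveaway now") > 0)  # True: leans spam
```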

Part 2: Keeping unwanted content out of your own feed

The first thing clients can do is leverage Part 1: use relays that do some of the work for you. Clients can avoid pulling global-feed posts or thread replies from relays that aren't known to be managing content to the user's satisfaction.

The primary tool here is muting. Personal mute lists are a must. The downsides are that (1) they are post facto, and (2) they cannot stop harassment from people who really want to harass and just keep generating new keypairs to repeat the harassment.
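Mechanically, applying a personal mute list is a simple client-side filter. A sketch, assuming the NIP-51 convention of carrying muted pubkeys as "p" tags:

```python
# Sketch of applying a personal mute list client-side. Assumes muted
# pubkeys arrive as "p" tags on a NIP-51 mute-list event.

def muted_pubkeys(mute_list_event: dict) -> set:
    return {t[1] for t in mute_list_event.get("tags", [])
            if len(t) >= 2 and t[0] == "p"}

def visible(events: list, muted: set) -> list:
    # Post facto by nature: the offending notes already reached us,
    # and a freshly generated keypair starts with a clean slate.
    return [e for e in events if e["pubkey"] not in muted]

mute_list = {"tags": [["p", "pk_spammer"]]}
feed = [{"pubkey": "pk_spammer"}, {"pubkey": "pk_friend"}]
print(visible(feed, muted_pubkeys(mute_list)))  # [{'pubkey': 'pk_friend'}]
```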

We can fix the post-facto issue to a large degree with community mute lists (some people may call this 'blocking', but I don't want to confuse it with the Twitter feature that prevents a person from seeing your posts). People of like mind subscribe to and manage a community mute list, so when someone is muted, everybody benefits: most people in that community won't see the offending post.
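A community mute list then reduces to merging curators' lists into your own. A sketch; the subscription set and the example keys are hypothetical:

```python
# Sketch of a community mute list: union the lists of curators you
# subscribe to with your own. One curator muting a key benefits
# every subscriber.

def combined_mutes(my_mutes: set, curator_mutes: list) -> set:
    combined = set(my_mutes)
    for mutes in curator_mutes:  # one set per subscribed curator
        combined |= mutes
    return combined

print(combined_mutes({"pk_spammer"}, [{"pk_troll"}, {"pk_troll", "pk_bot"}]))
# {'pk_spammer', 'pk_troll', 'pk_bot'}  (order may vary)
```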

That doesn't solve problem (2), however. For that we have even more restrictive solutions.

The first is the web-of-trust model: you only accept posts from people you follow, or from people they follow. This is highly effective, but it may silence posts you would have wanted to see.
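A sketch of that two-hop trust set, built from NIP-02 contact lists (kind 3, with follows as "p" tags); the hard-coded lists below stand in for relay queries:

```python
# Sketch of a two-hop web of trust: accept authors you follow, or
# authors your follows follow. CONTACT_LISTS stands in for fetching
# kind-3 events from relays and is entirely made up.

CONTACT_LISTS = {
    "me":    {"tags": [["p", "alice"], ["p", "bob"]]},
    "alice": {"tags": [["p", "carol"]]},
    "bob":   {"tags": []},
}

def follows(pubkey: str) -> set:
    event = CONTACT_LISTS.get(pubkey, {"tags": []})
    return {t[1] for t in event["tags"] if len(t) >= 2 and t[0] == "p"}

def web_of_trust(me: str) -> set:
    trusted = follows(me)           # people you follow
    for friend in set(trusted):
        trusted |= follows(friend)  # plus people they follow
    return trusted

# Anything outside this set is silenced, including posts you might
# have wanted to see.
print(web_of_trust("me"))  # {'alice', 'bob', 'carol'}
```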

The second is even more restrictive: private group conversations.

Finally, I will mention two additional related features: thread dismissal and content warnings.
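Both are client-side checks: a content warning (NIP-36) is a tag an author puts on their own note so clients can hide it until the reader clicks through, and thread dismissal is local client state keyed on the thread's root event id (NIP-10's "root" marker). A sketch, with the dismissal store assumed:

```python
# Sketch of both features as client-side checks. NIP-36 content
# warnings are a "content-warning" tag; thread dismissal is purely
# local state keyed on the thread's root event id.

dismissed_roots = set()  # hypothetical local client storage

def has_content_warning(event: dict) -> bool:
    # Client hides the note behind a click-through if this is set.
    return any(t and t[0] == "content-warning" for t in event.get("tags", []))

def thread_root(event: dict) -> str:
    # NIP-10 marked form: ["e", <id>, <relay>, "root"].
    for t in event.get("tags", []):
        if len(t) >= 4 and t[0] == "e" and t[3] == "root":
            return t[1]
    return ""

def should_show(event: dict) -> bool:
    return thread_root(event) not in dismissed_roots

note = {"tags": [["content-warning", "reason"], ["e", "abc123", "", "root"]]}
dismissed_roots.add("abc123")
print(has_content_warning(note), should_show(note))  # True False
```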

That's it. GM nostr!

Author Public Key
npub1acg6thl5psv62405rljzkj8spesceyfz2c32udakc2ak0dmvfeyse9p35c