s3x_JAY on Nostr:
1) The term “hate speech” has nothing to do with whether it’s criminal. Hate is hate, whether it’s illegal or not.
2) The UN’s broad definition is based on them (collectively) seeing hate in a very broad range of situations. The fact that my example meets their definition bolsters the point I was making.
3) The fact that you’d choose to moderate it (presumably because it’s hateful) is exactly the point I was making.
As far as “protocol-level” vs “platform-level”… Protocol-level censorship is impossible on Nostr. There’s no point in discussing it. It’s a red herring.
“Platform-level” stuff is more complicated. There are very specific use cases being built on Nostr that are clearly “platforms” (e.g. the creator solution Mazin is building). Then there’s “kind 1”, which I think is better called a “common area” shared by many platforms.
The protocol needs a way to let everyone experience the common areas without fear or harassment. In practical terms, that means letting their “community guardians” label things that are problems for their community as problems (like content matching their definition of “hate speech”).
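To make that concrete, here is a minimal sketch of what a “community guardian” label could look like, assuming a NIP-32 style label event (kind 1985); the namespace, label value, and placeholder note id are hypothetical, not something defined in this thread:

const guardianLabel = {
  kind: 1985,                                  // NIP-32 label event
  created_at: Math.floor(Date.now() / 1000),
  tags: [
    ["L", "com.example.community-guardians"],  // hypothetical label namespace
    ["l", "hate-speech", "com.example.community-guardians"], // the label itself
    ["e", "<id of the note being labeled>"],   // placeholder id of the target note
  ],
  content: "Matches our community's definition of hate speech.",
  // pubkey, id, and sig would be filled in when the guardian signs the event
};

Clients in that community could then choose to hide, blur, or downrank notes carrying this label, without anything being removed at the protocol level.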
Saying that labeling things as “hate speech” is problematic makes the problem worse, not better. We need to acknowledge the problem and address it.