Clem on Nostr:
Yeah.
It’s a vulnerability in the whole social network.
Three letter agencies / other social media companies, scared of losing users and thus advertising revenue, flood the competition's space with objectionable material via anonymous accounts.
Then they complain that there's no content moderation and that government should regulate.
Governments kill the competition, and the social media companies and government agencies behind it suck up user data as usual and profit.
Circles of friends are definitely a way to mitigate it, but I do think we need some sort of shared block lists that people can opt into to completely filter that content.
Basically, if multiple people report a post, it goes onto a temporary blacklist; then set up an AI to scan the reported post and assign a rating, with an appeal process through each relay running the lists.
We have to have some mechanism to keep this illegal content off the network.
I think multiple reports from multiple separate people should trigger AI review, and if the content is found to be illegal, blacklist the key completely on that relay. Then, if the AI made a mistake, or if people are deliberately abusing the system to target users and get them blacklisted, the user can flag an appeal to the relay operators.
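The flow I'm describing could be sketched roughly like this. Everything here is hypothetical (the class name, the report threshold, and the `classify` stub standing in for the AI reviewer are my own illustration, not an existing Nostr or relay API):

```python
from collections import defaultdict

REPORT_THRESHOLD = 3  # hypothetical: distinct reporters needed to trigger review


class RelayModerator:
    """Per-relay sketch: multiple independent reports -> temp blacklist +
    automated review -> full key blacklist, with an appeal path back to
    the relay operators."""

    def __init__(self, classify):
        # classify(event) -> True if judged illegal; stands in for the AI reviewer
        self.classify = classify
        self.reports = defaultdict(set)  # event_id -> set of reporter pubkeys
        self.temp_blacklist = set()      # event_ids held pending review
        self.key_blacklist = set()       # fully blocked author pubkeys
        self.appeals = []                # queue for the relay operators

    def report(self, event_id, author_key, reporter_key, event):
        # Count only distinct reporters, so one person can't trigger review alone.
        self.reports[event_id].add(reporter_key)
        if len(self.reports[event_id]) >= REPORT_THRESHOLD:
            self.temp_blacklist.add(event_id)
            if self.classify(event):
                self.key_blacklist.add(author_key)

    def appeal(self, author_key, reason):
        # Blacklisted user flags an appeal; operators decide manually.
        self.appeals.append((author_key, reason))

    def resolve_appeal(self, author_key, reinstate):
        # Operator overturns the AI (or abuse-driven) blacklisting if warranted.
        if reinstate:
            self.key_blacklist.discard(author_key)
```

Usage would look like: three separate reporters flag a post, the stub reviewer confirms it, the author's key is blacklisted on that relay, and an appeal to the operators can reinstate it.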
Just my 2c on how to get ahead of the inevitable incoming spam & nastiness.
Got to program our way out… need to build build build.