abadidea on Nostr:
You may not know me as such, but I have spent a lot of time moderating a lot of platforms; I spent two years as a moderator for one of the biggest chats on Twitch. Megasites like Twitter try as hard as they can to hide moderation and make it seem as fully automatic and neutral as possible, but the reality is that anything more complicated than automatically blocking slurs is emotional labor by a specific human being. If you click "report" on a post, you are saying it is so dire, so urgent, that you need to put up the Bat Signal so Civility Batman (who, on Mastodon and many other places, is likely an unpaid volunteer) knows to take the time and energy to swoop in and resolve the situation.
And that button exists for a reason! You can and should click it if there's good cause. But moderators often find themselves drained by a barrage of "posts/users that I didn't like" rather than "posts/users that are active hazards." They have to take the time to look around and make a judgment call on whether there's something they're missing and it really is a dire post, or whether it's just someone annoyed by a personal pet peeve. I encourage you to keep the human on the other end in mind when deciding what your personal threshold is for reporting posts. And note, a post can be bad in the sense of "this is not the sort of post we as a community Love To See" without being "summon the gods of justice" dire.
I will conclude with a funny story: Twitch uses (or perhaps used, past tense) ML to detect when specific uncommon words in messages get manually moderated a lot, assumes that's the hot new slur, and begins moderating it automatically. Somehow, I don't know what happened, the word "Sega" got onto this list. I had to manually fish dozens of people out of automated ten-minute mutes for committing the crime of Sega on a video game discussion site.