LynAlden on Nostr:
Here’s an observation about shitty Twitter algorithms.
I’ve actually never blocked or muted anyone on Twitter. Never felt the need. 690k followers, countless comments, no filters.
If someone is an ass, I tend to just ignore them or aikido them and move on.
I just went over to Twitter and checked my notifications. Some guy posted in an unusually negative way in one of my threads. For a brief moment, I was provoked. But then I looked: he has 8,700 posts and 6 followers. Briefly skimming his profile, it is pure negativity. Imagine this. Like actually take a moment to think about what that process feels like for him, let alone how he impacts others.
Eight thousand seven hundred posts, mostly negative, and only after well over a thousand of them did anyone elect to follow him.
The algorithm trains us to see this and get angry. When he shows up in our feed, he seems like a normal person who disagrees with us. But he's not. Someone like that is, sadly, closer to the mentally ill camp, yet the algorithm presents him to us like any other normal person telling us we suck.
Imagine if we had more programmable filters and algorithms. For example: mute people with over a thousand posts but fewer than one follower per five hundred posts. That filters him out, much as we would visually filter out, and thus physically avoid, a man on a public street holding his own shit in his hand: someone who needs help, but not public attention and proximity.
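As a rough sketch of what such a rule could look like, assuming a hypothetical client that tracks post and follower counts per account (the Profile shape and shouldMute function here are illustrative, not any real platform's API):

    // Hypothetical per-account stats a client might track.
    interface Profile {
      postCount: number;
      followerCount: number;
    }

    // The rule above: mute anyone with over 1,000 posts and
    // fewer than one follower per 500 posts.
    function shouldMute(p: Profile): boolean {
      return p.postCount > 1000 && p.followerCount * 500 < p.postCount;
    }

    // The account described above: 8,700 posts, 6 followers.
    console.log(shouldMute({ postCount: 8700, followerCount: 6 })); // true

The point isn't this exact threshold; it's that a user-programmable client could let each of us write rules like this instead of accepting whatever a centralized feed decides to show us.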
The centralized algorithms we have normalized are not real life.
We give people virtual access to us that we would never grant in person, partly because in real life we can apply behavioral filters of our own that most virtual platforms don't let us program.