Anthony Accioly on Nostr: I like this idea. It protects against attacks and encourages good behaviour without ...
I like this idea. It protects against attacks and encourages good behaviour without excluding new users.
Since you asked for suggestions, I couldn't resist writing a wall of text (sorry :)).
Like you, I’m not sure about automatically "nuking" accounts. One thing is certain though: automated moderation should be applied in steps, and banning should be a last resort.
For example, here’s an idea: First, mark all notes from an offending account as sensitive and severely rate limit it (e.g., limit Kind 1 notes to one every 30 minutes or so). Repeated dropped messages due to rate limiting should decrease the account score even further (but be careful here, as I've seen algorithms misbehave due to technical issues outside of the account holder's control). If the bad behaviour persists, stop propagating notes from this account for a fixed amount of time, say 48 hours. Also, record the account’s IP address. If multiple accounts using the same IP are misbehaving, then start dropping all messages coming from this IP for a longer period of time, e.g., one or two weeks.
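To make the escalation steps concrete, here's a rough sketch of how a relay might wire them together. All the thresholds (30-minute rate limit, 48-hour mute, two-week IP block, the score floor, the accounts-per-IP count) and the `Moderator` class itself are illustrative assumptions, not part of any existing relay implementation:

```python
import time
from collections import defaultdict

# Hypothetical thresholds; a real relay would tune these.
RATE_LIMIT_SECONDS = 30 * 60       # one Kind 1 note per 30 minutes
ACCOUNT_MUTE_SECONDS = 48 * 3600   # stop propagating for 48 hours
IP_BLOCK_SECONDS = 14 * 24 * 3600  # drop traffic from an IP for ~2 weeks
SCORE_FLOOR = -10                  # score at which the account is muted
BAD_ACCOUNTS_PER_IP = 3            # misbehaving accounts before an IP block

class Moderator:
    def __init__(self):
        self.score = defaultdict(int)       # pubkey -> reputation score
        self.last_note = {}                 # pubkey -> last accepted timestamp
        self.muted_until = {}               # pubkey -> unix time
        self.ip_blocked_until = {}          # ip -> unix time
        self.bad_accounts_by_ip = defaultdict(set)

    def accept_note(self, pubkey, ip, kind, now=None):
        """Return True if the note should be accepted and propagated."""
        now = now if now is not None else time.time()
        if self.ip_blocked_until.get(ip, 0) > now:
            return False
        if self.muted_until.get(pubkey, 0) > now:
            return False
        # Step 1: severe rate limit for flagged (negative-score) accounts.
        if kind == 1 and self.score[pubkey] < 0:
            last = self.last_note.get(pubkey)
            if last is not None and now - last < RATE_LIMIT_SECONDS:
                # Each dropped message lowers the score further.
                self.score[pubkey] -= 1
                self._maybe_escalate(pubkey, ip, now)
                return False
        self.last_note[pubkey] = now
        return True

    def _maybe_escalate(self, pubkey, ip, now):
        # Step 2: temporary mute once the score falls below the floor.
        if self.score[pubkey] <= SCORE_FLOOR:
            self.muted_until[pubkey] = now + ACCOUNT_MUTE_SECONDS
            self.bad_accounts_by_ip[ip].add(pubkey)
            # Step 3: block the IP if several accounts behind it misbehave.
            if len(self.bad_accounts_by_ip[ip]) >= BAD_ACCOUNTS_PER_IP:
                self.ip_blocked_until[ip] = now + IP_BLOCK_SECONDS
```

Note that only *accepted* notes update `last_note`, so an account that keeps hammering the relay while rate-limited digs its score deeper with every drop — which is the escalation pressure described above, and also exactly where you'd want the safety checks against client-side glitches.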
Permanently banning an account or IP from a relay should be a last-resort manual action. I encourage a mechanism for community moderation, similar to Stack Overflow, so that not all of the onus falls on relay administrators. Community moderation would be more complex and would likely require a new NIP with a few new types of notes. One idea would be to allow trusted/high-reputation users to "vote" on the fate of an account after a certain number of reports. For instance, they could be sent a sample of the account’s notes and aggregate statistics, and vote to either "absolve" the account or impose a longer temporary (e.g., one month) or permanent ban. A minimum odd number of votes (e.g., five) would be required to take action, with the majority ruling. IP bans should probably be left only to moderators and highly trusted users. This group can also manually suspend or unsuspend accounts.
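The voting rule itself is tiny. A minimal sketch, assuming three possible ballots ("absolve", "temp_ban", "perm_ban") and the odd-quorum/majority rule described above — the function name, ballot labels, and quorum size are all placeholders:

```python
from collections import Counter

MIN_VOTES = 5  # minimum odd number of ballots before any action is taken

def decide(votes):
    """Return the majority outcome, or None if no decision can be made yet."""
    if len(votes) < MIN_VOTES or len(votes) % 2 == 0:
        return None  # wait for an odd number of at least MIN_VOTES ballots
    outcome, count = Counter(votes).most_common(1)[0]
    # Majority rules: the top outcome must win strictly more than half.
    return outcome if count > len(votes) // 2 else None
```

One wrinkle worth noting: with three possible outcomes, an odd ballot count no longer guarantees a strict majority (e.g. a 2-2-1 split), so the sketch returns no decision in that case and the vote would have to continue or escalate to a moderator.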
I’ve seen this type of system work well before. It’s highly effective at automatically mitigating spam and antisocial behaviour while giving users a fair(ish) chance and encouraging community moderation. It also avoids Mastodon’s current curse, with server admins burning out and giving up due to the sheer volume of moderation work on their plates.
Hopefully, this is helpful. I understand that such a system would be complex to implement and still vulnerable to abuse (community moderation is far from a solved problem). But, like most people-related issues, it’s a hard problem that deserves a thoughtful solution.
Let me know if I can help in any way.