dikaios1517 on Nostr:
So, there are already a ton of ways to keep most garbage you don't want to see out of your feed. But spam and CSAM are now being posted to popular hashtags and in replies to folks posting in the Introductions tag, which is a much harder problem to solve.
Moreover, relay runners and media hosts like nostr.build have a legal responsibility to report and delete such content. However, any tool built for finding and deleting CSAM can also be adapted for, say, finding and deleting any content promoting Bitcoin, or speaking negatively about the CCP, etc.
Most folks will say, "Not a problem. Just run your own relay that doesn't censor that content, or find a public relay that won't censor you." That's all well and good, and the devs recognize that blocking anything at the relay level is an exercise in futility, because there will always be a relay willing to not censor it.
As a result, though, they are looking for ways to block the content at the client level. Ways to have an image checked for CSAM by an AI before displaying that image to you. Sounds absolutely wonderful! Something I would absolutely want for blocking such content.
That said, the same tool could then be used to identify content speaking ill of the CCP and block it at the client level, so you don't see it regardless of what relays you have running.
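To make the dual-use concern concrete, here is a minimal sketch of what client-level pre-display filtering looks like. Everything here is hypothetical: `classify` stands in for any image classifier, and nothing below is a real Nostr client or moderation API. The point is that the filtering machinery has no idea what it is blocking; only the labels change.

```python
from typing import Callable

def make_display_filter(classify: Callable[[bytes], str],
                        blocked_labels: set[str]) -> Callable[[bytes], bool]:
    """Return a predicate that decides whether an image may be shown.

    The filter is content-agnostic: swap the classifier or the label
    set and the exact same code blocks entirely different content.
    """
    def should_display(image: bytes) -> bool:
        # Run the classifier before the image ever reaches the UI.
        return classify(image) not in blocked_labels
    return should_display

# A client shipping this for safety...
safety_filter = make_display_filter(
    classify=lambda img: "csam" if img == b"<illegal image>" else "ok",
    blocked_labels={"csam"},
)

# ...could repurpose the identical mechanism for political censorship
# just by changing the classifier and the label set.
political_filter = make_display_filter(
    classify=lambda img: "anti-ccp",   # a classifier trained on speech, not safety
    blocked_labels={"anti-ccp"},
)
```

Nothing in the filter itself distinguishes the two deployments, which is exactly why the client becomes the chokepoint: the check runs after content leaves the relays, so relay choice no longer matters.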
The only saving grace we have here is that it is very unlikely every client would use these tools to block content speaking ill of the CCP, even though they should, and likely would, implement them to block CSAM.
Nevertheless, clients could become a major point of failure for censorship resistance with such tools.