Michael Veale on Nostr
Wild to me that Apple's 'Sensitive Content Analysis Framework', which it uses in children's iMessage accounts and invites developers to use to filter explicit images/videos via an API, *does not even return an uncertainty metric to developers* in the API call — just a boolean true or false. Has anyone actually studied over-flagging and blocking, particularly for LGBTQ content or in terms of gender or ethnicity, in this system?
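For readers who have not used it, here is a minimal sketch of the call the post describes, assuming the iOS 17+ surface of Apple's SensitiveContentAnalysis framework (the framework the post refers to); the helper function name checkImage is hypothetical:

import Foundation
import SensitiveContentAnalysis

// Minimal sketch, assuming the documented iOS 17+ API surface.
func checkImage(at url: URL) async {
    let analyzer = SCSensitivityAnalyzer()

    // The framework only runs when the user (or a parent-managed
    // profile) has enabled Sensitive Content Warnings or
    // Communication Safety on the device.
    guard analyzer.analysisPolicy != .disabled else {
        print("Sensitive content analysis is disabled on this device.")
        return
    }

    do {
        let analysis = try await analyzer.analyzeImage(at: url)
        // The entire developer-visible verdict: one boolean.
        // No confidence score, no per-class probabilities,
        // no threshold to tune.
        print("Flagged as sensitive:", analysis.isSensitive)
    } catch {
        print("Analysis failed:", error)
    }
}

If that sketch matches the shipped API, the only signal exposed to the caller is isSensitive; whatever score or threshold the underlying classifier uses stays inside the system, which is exactly the design choice the post is questioning.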
Published at 2024-02-26 10:07:10

Event JSON
{
  "id": "c0ec6476254880d40a327438e02cac6a3bd99d9d078e0bc8ad24318bb090da1d",
  "pubkey": "574cc5bb79c479b132a05a9a813f760a49b1410e7a702a88ec0080de1f82d843",
  "created_at": 1708942030,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://someone.elses.computer/users/mikarv/statuses/111997224895426308",
      "activitypub"
    ]
  ],
  "content": "Wild to me that Apple's 'Sensitive Content Analysis Framework', which it uses in childrens' iMessage accounts and invites developers to use to filter explicit images/videos via an API, *does not even return an uncertainty metric to developers* in the API call — just a boolean true or false. Has anyone actually studied over-flagging and blocking, particularly for LGBTQ content or in terms of gender or ethnicity, in this system?",
  "sig": "7de83e8dfd6b3b269e1a4f5577b016f0f3a4264bc3ff7970169ec06f8aecb5ca826518a0b6039a07dd42f6309d7e7c16eed96970f7ed9b34962b1edae0088752"
}