Cory Doctorow on Nostr:
The measures for using humans to prevent algorithmic harms represent theories, and those theories are testable, and they have been tested, and they are wrong.
For example, people (including experts) are highly susceptible to "automation bias." They defer to automated systems, even when those systems produce outputs that conflict with their own expert experience and knowledge.
13/
Published at 2024-10-30 12:48:57

Event JSON
{
  "id": "d0a3fb1e2f2ce824b919ab99edc9c0875ca32a0535fe22059d678595547a5e8f",
  "pubkey": "21856daf84c2e4e505290eb25e3083b0545b8c03ea97b89831117cff09fadf0d",
  "created_at": 1730292537,
  "kind": 1,
  "tags": [
    [
      "e",
      "433a2cea6f8142e3a071c722e05f88fcd09956dc3112de91b80526bf772b8d13",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "content-warning",
      "Long thread/13"
    ],
    [
      "proxy",
      "https://mamot.fr/users/pluralistic/statuses/113396451724572233",
      "activitypub"
    ]
  ],
  "content": "The measures for using humans to prevent algorithmic harms represent theories, and those theories are testable, and they have been tested, and they are wrong.\n\nFor example, people (including experts) are highly susceptible to \"automation bias.\" They defer to automated systems, even when those systems produce outputs that conflict with their own expert experience and knowledge. \n\n13/",
  "sig": "4ffa272908e74294ad337bf9cdc2667e361d92cbc50286f8060c6fc794eb9ffc9e27ce6fa12d2bdc14871105d97269498eaebea55fc4440bce35a038f3272f11"
}
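For readers unfamiliar with the raw event above: under the Nostr protocol (NIP-01), the "id" field is the SHA-256 hash of a canonical serialization of the event's other fields, and "sig" is a signature over that id. The Python sketch below shows how the id could be recomputed; the function name nostr_event_id and the use of json.dumps for serialization are illustrative assumptions, and Python's default JSON escaping matches NIP-01 for typical content but is not guaranteed for every edge case.

import hashlib
import json

def nostr_event_id(event):
    # NIP-01: the id is the sha256 of the compact JSON serialization of
    # [0, pubkey, created_at, kind, tags, content], with no extra whitespace.
    payload = [
        0,
        event["pubkey"],
        event["created_at"],
        event["kind"],
        event["tags"],
        event["content"],
    ]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Feeding the event above into nostr_event_id should reproduce its "id" field,
# provided the JSON string escaping matches NIP-01's rules exactly.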