Cory Doctorow on Nostr
As Green writes, giving an AI high-stakes decisions, using humans in the loop to prevent harm, produces a "perverse effect": "alleviating scrutiny of government algorithms without actually addressing the underlying concerns." A human in the loop creates "a false sense of security," so algorithms get deployed for high-stakes tasks, and it shifts responsibility for algorithmic failures onto the human, creating what Dan Davies calls an "accountability sink":
https://profilebooks.com/work/the-unaccountability-machine/

20/
Published at 2024-10-30 12:51:03

Event JSON
{
  "id": "602dcd14ba73cf92bb6270e7349fd80ce4e90f0c4c81e8991a22f4541c54d7a9",
  "pubkey": "21856daf84c2e4e505290eb25e3083b0545b8c03ea97b89831117cff09fadf0d",
  "created_at": 1730292663,
  "kind": 1,
  "tags": [
    [
      "e",
      "33e259c03a83e6881cf019d19d04e32183cf30415b9113c4b5772deca2481ff4",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "content-warning",
      "Long thread/20"
    ],
    [
      "proxy",
      "https://mamot.fr/users/pluralistic/statuses/113396459991219486",
      "activitypub"
    ]
  ],
  "content": "As Green writes, giving an AIhigh-stakes decisions, using humans in the loop to prevent harm, produces a \"perverse effect\": \"alleviating scrutiny of government algorithms without actually addressing the underlying concerns.\" A human in the loop creates \"a false sense of security\" so algorithms are deployed for high-stakes tasks, and it shifts responsibility for algorithmic failures to the human, creating what Dan Davies calls an \"accountability sink\":\n\nhttps://profilebooks.com/work/the-unaccountability-machine/\n\n20/",
  "sig": "fe6f726088f34217b7c6a6b4564679751ae61814c16ae467a0e1677b33d8013c76625611af834e87e94a51b38f31b69cedc7bf26b9f01d35551c3e90a95e2367"
}
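
For readers wanting to check the event above: under Nostr's NIP-01, the "id" field is the SHA-256 hash of a compact JSON serialization of the array [0, pubkey, created_at, kind, tags, content], and "sig" is a BIP-340 Schnorr signature over that id by "pubkey". A minimal sketch in Python, assuming json.dumps's compact UTF-8 output matches NIP-01's canonical serialization (it does for typical events like this one):

import hashlib
import json

def nostr_event_id(event: dict) -> str:
    # NIP-01: id = sha256 of the compact JSON array
    # [0, pubkey, created_at, kind, tags, content], UTF-8 encoded,
    # with no whitespace between separators.
    payload = [
        0,
        event["pubkey"],
        event["created_at"],
        event["kind"],
        event["tags"],
        event["content"],
    ]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

Recomputing this over the event above should reproduce its "id"; verifying "sig" additionally requires a BIP-340 Schnorr library, which is out of scope for this sketch.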