Cory Doctorow /
npub1yxz…5qvv
2024-10-30 12:51:03
in reply to nevent1q…ajsu
As Green writes, giving an AI high-stakes decisions, using humans in the loop to prevent harm, produces a "perverse effect": "alleviating scrutiny of government algorithms without actually addressing the underlying concerns." A human in the loop creates "a false sense of security," so algorithms are deployed for high-stakes tasks, and it shifts responsibility for algorithmic failures to the human, creating what Dan Davies calls an "accountability sink":

https://profilebooks.com/work/the-unaccountability-machine/

20/