Cory Doctorow / npub1yxz…5qvv
2024-10-30 12:48:57
in reply to nevent1q…rc44

The measures for using humans to prevent algorithmic harms represent theories, and those theories are testable, and they have been tested, and they are wrong.

For example, people (including experts) are highly susceptible to "automation bias": they defer to automated systems even when those systems produce outputs that conflict with their own expert experience and knowledge.

13/
Author Public Key
npub1yxzkmtuyctjw2pffp6e9uvyrkp29hrqra2tm3xp3z9707z06muxsg75qvv