Kee Hinckley on Nostr:
It’s worth noting that the model of “Large Language Model produces article, and human then checks it for errors” suffers from exactly the same problem, further complicated by the fact that the kinds of errors these systems make are not the type you’d usually look for when reading something a human produced.
“if automation takes over too much of a task, the human becomes inattentive and may miss the critical part of the task they are needed for”
https://www.rollingstone.com/culture/culture-commentary/elon-musk-tesla-crash-1234930544/