Jeremiah Lee on Nostr
Paraphrased conclusion: Calling LLM inaccuracies ‘hallucinations’ feeds into hype about their abilities among technology cheerleaders. It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ isn’t just more accurate; it’s good science and technology communication.
https://link.springer.com/article/10.1007/s10676-024-09775-5

#AI #ML #longRead
Published at 2024-09-16 15:53:33

Event JSON
{
  "id": "804a352f097952abcb80db10bfc08b9c1f4f9d7fd09cae6f6441dff3798b761d",
  "pubkey": "bc496a7f99d69c2e9bf66fcb9e398bd7b1f7ae733ad003e6b8488debaf0a289a",
  "created_at": 1726502013,
  "kind": 1,
  "tags": [
    ["t", "ai"],
    ["t", "ml"],
    ["t", "longread"],
    ["proxy", "https://alpaca.gold/users/Jeremiah/statuses/113148035985072137", "activitypub"]
  ],
  "content": "Paraphrased conclusion: Calling LLM inaccuracies ‘hallucinations’ feeds in to hype about their abilities among technology cheerleaders. It can also lead to the wrong attitude towards the machine when it gets things right: the inaccuracies show that it is bullshitting, even when it’s right. Calling these inaccuracies ‘bullshit’ isn’t just more accurate; it’s good science and technology communication.\n\nhttps://link.springer.com/article/10.1007/s10676-024-09775-5\n\n#AI #ML #longRead",
  "sig": "f50d56ed42aff67e589e4cda27f779be5f8e56ef3969fc4faf00a8757d5fdfa40daa9905b911726ebce5af270dde4909445e0ff5fc66977fb01276b69ea3269f"
}