Jeff Jarvis on Nostr: "In this paper, we formalize the problem and show that it is impossible to eliminate ...
"In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs."
Hallucination is Inevitable: An Innate Limitation of Large Language Models
https://arxiv.org/pdf/2401.11817
Published at 2024-06-09 12:12:10
Event JSON
{
  "id": "ac820a286c93f9464255fce23500f77281151eb00e54287792cf539bd782f50b",
  "pubkey": "cc2f8deaa910ec55998b77a6fcae16d8494e02b69488bbfac9e68b5e2f96c178",
  "created_at": 1717935130,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://mastodon.social/@jeffjarvis/112586596687840835",
      "web"
    ],
    [
      "proxy",
      "https://mastodon.social/users/jeffjarvis/statuses/112586596687840835",
      "activitypub"
    ],
    [
      "L",
      "pink.momostr"
    ],
    [
      "l",
      "pink.momostr.activitypub:https://mastodon.social/users/jeffjarvis/statuses/112586596687840835",
      "pink.momostr"
    ]
  ],
  "content": "\"In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs.\"\nHallucination is Inevitable: An Innate Limitation of Large Language Models\nhttps://arxiv.org/pdf/2401.11817",
  "sig": "c79607fdc1339537bfa4df41851236060d2f1e4e5593c7bc88b1568a851e53bae123f9fbf2ebd5445eb12f731cb7399c658fa89c7037e4d789d1d8d2b3ca626f"
}
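
For reference, the "id" field above is not arbitrary: under NIP-01, a Nostr event id is the SHA-256 hash of a compact JSON serialization of [0, pubkey, created_at, kind, tags, content], and "sig" is a Schnorr signature over that id by the key in "pubkey". A minimal Python sketch of the id derivation follows; the helper name is illustrative, and json.dumps is only an approximation of NIP-01's exact escaping rules.

import hashlib
import json

def nostr_event_id(event: dict) -> str:
    # Serialize [0, pubkey, created_at, kind, tags, content] as compact JSON,
    # then hash it with SHA-256 (sketch of the NIP-01 procedure).
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),   # no extra whitespace between tokens
        ensure_ascii=False,      # keep UTF-8 characters unescaped
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

Applied to the event above, this should reproduce the value shown in its "id" field.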