Jeff Jarvis on Nostr:
"LLMs are trained on texts, not truths. Each text bears traces of its context, including its genre, voice, audience and the history and local politics of its place of origin.... Texts are ‘true’ only in the right context."
AI hallucinations are a feature of LLM design, not a bug
https://www.nature.com/articles/d41586-025-00662-7

Published at 2025-03-08 12:27:02

Event JSON
{
  "id": "48531c69b4784954d7618c3ded9fba9fda8b00c9e45910514c0f093e6dca0215",
  "pubkey": "9c199578de5aae3ae55e348d0fea56eab2f6db84bbcac64467642b7fa4314dbb",
  "created_at": 1741436822,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://mastodon.social/users/jeffjarvis/statuses/114126803622098217",
      "activitypub"
    ]
  ],
  "content": "\"LLMs are trained on texts, not truths. Each text bears traces of its context, including its genre, voice, audience and the history and local politics of its place of origin.... Texts are ‘true’ only in the right context.\"\nAI hallucinations are a feature of LLM design, not a bug https://www.nature.com/articles/d41586-025-00662-7",
  "sig": "b169976e3cf6947e2b2189450031d881a9bceabb3cac4b20ce1054db75759916d24d16cd004168c53f56dd6b90aa3e87e157b4242f9b92e2165d660e83a24822"
}
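The `id` field above is not arbitrary: under Nostr's NIP-01 specification it is the SHA-256 hash of a canonical serialization of the event's other fields. A minimal sketch of that computation is below; the function name `nostr_event_id` is my own, and NIP-01's exact character-escaping rules can differ slightly from `json.dumps` defaults, so treat this as an illustration rather than a verified implementation.

```python
import hashlib
import json

def nostr_event_id(event: dict) -> str:
    """Sketch of the NIP-01 event id: SHA-256 over the JSON array
    [0, pubkey, created_at, kind, tags, content], serialized with
    no extra whitespace and non-ASCII characters left unescaped."""
    payload = [
        0,
        event["pubkey"],
        event["created_at"],
        event["kind"],
        event["tags"],
        event["content"],
    ]
    # Compact separators and ensure_ascii=False approximate the
    # canonical form NIP-01 requires (assumption: escaping edge
    # cases are not exercised by this event's content).
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Toy usage with a made-up event (not the event above):
toy = {
    "pubkey": "ab" * 32,
    "created_at": 1741436822,
    "kind": 1,
    "tags": [],
    "content": "hello",
}
print(nostr_event_id(toy))  # 64 lowercase hex characters
```

The `sig` field is then a Schnorr signature over this id by the key in `pubkey`, which is what lets any relay verify the event without trusting its origin.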