Hidde on Nostr
“The problem here isn't that large language models hallucinate, lie, or misrepresent the world in some way. It's that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text.”
The paper details an interesting Frankfurt-inspired distinction between ‘soft bullshit’ and ‘hard bullshit’, reasoning that ChatGPT is definitely the former and in some specific cases the latter.
Published at 2024-06-29 21:05:05

Event JSON
{
  "id": "4fc8061755782cb02fb4cc12e03c30b3082ab19b6dde5b2c4a1c0e6a4fe1ae85",
  "pubkey": "8cc47ed4727396e063acfebc190c3d9c069edc7c8b18076a6d25ac816c04de8a",
  "created_at": 1719695105,
  "kind": 1,
  "tags": [
    [
      "e",
      "90b8f1ca296c54b6c0dda8f079df3babb7727e7603670309959c4dba4818e941",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://front-end.social/users/hdv/statuses/112701938463804937",
      "activitypub"
    ]
  ],
  "content": "“The problem here isn't that large language models hallucinate, lie, or misrepresent the world in some way. It's that they are not designed to represent the world at all; instead, they are designed to convey convincing lines of text.”\n\nThe paper details Frankfurt's interesting distinction between ‘soft bullshit’ and ‘hard bullshit’, reasoning that ChatGPT is definitely the former and in some specific cases the latter.",
  "sig": "3bbc41d90b61c3adb774315c3b03ce3cb30acb2f3ed5e3510d86f78d90892ea908a7effbcbb8d085aa5eeeb8bd5397ea7386775472a5eac52d76829ac39e373a"
}
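
For reference, the "id" above is not arbitrary: Nostr's NIP-01 defines it as the SHA-256 digest of the compact UTF-8 JSON serialization [0, pubkey, created_at, kind, tags, content]. Below is a minimal verification sketch in Python, using only the standard library, with the fields copied from the event above; the final comparison should hold provided the content string is reproduced byte-for-byte.

    import hashlib
    import json

    # Fields copied from the event JSON above.
    event = {
        "id": "4fc8061755782cb02fb4cc12e03c30b3082ab19b6dde5b2c4a1c0e6a4fe1ae85",
        "pubkey": "8cc47ed4727396e063acfebc190c3d9c069edc7c8b18076a6d25ac816c04de8a",
        "created_at": 1719695105,
        "kind": 1,
        "tags": [
            ["e", "90b8f1ca296c54b6c0dda8f079df3babb7727e7603670309959c4dba4818e941",
             "wss://relay.mostr.pub", "reply"],
            ["proxy", "https://front-end.social/users/hdv/statuses/112701938463804937",
             "activitypub"],
        ],
        "content": (
            "“The problem here isn't that large language models hallucinate, "
            "lie, or misrepresent the world in some way. It's that they are "
            "not designed to represent the world at all; instead, they are "
            "designed to convey convincing lines of text.”\n\n"
            "The paper details Frankfurt's interesting distinction between "
            "‘soft bullshit’ and ‘hard bullshit’, reasoning that ChatGPT is "
            "definitely the former and in some specific cases the latter."
        ),
    }

    # NIP-01: id = sha256 over the compact JSON array
    # [0, pubkey, created_at, kind, tags, content], UTF-8 encoded,
    # with no extra whitespace.
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    computed_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()

    print(computed_id)
    print(computed_id == event["id"])  # True if the content matches byte-for-byte

Checking the "sig" field would additionally require a secp256k1 Schnorr signature verification over the id, which needs a third-party library and is omitted here.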