Guy on Nostr
When a GPT model responds to a question and we humans interpret the reply as intelligent, it is we who hallucinate. I don't mean these models are not useful, powerful, or admirable. They are wonderful. I mean the model is not thinking or reasoning in the first place; therefore it is incapable of hallucinating. When the model responds with an unsatisfactory answer, it is doing nothing differently than when it responds with a satisfactory answer. We are humans. We think and reason. We have bodies and brains related to our minds. Granting the models the capacity to "hallucinate" gives the false impression that they had a right mind from which they deviated. So it is we humans who hallucinate, not the GPT models.
Published at 2024-01-19 15:10:33

Event JSON
{
  "id": "00000026cd329fc00312731e0fe539e8c51162727a8cee8832ac17780fc87964",
  "pubkey": "772f954551fd8660907f3d4ec2db65f573cfcbe6c8fa34e620fb7b705c93249a",
  "created_at": 1705677033,
  "kind": 1,
  "tags": [
    [
      "nonce",
      "7686143364046002127",
      "23"
    ]
  ],
  "content": "When a GPT model responds to a question and we humans interpret the reply as intelligent it is we who hallucinate. I don't mean these models are not useful, powerful, or admirable. They are wonderful. I mean the model is not thinking or reasoning in the first place therefore it is incapable of hallucinating. When the model reponds with an unsatisfactory answer it is doing nothing differently than when it responds with a satisfactory answer. We are humans. We think and reason. We have bodies and brains related to our minds. Granting the models the capacity to \"hallucinate\" gives a false impression that they had a right mind from which they deviated. So it is we humans who hallucinate and not the GPT models.",
  "sig": "26d35b6b3744c6cf8dc0f56744ca8530a91e77e31498268ba5ab0d37373aebe6c9395eb63470ca6a80ed7ddcd4172527e2954424d53ae1461c5137896d5768e4"
}
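
For readers curious how the id and the "nonce" tag above fit together: per NIP-01, a Nostr event's id is the SHA-256 hash of the compact JSON serialization of [0, pubkey, created_at, kind, tags, content], and per NIP-13 the "nonce" tag declares mined proof of work, its third element ("23") naming the target number of leading zero bits in the id. The id's 00000026... prefix corresponds to 26 leading zero bits, which meets that target. Below is a minimal Python sketch of the check; the content string must be copied byte-for-byte from the event (including the original "reponds" typo, which the rendered text above corrects) or the hash will not reproduce.

import hashlib
import json

# Fields copied verbatim from the event JSON above. The original
# "reponds" typo is preserved: changing any byte changes the id.
pubkey = "772f954551fd8660907f3d4ec2db65f573cfcbe6c8fa34e620fb7b705c93249a"
created_at = 1705677033
kind = 1
tags = [["nonce", "7686143364046002127", "23"]]
content = (
    "When a GPT model responds to a question and we humans interpret the "
    "reply as intelligent it is we who hallucinate. I don't mean these "
    "models are not useful, powerful, or admirable. They are wonderful. "
    "I mean the model is not thinking or reasoning in the first place "
    "therefore it is incapable of hallucinating. When the model reponds "
    "with an unsatisfactory answer it is doing nothing differently than "
    "when it responds with a satisfactory answer. We are humans. We think "
    "and reason. We have bodies and brains related to our minds. Granting "
    "the models the capacity to \"hallucinate\" gives a false impression "
    "that they had a right mind from which they deviated. So it is we "
    "humans who hallucinate and not the GPT models."
)

# NIP-01: id = sha256 over the compact JSON array serialization.
# For this all-ASCII content, json.dumps with compact separators
# matches the canonical NIP-01 serialization.
serialized = json.dumps(
    [0, pubkey, created_at, kind, tags, content],
    separators=(",", ":"),
    ensure_ascii=False,
)
event_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# NIP-13: count leading zero bits of the id to measure proof of work.
def leading_zero_bits(hex_id: str) -> int:
    bits = 0
    for ch in hex_id:
        v = int(ch, 16)
        if v == 0:
            bits += 4
        else:
            bits += 4 - v.bit_length()
            break
    return bits

print(event_id)                     # expect 00000026cd32...
print(leading_zero_bits(event_id))  # expect 26, meeting the declared target of 23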