What is Nostr?
Guy / npub1wuh…h6ym
2024-01-19 15:10:33

When a GPT model responds to a question and we humans interpret the reply as intelligent, it is we who hallucinate. I don't mean these models are not useful, powerful, or admirable. They are wonderful. I mean the model is not thinking or reasoning in the first place, and therefore it is incapable of hallucinating. When the model responds with an unsatisfactory answer, it is doing nothing different from what it does when it responds with a satisfactory answer. We are humans. We think and reason. We have bodies and brains related to our minds. Granting the models the capacity to "hallucinate" gives the false impression that they had a right mind from which to deviate. So it is we humans who hallucinate, and not the GPT models.
Author Public Key
npub1wuhe2323lkrxpyrl848v9km974euljlxerarfe3qldahqhynyjdqa0h6ym