Robin Adams /
npub13tl…hpw6
2025-03-04 07:01:05
in reply to nevent1q…txpl


nprofile…9gmq nprofile…e5g0 nprofile…5a92 Of course ChatGPT can make things up - that's what it was designed for. GPTs are interesting because they can generate new original text. They were never intended as information sources, just an experiment in language generation.

It doesn't just repeat its training set or recombine fragments. It samples from the probability distribution inferred from the training data set. It produces a sequence of words such that, based on the training data, those words have a high probability of occurring in that sequence. Its one job is "produce a new piece of text that looks like the text in your training data".
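As a rough illustration of that idea, here is a toy Python sketch. It is nothing like a real GPT (a real model learns a neural next-token distribution over a huge corpus), but it shows the same principle: infer a probability distribution from training text, then sample new text from it rather than looking anything up.

    import random
    from collections import defaultdict

    # Toy word-level bigram model: count which word follows which
    # word in the training text, then sample new text from those
    # inferred probabilities. Illustration only, not a real GPT.
    training_text = (
        "the cat sat on the mat . the dog sat on the rug . "
        "the cat chased the dog ."
    )

    counts = defaultdict(lambda: defaultdict(int))
    words = training_text.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1

    def sample_next(word):
        # Pick the next word according to how often each candidate
        # followed this word in the training data.
        candidates = list(counts[word].keys())
        weights = list(counts[word].values())
        return random.choices(candidates, weights=weights)[0]

    # Generate a fresh sequence: not a quote from the training set,
    # just words that are likely to follow one another.
    word = "the"
    output = [word]
    for _ in range(12):
        word = sample_next(word)
        output.append(word)
    print(" ".join(output))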

So we give it a question and ask "If this question occurred in your training data, give me a piece of text that has a high probability of following it." Turns out the result is very often an answer to the question.
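To see the "question as prefix" idea concretely, here is a sketch using the Hugging Face transformers library with the small GPT-2 model (it assumes transformers and torch are installed and will download the model; the prompt is just an example I made up). The question is only a prefix; the model samples a continuation that is statistically likely to follow it, and nothing checks whether that continuation is true.

    from transformers import pipeline, set_seed

    generator = pipeline("text-generation", model="gpt2")
    set_seed(42)  # make the sampled continuation reproducible

    prompt = "Q: Who wrote the novel Dracula?\nA:"
    result = generator(prompt, max_new_tokens=20, do_sample=True)
    print(result[0]["generated_text"])
    # The output usually reads like an answer, because answers are
    # what tend to follow questions in the training data, but there
    # is no step that verifies it.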

Often this gives you a correct answer to your question, especially if the question and a correct answer occurred many times in the training data. And sometimes it generates a wrong answer that looks very like a correct answer.

"Hallucination" is a misleading name because it makes it sound like a bug or glitch that could be eliminated, when it's a GPT's core function. It's doing exactly the same thing when it's hallucinating as when it's giving a correct answer.

See Xu, Jain, and Kankanhalli, "Hallucination is Inevitable: An Innate Limitation of Large Language Models": https://arxiv.org/abs/2401.11817