Tom Morris on Nostr:
When people point out the "hallucination" problem with LLMs, the response comes back: "it's not great for facts, but it's good at producing convincing prose".
I'm doubtful about that too.
I've tested a certain chatbot with a bunch of requests for persuasive language (academic essays, political speeches, sermons, work-related emails/letters) and it just churns out inanity ("innovative", "leverage", etc.) and cliché ("I have a passion for...").
Words disconnected from reality aren't very convincing.