Simon Willison on Nostr:
I agree that confabulation/hallucination/lying is a huge problem with LLMs like ChatGPT, Bard etc.
But I think a lot of people are underestimating how difficult it is to establish "truth" around most topics
High-quality news publications have journalists, editors and fact checkers with robust editorial processes... and errors still frequently slip through
Expecting an LLM to perfectly automate that fact-checking process just doesn't seem realistic to me