Mark Pesce on Nostr:
Detecting hallucinations in large language models using semantic entropy
"...By detecting when a prompt is likely to produce a confabulation, our method helps users understand when they must take extra care with LLMs and opens up new possibilities for using LLMs that are otherwise prevented by their unreliability..."
https://www.nature.com/articles/s41586-024-07421-0
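The paper's approach, roughly: sample several answers to the same prompt, cluster the answers that mean the same thing (the authors use bidirectional entailment judged by an NLI model), and compute the entropy over those semantic clusters; high entropy flags a likely confabulation. A minimal sketch in Python, where `naive_equivalent` is a toy stand-in for the paper's entailment check so the example runs without model weights:

```python
import math

def semantic_entropy(answers, equivalent):
    """Group sampled answers into semantic-equivalence clusters,
    then return the entropy of the cluster distribution.
    High entropy suggests the model is confabulating."""
    clusters = []  # each cluster is a list of mutually equivalent answers
    for a in answers:
        for c in clusters:
            if equivalent(a, c[0]):
                c.append(a)
                break
        else:
            clusters.append([a])
    n = len(answers)
    probs = [len(c) / n for c in clusters]
    return -sum(p * math.log(p) for p in probs)

# Placeholder equivalence check: the paper uses bidirectional
# entailment with an NLI model; normalized exact match stands in here.
def naive_equivalent(a, b):
    return a.strip(" .").lower() == b.strip(" .").lower()

# Example: five sampled answers to the same prompt.
samples = ["Paris", "paris", "Paris.", "Lyon", "Paris"]
print(semantic_entropy(samples, naive_equivalent))  # ~0.50 (two clusters, 4:1)
```

A uniform spread of meanings (every answer in its own cluster) maximizes the entropy, while unanimous agreement drives it to zero, which is what makes the score usable as a per-prompt reliability signal.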
"...By detecting when a prompt is likely to produce a confabulation, our method helps users understand when they must take extra care with LLMs and opens up new possibilities for using LLMs that are otherwise prevented by their unreliability..."
https://www.nature.com/articles/s41586-024-07421-0