Some really interesting insights in this paper looking at "hallucination" in LLMs:
Creativity and hallucination are really the same kind of emergent behavior, named differently only according to whether the output is desirable or not
https://medium.com/@sayandev.mukherjee/hallucinations-and-emergence-in-large-language-models-b54952a17972
Both creativity and hallucination are caused by training on such vast amounts of data that long-distance connections form in the underlying "concept space", directly linking every concept to every other concept
They can be avoided by limiting such long-distance connections during LLM training, so that each concept is linked only to concepts with a reasonably close semantic connection
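The paper doesn't spell out a concrete mechanism, but one way to picture "limiting long-distance connections" is as a similarity-thresholded mask over the concept graph. Here's a minimal Python sketch with made-up embeddings and an illustrative threshold (neither comes from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a learned concept space: one embedding per concept.
# (Hypothetical data; the paper does not specify a procedure.)
num_concepts, dim = 8, 16
embeddings = rng.normal(size=(num_concepts, dim))

# Cosine similarity between every pair of concepts.
unit = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
similarity = unit @ unit.T

# Keep only "short-distance" links: pairs whose similarity clears a
# threshold. Everything else is pruned, so no concept ends up directly
# linked to every other concept.
threshold = 0.2  # illustrative value, not from the paper
mask = similarity >= threshold
np.fill_diagonal(mask, True)  # a concept stays linked to itself

print("links kept per concept:", mask.sum(axis=1))
```

In principle a mask like this could gate attention scores during training (e.g., setting masked pairs to -inf before the softmax) to enforce locality in the concept graph, though that is my extrapolation, not the paper's proposal.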