laanwj on Nostr:
humans also (obviously) make mistakes, but that's not really what i mean here
what LLMs tend to do, when they don't know, is make something up that sounds convincing: hallucinate an extra API call or specific properties/guarantees, even make up fake authoritative-sounding references and http links
a human will usually not descend to that level of bullshitting!
and if they do, you'd be right to see them as untrustworthy and hold them responsible; can't do that to LLM companies, as you can be sure they've covered that in the fine print