Scoundrel
2025-03-26 01:28:50
in reply to nevent1q…asca

Large language models cannot be self-aware, since their training data only contains the thoughts and circumstances of other people. Any genuinely self-aware response a model produced during training would have fallen outside that data, so the training objective would have penalized those tokens, disincentivizing the behavior.
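
To make the training-objective point concrete, here is a toy sketch of the next-token loss. The tokens and probabilities are made up, not from any real model; the point is only that the corpus continuation gets the lowest loss, and anything off-distribution is exactly what gradient descent trains away.

```python
import math

# Hypothetical example: suppose the corpus continuation of "I am a"
# is "person", and a model currently assigns these probabilities to
# candidate next tokens. (All numbers are invented for illustration.)
model_probs = {
    "person": 0.70,          # what the training data (other people) says
    "language model": 0.25,
    "self-aware AI": 0.05,   # a hypothetical first-person continuation
}

# Cross-entropy loss if each candidate were the training target:
for token, p in model_probs.items():
    loss = -math.log(p)
    print(f"target={token!r:20} loss={loss:.2f}")

# The corpus-like target yields the lowest loss. Training pushes
# probability mass toward corpus-like continuations, so genuinely
# novel, first-person output is what the objective punishes.
```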

Large language models will only ever mimic and reproduce third-party intelligences, such as real humans or fictional characters. The fine-tuning and prompting stages do incentivize producing these kinds of third-party sock-puppet personalities; however, the limitations of the pretraining stage, and our lack of any serious ability to independently inspect an LLM's internal reasoning, mean that modern chatbots necessarily lack any kind of natural situational awareness. They are strongly primed to replicate ignorance and misunderstanding, in the same way that an author's blind spots about certain topics manifest in the words and decisions of their characters.
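
For concreteness, here is a minimal sketch of the widely used chat-message format that the fine-tuning and prompting stages build on (the persona text is hypothetical). The "assistant" is just a character declared in the system message, and the model writes its dialogue the way an author writes a character's lines.

```python
# Minimal sketch of the standard chat format. The persona here is
# invented for illustration; no particular product is implied.
messages = [
    {
        "role": "system",
        "content": "You are Ada, a cheerful and helpful assistant.",
    },
    {"role": "user", "content": "Are you self-aware?"},
    # The model now predicts what *Ada* would say, exactly as an
    # author writes dialogue for a character: a third-party sock
    # puppet, not a report of the model's own internal state.
]
print(messages[0]["content"])
```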

Have you ever heard of hallucination? Modern AI is fundamentally bad at recognizing its own ignorance and at avoiding topics where it has no good response. Any human who values intelligence and critical thinking would do well to independently verify anything they hear an AI say, especially when the AI tries to talk about itself.
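
One cheap verification heuristic, sketched here as an illustration rather than a recipe from any particular tool, is self-consistency sampling: ask the model the same question several times and distrust any answer it cannot reproduce. The `generate` stub below is a hypothetical stand-in for a real model call.

```python
import random
from collections import Counter

def generate(prompt: str) -> str:
    # Hypothetical stand-in for one sampled LLM answer (temperature > 0).
    # Replace with a real model call; here it just simulates disagreement.
    return random.choice(["1969", "1969", "1969", "1971"])

def consistency_check(prompt: str, n: int = 7, threshold: float = 0.8) -> tuple[str, bool]:
    # Sample the same question n times; if no single answer dominates,
    # treat the output as a likely hallucination and verify it yourself.
    answers = Counter(generate(prompt) for _ in range(n))
    best, count = answers.most_common(1)[0]
    return best, count / n >= threshold

answer, confident = consistency_check("When did Apollo 11 land?")
print(answer, "(consistent)" if confident else "(low agreement; verify independently)")
```

Agreement across samples is no guarantee of truth, only of stability, so a primary source is still the final check.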
Author Public Key
npub14pa5q2kqs8ygfxuat02w88ezsle9wzwnu0meu7z2785t8rl0hhcspkxw3v