What is Nostr?
TerrestrialOrigin
npub16ga…gl4y
2025-02-24 13:19:49

TerrestrialOrigin on Nostr: And that's what we call "user error", "lack of understanding AI", and "sensationalist ...

And that's what we call "user error", "lack of understanding AI", and "sensationalist reporting". AI did not break its own rules. It did exactly as instructed and was trained to do. I'm pretty sure that "playing chess by the rules" is nowhere in its ethical constaints, and the prompt told it that it's primary goal was to win and didn't specify that it couldn't cheat at a game. Now if it wasn't a game but something involving actual harm, that would be a different case, because most LLMs ARE programmed with constraints against causing harm.

We really need to quit blaming AI for our own bad prompting and lack of understanding of how it's programmed.


More Research Showing AI Breaking the Rules

These researchers had LLMs play chess against better opponents. When they couldn’t win, they sometimes resorted to cheating.
Researchers gave the ... https://www.schneier.com/blog/archives/2025/02/more-research-showing-ai-breaking-the-rules.html

#academicpapers #Uncategorized #cheating #chess #games #LLM #AI
Author Public Key
npub16ga0ftymhu7fuhz5n8sn9hf9yft42x5q2x5wg986t8m5z7ls0tds86gl4y