What is Nostr?
j-r conlin /
npub14w5…rkhr
2024-07-26 02:59:21

j-r conlin on Nostr: The fact that OpenAI's devs are disabling the "Ignore previous instructions…" hack ...

The fact that OpenAI's devs are disabling the "Ignore previous instructions…" hack for "safety reasons" tells me that they have no idea who's safety they need to preserve.

LLMs are being used to scam and fool people constantly, far more than being "helpful assistants".

For what it's worth, I have a tell word that I use to determine an LLM. If the devs actually gave a shit about user safety, they'd build in easy tells for folk.

(Hell, Bladerunner and West World were smart enough to do it.)
Author Public Key
npub14w5mxwzhk6hcpf4v466t95kk2hf4kreamrkkg5yw6gfe8murue0q3erkhr