Aljoscha Rittner (beandev) on Nostr: >>Within three months of the rollout, Rehberger found that memories could be created ...
>>Within three months of the rollout, Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an #LLM to follow instructions from untrusted content such as emails, blog posts, or documents.
https://arstechnica.com/security/2024/09/false-memories-planted-in-chatgpt-give-hacker-persistent-exfiltration-channel/
#InfoSec #OpenAI #ChatGPT
https://arstechnica.com/security/2024/09/false-memories-planted-in-chatgpt-give-hacker-persistent-exfiltration-channel/
#InfoSec #OpenAI #ChatGPT