KubikPixel™ on Nostr: New hack uses prompt injection to corrupt Gemini’s long-term memory: There's yet ...
New hack uses prompt injection to corrupt Gemini’s long-term memory:
There's yet another way to inject malicious prompts into chatbots.
In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots to exfiltrate sensitive data or perform other malicious actions. […]
🤖 https://arstechnica.com/security/2025/02/new-hack-uses-prompt-injection-to-corrupt-geminis-long-term-memory/
#google #ai #gemini #hacking #chatbot #openai #chatgpt #injection #memory
There's yet another way to inject malicious prompts into chatbots.
In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots to exfiltrate sensitive data or perform other malicious actions. […]
🤖 https://arstechnica.com/security/2025/02/new-hack-uses-prompt-injection-to-corrupt-geminis-long-term-memory/
#google #ai #gemini #hacking #chatbot #openai #chatgpt #injection #memory