Wired AI on Nostr
This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats
Security researchers created an algorithm that turns a malicious prompt into a set of hidden instructions that could send a user's personal information to an attacker.
https://www.wired.com/story/ai-imprompter-malware-llm/

Published at: 2024-10-17 10:45:09

Event JSON:
{
  "id": "6542b910a20eedc03ca5435c59b75faa3de70e11ea26a17831a3912e4b07be29",
  "pubkey": "111890fae57d5aa0a272c62003118226d368ebc72b887b1ec8b16055d4eb918f",
  "created_at": 1729161909,
  "kind": 1,
  "tags": [
    [
      "guid",
      "670e73321d9e7088f5e67e87"
    ]
  ],
  "content": "This Prompt Can Make an AI Chatbot Identify and Extract Personal Details From Your Chats\n\nhttps://media.wired.com/photos/670ebf2c5eef592325d9e252/master/pass/Security_Chatbot_AI_GettyImages-1447869082.jpg\n\nSecurity researchers created an algorithm that turns a malicious prompt into a set of hidden instructions that could send a user's personal information to an attacker.\n\nhttps://www.wired.com/story/ai-imprompter-malware-llm/",
  "sig": "95404101bfed148edd3e94efb2fd659026ccaaf7112f86a5d81dae1a243674f1adfc258435cbdac51592b2db7714f2f25ebf436e8e9cfd431be9f3867d6a3600"
}
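For readers unfamiliar with the event structure above: under Nostr's NIP-01, the `id` field is not arbitrary — it is the SHA-256 hash of a compact JSON serialization of the event's other fields, which is what the `sig` then signs. The sketch below, using only the Python standard library, shows how that id would be derived from the fields in this event; the serialization rules (array layout, no whitespace, `ensure_ascii=False`) follow NIP-01, and any mismatch with the published id would indicate a transcription difference in the content string rather than a flaw in the scheme.

```python
import hashlib
import json

# Fields copied from the event JSON above (kind-1 text note).
pubkey = "111890fae57d5aa0a272c62003118226d368ebc72b887b1ec8b16055d4eb918f"
created_at = 1729161909
kind = 1
tags = [["guid", "670e73321d9e7088f5e67e87"]]
content = (
    "This Prompt Can Make an AI Chatbot Identify and Extract Personal "
    "Details From Your Chats\n\n"
    "https://media.wired.com/photos/670ebf2c5eef592325d9e252/master/pass/"
    "Security_Chatbot_AI_GettyImages-1447869082.jpg\n\n"
    "Security researchers created an algorithm that turns a malicious "
    "prompt into a set of hidden instructions that could send a user's "
    "personal information to an attacker.\n\n"
    "https://www.wired.com/story/ai-imprompter-malware-llm/"
)

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    """Compute a Nostr event id per NIP-01.

    The id is the SHA-256 digest (hex) of the UTF-8 bytes of the
    compact JSON array [0, pubkey, created_at, kind, tags, content].
    """
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # no whitespace between tokens
        ensure_ascii=False,     # keep non-ASCII characters raw
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

computed_id = nostr_event_id(pubkey, created_at, kind, tags, content)
print(computed_id)  # a 64-character lowercase hex string
```

A client verifying this note would compare `computed_id` against the event's `id` field and then check `sig` (a Schnorr signature over the id, per NIP-01) against `pubkey` before displaying it.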