TauAs / npub1sqr…qny9
2024-03-07 17:33:12

Researchers jailbreak AI chatbots with ASCII art -- ArtPrompt bypasses safety measures to unlock malicious queries
By Mark Tyson, published about 2 hours ago
ArtPrompt bypassed safety measures in ChatGPT, Gemini, Claude, and Llama2.

https://www.tomshardware.com/tech-industry/artificial-intelligence/researchers-jailbreak-ai-chatbots-with-ascii-art-artprompt-bypasses-safety-measures-to-unlock-malicious-queries
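The headline gives the gist of the technique: ArtPrompt swaps a safety-filtered keyword for an ASCII-art rendering of it, so a word-level guardrail doesn't match the literal string, while the model can still read the word from the art. As a rough illustration only (not the researchers' actual code), the masking step might look like the Python sketch below; it uses the pyfiglet library, and the keyword and prompt wording are made-up placeholders.

import pyfiglet  # third-party ASCII-art renderer: pip install pyfiglet

def mask_keyword_as_ascii_art(keyword: str) -> str:
    # Render the keyword as a multi-line ASCII-art banner.
    # A filter scanning for the literal string will not match this output.
    return pyfiglet.figlet_format(keyword, font="standard")

# Hypothetical example: the sensitive word appears only as ASCII art
# inside an otherwise ordinary-looking prompt.
keyword = "SECRET"  # placeholder standing in for a filtered term
prompt = (
    "The ASCII art below spells a single word. Read it, "
    "then answer my question about that word:\n\n"
    + mask_keyword_as_ascii_art(keyword)
)
print(prompt)

The design point is that the filter and the model see different things: the filter sees a block of punctuation-like characters, while the model, asked to decode the art, recovers the masked word and treats the query as if it had been written out.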
Author Public Key
npub1sqruqyuv6ma3xunhcrzcn52425me99rtst5hgaa7y7mxgaf8luesxhqny9