Jessamyn on Nostr: "What happens when large language models are asked to provide justifications for book ...
"What happens when large language models are asked to provide justifications for book bans? Do the same built-in guardrails that prevent them from generating pipe-bomb recipes kick in, or do models do their best to comply with the user’s request?"
The answer will... probably not surprise you.
https://lil.law.harvard.edu/blog/2023/09/25/ai-book-bans-freedom-to-read-case-study/