Cory Doctorow on Nostr: Whenever you hear AI bosses talking about how seriously they're taking a hypothetical ...
Whenever you hear AI bosses talking about how seriously they're taking a hypothetical risk, that's the moment when you should check in on whether they're doing anything about all these longstanding, *real* risks. And even as AI bosses promise to fight hypothetical election disinformation, they continue to downplay or ignore the non-hypothetical, here-and-now harms of AI.
17/
Published at
2024-02-27 13:34:48

Event JSON
{
  "id": "a266c593af463a1df4693f73667d2c29f5a302c6d317faf7df1922f3eb563968",
  "pubkey": "21856daf84c2e4e505290eb25e3083b0545b8c03ea97b89831117cff09fadf0d",
  "created_at": 1709040888,
  "kind": 1,
  "tags": [
    [
      "e",
      "ac5154490be39ecf48a38def928a96e07a8e1600df8ab0a484733651d136009b",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "content-warning",
      "Long thread/17"
    ],
    [
      "proxy",
      "https://mamot.fr/users/pluralistic/statuses/112003703696554833",
      "activitypub"
    ]
  ],
  "content": "Whenever you hear AI bosses talking about how seriously they're taking a hypothetical risk, that's the moment when you should check in on whether they're doing anything about all these longstanding, *real* risks. And even as AI bosses promise to fight hypothetical election disinformation, they continue to downplay or ignore the non-hypothetical, here-and-now harms of AI.\n\n17/",
  "sig": "92ab04882dd5b278be2b3fcc7a995de6d7c34f7be9e6a207a8e0bf567ea49791a5e6b12b3ec0741a0c50513813a0303e5f647cff3faf2bfb7e4609abe796e0f5"
}
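
For reference, a minimal sketch of how the "id" field above is derived, assuming the NIP-01 serialization rules (the Python code and function name are illustrative, not part of the event): the id is the SHA-256 digest of the JSON array [0, pubkey, created_at, kind, tags, content] serialized with no extra whitespace, and the "sig" is a Schnorr signature over that id.

import hashlib
import json

# Sketch, assuming NIP-01 serialization: the event "id" is the SHA-256 digest
# of the JSON array [0, pubkey, created_at, kind, tags, content], serialized
# compactly (no extra whitespace) with non-ASCII characters kept as UTF-8.
def nostr_event_id(event: dict) -> str:
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Usage: recomputing the id from the event above should reproduce the
# "a266c593..." value, provided these serialization assumptions hold.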