Curtis "Ovid" Poe on Nostr: But if AI isn't aligned with our values, what values should it be aligned with? I ...
But if AI isn't aligned with our values, what values should it be aligned with? I imagine that most people wouldn't be happy with an AI aligned with the values of the Iranian or North Korean regimes. We're already seeing this in Chinese AI when they're asked about Tiananmen Square.[6]
I don't think we can align AI with human values for obvious reasons, but this implies that if we want "safe" AI, we have to agree upon a set of values to align it on. 3/6
Published at 2024-12-29 13:22:47

Event JSON
{
  "id": "95a4ea9ca141a9180b981c8e6a10082f5b9edd806cc85c99d6577644338a2b77",
  "pubkey": "a7426b90eef0ab497ad455a51e4896a6ad0a3ddee0f8a91e488341cd4841e4f3",
  "created_at": 1735478567,
  "kind": 1,
  "tags": [
    [
      "e",
      "6beb38e464d494bfca4128ee66eeddd55501f58185df361b201b9deef5b254f1",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://fosstodon.org/users/ovid/statuses/113736323411954731",
      "activitypub"
    ]
  ],
  "content": "But if AI isn't aligned with our values, what values should it be aligned with? I imagine that most people wouldn't be happy with an AI aligned with the values of the Iranian or North Korean regimes. We're already seeing this in Chinese AI when they're asked about Tiananmen Square.[6]\n\nI don't think we can align AI with human values for obvious reasons, but this implies that if we want \"safe\" AI, we have to agree upon a set of values to align it on. 3/6",
  "sig": "50ec62b1cf3dc3fb7a6685d43ecf735e9e41df83e37822886ef29dd42d00063752a5e4d2a5236e07a62ac226a5ffe79268ccdf1308a0286e481879bd2453d42d"
}
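For readers unfamiliar with the raw event shown above: under the Nostr NIP-01 specification, the "id" field is not arbitrary but is derived from the other fields, as the SHA-256 hash of the serialized array [0, pubkey, created_at, kind, tags, content] with no extra whitespace. Below is a minimal Python sketch of that derivation; the function name compute_event_id is illustrative and not part of the page or the Nostr spec.

import hashlib
import json

def compute_event_id(event: dict) -> str:
    # NIP-01 canonical serialization: a JSON array of
    # [0, pubkey, created_at, kind, tags, content], UTF-8 encoded,
    # with no whitespace between separators.
    serialized = json.dumps(
        [
            0,
            event["pubkey"],
            event["created_at"],
            event["kind"],
            event["tags"],
            event["content"],
        ],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    # The event id is the hex-encoded SHA-256 digest of that serialization.
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

If the event JSON above is loaded into a dict named event, compute_event_id(event) should reproduce its "id" value; the "sig" field is a Schnorr signature over that id made with the key behind "pubkey".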