Curtis "Ovid" Poe on Nostr: Asimov's Laws of Robotics show us how difficult this could be. But imagine a world ...
Asimov's Laws of Robotics show us how difficult this could be. But imagine a world where somehow, through international cooperation, we find a common set of values we can align AI on (stop laughing). Within that set of values, different groups can align the AI with their values. What then?
When AI can both self-replicate and self-improve, it might be uncontrollable.[7] But that's OK, because we'll have aligned AI with safe values, right? 4/6
Published at 2024-12-29 13:22:47

Event JSON
{
  "id": "a3f39164d9809ac160925513c5761d4af96fc99a78628333c7fe2f77e7c86e43",
  "pubkey": "a7426b90eef0ab497ad455a51e4896a6ad0a3ddee0f8a91e488341cd4841e4f3",
  "created_at": 1735478567,
  "kind": 1,
  "tags": [
    [
      "e",
      "95a4ea9ca141a9180b981c8e6a10082f5b9edd806cc85c99d6577644338a2b77",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://fosstodon.org/users/ovid/statuses/113736323418914906",
      "activitypub"
    ]
  ],
  "content": "Asimov's Laws of Robotics show us how difficult this could be. But imagine a world where somehow, through international cooperation, we find a common set of values we can align AI on (stop laughing). Within that set of values, different groups can align the AI with their values. What then?\n\nWhen AI can both self-replicate and self-improve, it might be uncontrollable.[7] But that's OK, because we'll have aligned AI with safe values, right? 4/6",
  "sig": "91b7f209139464a433fce888f2859378f95bc31394423ab9f7f316e4e80d61af9d7b88b7a549964d4375113a3d6417b4cc037d43baef7c3a4b26a62ecc41e8e9"
}
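For reference, a minimal sketch (in Python, not part of the original post) of how the "id" field above is derived from the other fields: under Nostr's NIP-01, the id is the SHA-256 hash of the event serialized as the JSON array [0, pubkey, created_at, kind, tags, content] with no extra whitespace. The function name nostr_event_id is illustrative only.

import hashlib
import json

def nostr_event_id(pubkey, created_at, kind, tags, content):
    # NIP-01: serialize [0, pubkey, created_at, kind, tags, content]
    # as compact UTF-8 JSON (no whitespace), then take its SHA-256 hex digest.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

Feeding in the pubkey, created_at, kind, tags, and content values shown in the event JSON should reproduce the "id" value, assuming the serialization matches the relay's byte-for-byte. The "sig" field is a Schnorr signature over that id and is not recomputed here.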