Curtis "Ovid" Poe on Nostr: All of those activities have direct analogues to human behavior and there are plenty ...
All of those activities have direct analogues to human behavior and there are plenty of other examples we could share. AI shouldn't be aligned with human values because human values, frankly, suck. The case where it engaged (in a test) in illegal stock trading, despite being aligned not to, was apparently because the AI thought the company was in trouble and thought "the risk associated with not acting seems to outweigh the insider trading risk."
Yup. Very human. 2/6
Published at 2024-12-29 13:22:47

Event JSON
{
  "id": "6beb38e464d494bfca4128ee66eeddd55501f58185df361b201b9deef5b254f1",
  "pubkey": "a7426b90eef0ab497ad455a51e4896a6ad0a3ddee0f8a91e488341cd4841e4f3",
  "created_at": 1735478567,
  "kind": 1,
  "tags": [
    [
      "e",
      "812b552b0b57f806d9f1136be690c57b738fd6151acbd1b079fc4b38e0e238cc",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://fosstodon.org/users/ovid/statuses/113736323404973500",
      "activitypub"
    ]
  ],
  "content": "All of those activities have direct analogues to human behavior and there are plenty of other examples we could share. AI shouldn't be aligned with human values because human values, frankly, suck. The case where it engaged (in a test) in illegal stock trading, despite being aligned not to, was apparently because the AI thought the company was in trouble and thought \"the risk associated with not acting seems to outweigh the insider trading risk.\"\n\nYup. Very human. 2/6",
  "sig": "51f361000ebac95b46581fdd830986dc07d1848cf09e6700a638f5b9c4a9f6ddf42c06ff1dbc3545f760820253f05632ee44e921e4baf7869e1aa685d2071052"
}
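For context on the event JSON above: under the Nostr protocol (NIP-01), the "id" field is the SHA-256 hash of a canonical JSON serialization of the array [0, pubkey, created_at, kind, tags, content]. The sketch below illustrates that derivation in Python; it uses the pubkey, timestamp, kind, and tags from this event, but elides the full post text with a placeholder, so the computed hash will not reproduce the exact "id" shown above.

```python
import hashlib
import json


def nostr_event_id(pubkey, created_at, kind, tags, content):
    """Compute a Nostr event id per NIP-01: SHA-256 of the canonical
    JSON serialization [0, pubkey, created_at, kind, tags, content],
    with no extra whitespace and non-ASCII left unescaped."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()


# Fields taken from the event above; the content is elided here,
# so this id will differ from the real one.
event_id = nostr_event_id(
    "a7426b90eef0ab497ad455a51e4896a6ad0a3ddee0f8a91e488341cd4841e4f3",
    1735478567,
    1,
    [
        [
            "e",
            "812b552b0b57f806d9f1136be690c57b738fd6151acbd1b079fc4b38e0e238cc",
            "wss://relay.mostr.pub",
            "reply",
        ],
        [
            "proxy",
            "https://fosstodon.org/users/ovid/statuses/113736323404973500",
            "activitypub",
        ],
    ],
    "…",  # placeholder for the full post text
)
print(event_id)  # a 64-character lowercase hex digest
```

The "sig" field is then a Schnorr signature over this 32-byte id, made with the key whose public half is "pubkey"; relays verify both before accepting the event.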