Cory Doctorow on Nostr:
Second, policies specify that humans can exercise *discretion* when they override the AI. They aren't just there to catch instances in which the AI misinterprets a rule, but rather to apply human judgment to the rules' applications.
Next, policies require human oversight to be "meaningful" - to be more than a rubber stamp. For high-stakes decisions, a human has to do a thorough review of the AI's inputs and output before greenlighting it.
11/
Published at 2024-10-30 12:48:24

Event JSON
{
  "id": "54865fb9046ca669d431a69a5e8986b709f2f68e9156221ae23a25d10d6bc229",
  "pubkey": "21856daf84c2e4e505290eb25e3083b0545b8c03ea97b89831117cff09fadf0d",
  "created_at": 1730292504,
  "kind": 1,
  "tags": [
    [
      "e",
      "88d24fbca6acc096db48daaba7989460236e85c1b7ec3ab020308b9a4208c0a8",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "content-warning",
      "Long thread/11"
    ],
    [
      "proxy",
      "https://mamot.fr/users/pluralistic/statuses/113396449590212314",
      "activitypub"
    ]
  ],
  "content": "Second, policies specify that humans can exercise *discretion* when they override the AI. They aren't just there to catch instances in which the AI misinterprets a rule, but rather to apply human judgment to the rules' applications.\n\nNext, policies require human oversight to be \"meaningful\" - to be more than a rubber stamp. For high-stakes decisions, a human has to do a thorough review of the AI's inputs and output before greenlighting it.\n\n11/",
  "sig": "6aab4e51431664acc795fd51a4ad68b5d8533fc7601d2f16c854bb57cceba6430337230bebbfd5183c387c48c1bf600d3bbf44f0c90040f149d45dbc80cfb6b9"
}
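The `id` field in the event above is not arbitrary: under Nostr's NIP-01 specification, it is the SHA-256 hash of a canonical JSON serialization of `[0, pubkey, created_at, kind, tags, content]`, with no whitespace between tokens. A minimal Python sketch of that derivation (an approximation: `json.dumps` with these options closely matches, but is not guaranteed identical to, NIP-01's exact escaping rules):

```python
import hashlib
import json

def nostr_event_id(pubkey, created_at, kind, tags, content):
    """Compute a Nostr event id per NIP-01: the SHA-256 hex digest of
    the canonical serialization [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),  # no whitespace between tokens
        ensure_ascii=False,     # keep UTF-8 characters unescaped
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()
```

Relays and clients recompute this hash to verify that an event's `id` matches its body before checking the `sig` (a Schnorr signature over the id) against the `pubkey`.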