buttercat1791 on Nostr:
I've started reading through the article. I'll post thoughts as I go.
First: part of the reason Google retired its "don't be evil" slogan is that it never defined "evil." For almost any course of action, you can find someone who considers it evil, and that leads to moral paralysis.
The subsequent discussion of making AI "supervisable" runs into the same problem. Such an AI can try to respond to the values of its users, but the value systems it would have to balance are virtually guaranteed to contradict one another.
In short, we can't have ethical AI unless we choose an ethical system for it to adhere to, and no one wants to make that decision.