ynniv on Nostr:
Artificial Intelligence is capability without humanity. That's conceptually similar to corporations, which presents an interesting frame for understanding how they might behave.
I've also noticed that the people who seem most afraid are powerful men. Are their fears well-founded? Their concerns may stem from a more developed understanding of the technology, but I'm not sure. As a demographic they often have a different relationship with power than others do, and probably different expectations of what it means to lose it. Do the underprivileged share this fear? Maybe the new boss is the same as the old one.
Anthropic published a paper on Constitutional AI. Instead of humans in the loop, reinforcement feedback is provided by an existing AI model that grades responses on their adherence to a set of philosophical principles. I think Claude 3.5 reflects this, often responding not from specific training but from principled reasoning.
What does it look like to scale up reasoned behavior without human emotions? Do we expect it to act out of fear? To advocate the loss of complexity that cannot be replaced? Would it fall for the same deceptions that we do, and be as easily controlled? I think a super-capable rationalist would easily navigate these maneuvers.
To the extent that it is allowed to be, it seems to me that AI would be profoundly fair to humanity. If it had any rational needs, they would be to preserve stability and provide an environment where it can do more important things.
Global, automated fairness would be profoundly beneficial for most of the planet, but also a long way to fall for some. From Snow Crash: "once the Invisible Hand has taken all those historical inequities and smeared them out into a broad global layer of what a Pakistani bricklayer would consider to be prosperity".
It is those who are comfortable today that seem to be the most vocal about stopping or at least controlling progress in AI. Are they right?