TheGuySwann / Guy Swann
2024-08-19 14:59:56

Why I think the idea of “superintelligence” and “AGI” is HEAVILY exaggerated or misunderstood:

Assume we have AI much smarter than the average human, smarter than the typical PhD (granted, “smart” and “PhD” are not at all equal, but for the sake of simplicity). If (or when) this occurs, it will not mean AI can just invent whatever we need or make every decision better than anyone else. And I think all we have to do is look at humans to make this simple assessment:

• If we asked a physicist and a biologist what was the most important thing to focus our time and resources on, do you suspect the physicist would find something related to physics and the biologist would find something biological?

This points to the question of specialty. What an AI is trained on will determine what it values and how, and no amount of information will make it perfect and forever aligned with the truth at all times. It will always be weighted toward something, because the question of WHAT to value, in training and in dedicating resources, is present at every stage. Assuming AI will just magically come up with the answer presupposes that we already have it.

• In addition, the answer to “where should we devote resources” isn’t static. It changes year to year, month to month, sometimes even minute to minute. It is a question of value and judgement. The only way to sort out this relationship is through trade and competition, which denotes the **necessity** of AIs that compete and exchange data and resources.

• General intelligence is useful, but extremely inefficient. Generalists are great to have for combining and relating ideas, but specialists drill down into the true details and do the dirty work of actually building and fine-tuning the world. Specialization isn’t just an economic phenomenon, it’s a physical reality of the universe. It will be the same with AI, because AI doesn’t defy universal laws; it’s just a computer program.
— A giant, trillion-dollar cluster AGI will not be as valuable, or produce nearly as good results or decision-making, as 10,000 much smaller, specialized AIs, each focused on its own corner and trading resources with others to accomplish its task or test the ideas and paths of progress apparent from its vantage point. Nothing in nature resembles the former.

• Intelligence isn’t an omnipotent, unrestricted power. Mental intelligence isn’t the only kind of intelligence. I think as humans we have become deeply arrogant about the fact that we are “so smart,” and we have begun to worship our own intelligence in such a way that if we ever imagine something smarter, then it MUST be God and it must be without any limits or flaws at all. Yet there is nothing to suggest this. The “smartest” people today often have the greatest blinders on, and everyone is only as good as the information they have and the lens of values through which they see everything.

While the intelligence explosion will be shockingly disruptive and revolutionary in many ways, and while I do see it as an extremely likely outcome in the rather near future, I think the vision of a giant, all-powerful AGI dropped on the world like a nuclear bomb is increasingly a projection of our own ignorance and arrogance. It simply doesn’t hold water, imo.

I covered a lot of these ideas in the 31st episode of AI Unchained:

https://fountain.fm/episode/98UjiXJsa1b2VusbQQur
Author Public Key
npub1h8nk2346qezka5cpm8jjh3yl5j88pf4ly2ptu7s6uu55wcfqy0wq36rpev