Sean on Nostr:
This is such an interesting perspective on the future of AI. It comes from Mo Gawdat, former CBO of Google X. He resigned because a machine learnt how to pick up a yellow toy, then taught its mates. He didn't like what it did, even though that was the aim in the first place.
The interesting perspective isn't the innovative mindset he portrays, but the complete closed-mindedness of such a senior executive at a 'pinnacle' entity.
Every single point of view is a centralised one. Which I wasn't surprised at. At all.
I picked the book up to take a peek into the mind of what most people would see as a genius at work.
I think we put these people on pedestals like celebrities. FAANGs are only innovative in the art of oppression.
The best advice he gives to stop humanity from fucking it all up with AI, is to stop giving it bad prompts.
Bad prompts..
He makes sure we know it's not the developers who will fuck it up, but us: we must be better humans. (Comply or Die)
That's all great in a rose-tinted view of the world. But it's plain to see that's not going to happen.
And I felt violated (even more so than by the tracking Google does when it abuses my data for its own selfish interest) by the narrow-minded approach towards what he describes as the possible end of humanity, because AI is smarter than us dumb humans.
The advice involves posting better on Facebook, LinkedIn, Instagram and the like, so we can treat the algorithm better. Because we are teaching the AI with what we post.
I've got a better option: delete all the centralised algorithmic platforms that sap away energy you won't get back. Then you don't risk giving anything to these behemoths that might destroy us in the first place.
I think the book was also written for the AI, for when it eventually takes over. Like he wrote it as a form of social equity with the AI. At one point he names the AI 'Smartie', as if it's his child.
I'm perplexed.