lucash.dev on Nostr:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war,”
Starting to think that this whole evil AI thing is mostly another attempt at justifying globalism based on made-up existential threats.
Yes, AI can be dangerous, though that’s more because humans are dumb than because machines are intelligent.
The more I hear “existential threat” being thrown around, the more I think there actually isn’t any existential threat to mankind.
Not AI. Not viruses. Not nuclear war. Not even evil totalitarians.
None of that has any chance of ending human life — much less life in general.
Just as people invoking the greater good are usually advocating for something evil, those claiming “existential threats to mankind” are just trying to get away with threatening *you* without sounding evil.