Druid on Nostr:
Last night my friend who helped develop Claude told me about "solving the alignment problem" in AGI development and the "pivotal act."
Apparently, superhuman AGI is considered so dangerous that whoever develops it first is supposed to have a responsibility to use it to destroy anyone else's ability to ever build one. A widely discussed example of this is USING IT TO BUILD A SWARM OF NANOBOTS THAT DESTROYS EVERY GPU ON EARTH EXCEPT FOR ITS OWN HARDWARE.
THIS is considered the BEST-case scenario. Oh, and this is thought to be about 2 years away.