Jeremy Kahn on Nostr: More than a decade of AI Alignment and Safety arxiv papers and LessWrong posts about ...
More than a decade of AI Alignment and Safety arxiv papers and LessWrong posts about "keeping the AI in the box":
preventing a smooth-talking simulacrum of intelligence from getting "misaligned" with human values and precipitating a Paperclip Maximizing Event
… and Altman gets entirely out of the box in what, two years?
OpenAI to remove non-profit control and give Sam Altman equity | Reuters
https://www.reuters.com/technology/artificial-intelligence/openai-remove-non-profit-control-give-sam-altman-equity-sources-say-2024-09-25/