Kathy Reid on Nostr:
A group of prominent #AI and #ML scientists signed a very simple statement calling for the possibility of global catastrophe caused by AI to be given more prominence.
https://www.safe.ai/statement-on-ai-risk
This is part of a broader movement of #AISafety or #AIRisk. I don't disagree with everything this movement has to say; there are real and tangible consequences to the unfettered development of AI systems.
But the focus of this work is on possible futures. There are people who experience discrimination, poorer outcomes, impeded life chances, and real, material harms because of the technologies we have in place *right now*.
And I wonder if this focus on possible futures is because the people warning about them *don't* feel the real and material harms #AI already causes? Because they're predominantly male-identifying. Or white. Or socio-economically advantaged. Or well educated. Or articulate. Or powerful. Or intersectionally, many of these qualities.
It's hard to worry about a possible future when you're living a life of a thousand machine learning-triggered paper cuts in the one that exists already.