JamGosBTC
npub1z8k…27d3
2023-06-01 14:58:15

To preface all of this, I'm definitely not dismissive of the risks of AI. There are most certainly risks. Unfortunately for us, right now it's far too early to make a reasonable or informed assessment. I also think the precautionary principle (PP) is flawed: its logical conclusion is that all progress halts. That notion is generally bad for humanity, and it's also a classic example of status quo bias.

Having said all that, whenever a new thing becomes prevalent, I try to stay calm (not always easy) and reflect on some lessons from history. Let's look first at the automobile: when the first automobiles became prevalent, many at the time believed the common risks would be engines exploding or brakes failing. We know now that these things are exceptionally rare; in fact, the probability of them happening is vanishingly close to zero. The people of the time could never have predicted risks with psychological impacts such as traffic jams or road rage. They also likely could not have foreseen the health risks associated with burning leaded gasoline (though thankfully this is no more).

Though this next example is not tech, we can also look at the invention/popularization of the teddy bear. In the early 1900s, there were legit fears that the newfangled fluffy toy would ruin young girls' developing maternal instincts and spell our collective reproductive downfall (source: Jason Feifer's podcast "Build For Tomorrow", episode "Teddy Bears Are History's Most Subversive Toy"). Of course, looking back, this is an absurd fear, as the global population went from roughly 1.6B to 6.1B over the course of the 20th century.

If we look, we can find countless examples of bad predictions of what new tech would bring. It was believed that early sound recording tech would bring the end of music and worse (also talked about in Feifer's podcast, I highly recommend it).

Ultimately, my point here is: we don't know what the actual risks of AI will turn out to be. Of course we don't know. The best we can do is make reasoned guesses, keep an open mind, take the lessons as we receive them, and forgive ourselves when we're wrong (because we will be).

Thoughts?
Author Public Key
npub1z8kl4qvzeu7cg0hnd2305fcpxlg6a60y7rxj4ht8wp7gl30l9gxsnz27d3