Fabio Manganiello on Nostr:
npub1d4fnv3g3dj9834w3mvszgp55qgl28nxjdw5hnzhcywek9myyn6aqzrvux9 npub1yhj4ua6580h3w0kucwwrtwl4uv9pg7m2drxzplq5x3364jczp36s4h3h3t The adoption of groundbreaking technologies, from the steam engine, to the sewing machine, to the Internet itself, has always been accompanied by the loss of some jobs and the creation of new ones, as well as the augmentation of existing ones.
Just recently I’ve been reading about the Luddite movement, and how in the early 19th century its members protested against the adoption of sewing machines amid fears of job losses (today very few could imagine a world where all of our clothes and curtains are sewn by hand).
Technological tools aren’t political. They don’t cause job losses or societal changes by themselves. They are just tools. The existence of a hammer isn’t always a good thing, because it can be used to smash somebody’s skull; nor is it always a bad thing, because it can be used to build things that wouldn’t be possible without it.
Similarly, ML models are used a lot in pure and applied science, and they are already changing the world for the better. Our understanding of protein folding, genetic mutations, climate models, graph theory etc. wouldn’t be where it is today if it weren’t for big models trained on large collections of protein shapes, genomic databases, raw climate data or proofs of existing theorems. Do we want to create a system that financially punishes those use-cases as well, just like we would put a tax on the sales of all hammers in order to pay damages to the families of those who got their heads smashed by them?
I’m not saying that we shouldn’t regulate AI. Nor that it’s something easy to do. But the shout-at-the-running-train approach won’t take us far. We first need to analyze what we consider fair usage and what we consider abuse (risk of job losses? Then place a robot tax proportional to the risk of the specific application. Waste of computing resources? Then place a tax on excessive consumption), and then create a system of financial carrots and sticks to maximize the benefits and minimize the risks.