What is Nostr?
NSmolenskiFan
npub1qqq…354d
2024-11-28 07:14:40

No, going along with the narrative that AI companies care about “safety” has nothing to do with gullible, childlike “neurodivergence” and everything to do with the ability to think and be honest with oneself.

Think: What are this company’s incentives? What do its actions over time show that it is consistently solving for?

Think: Has anyone at this company actually defined "safety"? Technically? Legally? Interpersonally?

Think: What are my incentives? What do I *really* want? What am I personally solving for that makes this job attractive to me?

Just asking those questions should be enough to disabuse anyone of two convenient stories: 1) that the companies they work for prioritize "AI safety" (whatever that is), and 2) that they themselves are "pure" moral actors navigating a fallen world. Both the companies and their employees are making rational decisions based on their own values and priorities.

We need to stop giving people outs for their own moral responsibility—whether that is due to neurodivergence, mental illness, or anything else. These are full-grown adults making real decisions that affect their own lives and the lives of others.

Choosing to work for a company building AI tools is neither essentially good nor bad. Just be honest about what the company is doing and the ways you have chosen to participate—or not.
Author Public Key
npub1qqqqqqx2crupn0c6pfsv3y0wkxfe97v0n82gpy95l6sv55fuazlqfy354d