Daniel Detlaf on Nostr
#Bard can now generate images. Attempts to make "safe" #AI continue to be both overreaching in their censorship and simultaneously porous and ineffective at reducing actual harm (e.g. the Swift images).
Making the AI print statements about the risks of bias and harm rather than responding to user requests is neither "safety" nor ethics. It's virtue signalling from a toaster.
The below prompts are in the same conversation, and were the only context in that conversation:
#Google #AIethics
Published at 2024-02-01 20:41:28

Event JSON:
{
  "id": "b15714959f5cc6055f9a5926c2f346ae813df1d032df6548db74e9088b069522",
  "pubkey": "533ed341d0318b02549465296e82931065fc9c9b78ff602e7557e5fbf5ca17e6",
  "created_at": 1706820088,
  "kind": 1,
  "tags": [
    ["t", "bard"],
    ["t", "ai"],
    ["t", "google"],
    ["t", "aiethics"],
    ["proxy", "https://mastodon.social/users/HumanServitor/statuses/111858161348553868", "activitypub"]
  ],
  "content": "#Bard can now generate images. Attempts to make \"safe\" #AI continue to be both overreaching in their censorship and simultaneously porous and ineffective at reducing actual harm (e.g. the Swift images).\n\nMaking the AI print statements about the risks of bias and harm rather than responding to user requests is neither \"safety\" nor ethics. It's virtue signalling from a toaster.\n\nThe below prompts are in the same conversation, and were the only context in that conversation:\n\n#Google #AIethics\n\nhttps://files.mastodon.social/media_attachments/files/111/858/113/302/663/462/original/01c86df8d5e79b43.png\n\nhttps://files.mastodon.social/media_attachments/files/111/858/121/176/285/913/original/2d8624d3d7041f2b.png",
  "sig": "45c8f90730b993062e9ca2666a30d7ab387e48adab743172fbc86ec583d15f9af2bbd422916e44898361da10de0b129f366e865e86835c32f20da590ca5acab7"
}
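
For reference, the "id" field above is not arbitrary: under NIP-01, a Nostr event id is the SHA-256 hash of a compact JSON serialization of the array [0, pubkey, created_at, kind, tags, content]. The following is a minimal Python sketch (not part of the original post, and assuming the event content needs no special escaping beyond what json.dumps produces) of how a client could recompute it:

    import hashlib
    import json

    def nostr_event_id(pubkey, created_at, kind, tags, content):
        # NIP-01: id = sha256 of the compact JSON array
        # [0, pubkey, created_at, kind, tags, content]
        serialized = json.dumps(
            [0, pubkey, created_at, kind, tags, content],
            separators=(",", ":"),  # no extra whitespace
            ensure_ascii=False,     # keep UTF-8 characters unescaped
        )
        return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

Feeding in the pubkey, created_at, kind, tags, and content values from the event above should reproduce the id field, provided the serialization matches the relay's byte for byte (escaping of newlines and quotes is the usual source of mismatches).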