Cory Doctorow on Nostr: If we stop writing the web, AIs will have to summarize each other, forming an inhuman ...
If we stop writing the web, AIs will have to summarize each other, forming an inhuman centipede of botshit-ingestion. This is bad news, because there's pretty solid mathematical evidence that training a bot on botshit makes it *absolutely useless*. Or, as the authors of the paper - including the eminent cryptographer Ross Anderson - put it, "using model-generated content in training causes irreversible defects":
https://arxiv.org/abs/2305.17493

6/
Published at 2024-02-23 15:12:17

Event JSON
{
  "id": "f53b4a02f73ced1d18e5d721f7d95e071e36e0641e3b08ceb7ae994125b2a4db",
  "pubkey": "21856daf84c2e4e505290eb25e3083b0545b8c03ea97b89831117cff09fadf0d",
  "created_at": 1708701137,
  "kind": 1,
  "tags": [
    ["e", "6324044b2e3ec53da732d05cc9586bb3a9117bd8a08e7a2b6f82a9afd297dfc7", "wss://relay.mostr.pub", "reply"],
    ["content-warning", "Long thread/6"],
    ["proxy", "https://mamot.fr/users/pluralistic/statuses/111981437748753432", "activitypub"]
  ],
  "content": "If we stop writing the web, AIs will have to summarize each other, forming an inhuman centipede of botshit-ingestion. This is bad news, because there's pretty solid mathematical evidence that training a bot on botshit makes it *absolutely useless*. Or, as the authors of the paper - including the eminent cryptographer Ross Anderson - put it, \"using model-generated content in training causes irreversible defects\":\n\nhttps://arxiv.org/abs/2305.17493\n\n6/",
  "sig": "a701d5ca09f11b96994f5b64ddc5399694a98381d2e2e24bbcabe690fb873536dc1dafd812e82194a7d1b7719bcf6586035824f40cf570fe540c001d7ebad004"
}
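For reference, the `id` field of a Nostr event is not arbitrary: per NIP-01 it is the SHA-256 of a canonical JSON serialization of the event's other fields. A minimal Python sketch of that computation, using the fields from the event above (the helper name `nostr_event_id` is my own; if the event data shown here is unmodified, the result should reproduce the `id` field):

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    # NIP-01: id = SHA-256 of the UTF-8 JSON serialization of the array
    # [0, pubkey, created_at, kind, tags, content], encoded with no extra
    # whitespace and with non-ASCII characters left unescaped.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Fields copied from the event JSON above:
event_id = nostr_event_id(
    pubkey="21856daf84c2e4e505290eb25e3083b0545b8c03ea97b89831117cff09fadf0d",
    created_at=1708701137,
    kind=1,
    tags=[
        ["e", "6324044b2e3ec53da732d05cc9586bb3a9117bd8a08e7a2b6f82a9afd297dfc7",
         "wss://relay.mostr.pub", "reply"],
        ["content-warning", "Long thread/6"],
        ["proxy", "https://mamot.fr/users/pluralistic/statuses/111981437748753432",
         "activitypub"],
    ],
    content=("If we stop writing the web, AIs will have to summarize each "
             "other, forming an inhuman centipede of botshit-ingestion. This "
             "is bad news, because there's pretty solid mathematical evidence "
             "that training a bot on botshit makes it *absolutely useless*. "
             "Or, as the authors of the paper - including the eminent "
             "cryptographer Ross Anderson - put it, \"using model-generated "
             "content in training causes irreversible defects\":\n\n"
             "https://arxiv.org/abs/2305.17493\n\n6/"),
)
print(event_id)  # a 64-character lowercase hex digest
```

The `sig` field is then a Schnorr signature over this id with the key behind `pubkey`, which is how relays can verify the event without trusting its origin.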