Carl T. Bergstrom on Nostr
So just as Google has strong incentives to keep users onsite using shitty large language models that hallucinate, it has strong incentives to keep users onsite by making even relatively unreliable guesses about what constitutes a definitive answer to a search query.
Published at 2023-10-08 06:11:07

Event JSON
{
  "id": "e6778f34785c2cc68e13091677176b32266cf862eea04393ee89f45d9ae7fb33",
  "pubkey": "ac5404833d7aff6cebf4afd632d367ccf064470e9ad728b47bce8be6bfbb958a",
  "created_at": 1696745467,
  "kind": 1,
  "tags": [
    [
      "e",
      "311b8f6bfaf9b76128df967a335b714bc8b503484614b7f7919ffef255dfc643",
      "wss://relay.mostr.pub",
      "reply"
    ],
    [
      "proxy",
      "https://fediscience.org/users/ct_bergstrom/statuses/111197910941698161",
      "activitypub"
    ]
  ],
  "content": "So just as Google has strong incentives to keep users onsite using shitty large language models that hallucinate, it has strong incentives to keep users onsite by making even relatively unreliable guesses about what constitutes a definitive answer to a search query.",
  "sig": "21a9a0b79edb04e0da5ddb4b0fbfe516f51c18f05c00e4df89cb48051511338adda18cd95709844cdb579e663046ffbfc482fece8b6c8e51afdb8d9e8cef5032"
}
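For readers unfamiliar with the Nostr event format: under NIP-01, the "id" field is not arbitrary. It is the SHA-256 hash of a canonical JSON serialization of the array [0, pubkey, created_at, kind, tags, content], and "sig" is a BIP-340 Schnorr signature over that id by the key in "pubkey". Below is a minimal sketch of the id derivation using only the Python standard library, with the field values copied from the JSON above. Verifying "sig" additionally requires a BIP-340 capable library, which is beyond a standard-library sketch.

import hashlib
import json

# Event fields copied from the JSON above.
event = {
    "pubkey": "ac5404833d7aff6cebf4afd632d367ccf064470e9ad728b47bce8be6bfbb958a",
    "created_at": 1696745467,
    "kind": 1,
    "tags": [
        ["e", "311b8f6bfaf9b76128df967a335b714bc8b503484614b7f7919ffef255dfc643",
         "wss://relay.mostr.pub", "reply"],
        ["proxy", "https://fediscience.org/users/ct_bergstrom/statuses/111197910941698161",
         "activitypub"],
    ],
    "content": "So just as Google has strong incentives to keep users onsite using shitty large language models that hallucinate, it has strong incentives to keep users onsite by making even relatively unreliable guesses about what constitutes a definitive answer to a search query.",
}

# NIP-01: serialize [0, pubkey, created_at, kind, tags, content] as compact
# JSON (no extra whitespace, UTF-8), then take the SHA-256 of the result.
serialized = json.dumps(
    [0, event["pubkey"], event["created_at"], event["kind"],
     event["tags"], event["content"]],
    separators=(",", ":"),
    ensure_ascii=False,
)
event_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# If the event is well formed, this should reproduce the "id" field above.
print(event_id)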