Matt Wilcox on Nostr: There are _no good use cases_ for AI summarisation if you care about having a correct ...
There are _no good use cases_ for AI summarisation if you care about having a correct summary.
It can't be trusted to summarise correctly because a fundamental property of LLMs is _they do not understand anything_. They just parrot likely next words, but that "likely" is not based only on the content it's "summarising".
I have _no_ interest in any of this tech in my OS or apps. It's a Trojan horse for lies, built on theft, training us to trust and not think.
https://mastodon.macstories.net/@johnvoorhees/112887004822171579