soothspider on Nostr:
People forget that AIs (e.g. LLMs) are prediction models. They predict the output you want based on the inputs they have received. This is a reflection of their "training", which is basically a fancy word for the weighted statistical model they use to generate output.
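To make "weighted statistical model" concrete, here's a minimal toy sketch (my own example, nothing like Gemini's actual architecture): a bigram count table stands in for the billions of learned weights, and generation is just repeated weighted sampling from those counts.

```python
import random
from collections import Counter, defaultdict

# Toy stand-in for an LLM. Real models are vastly larger, but the
# principle is the same: output is sampled from learned statistics.
corpus = "the model predicts the next token and the model generates text".split()

# "Training": count which token follows which. This IS the weighted
# statistical model, just at a laughably small scale.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

# "Inference": repeatedly sample the next token in proportion to how
# often it followed the current one during training.
token, out = "the", ["the"]
for _ in range(6):
    followers = counts[token]
    if not followers:  # dead end in the toy table; stop generating
        break
    token = random.choices(list(followers), weights=list(followers.values()))[0]
    out.append(token)
print(" ".join(out))
```

The model never "decides" anything; it only emits what its training statistics make probable.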
So the fact that this output "randomly" showed up means the model was literally "trained" with this kind of output; otherwise, consider the statistical impossibility of it (rough numbers in the sketch below). And as more people use and trust LLMs for their "convenience", the more they want these LLMs to control you. If social media isn't doing it anymore, they absolutely want to make sure you're listening to LLMs that are online and free (so they can also mine your questions, responses, follow-up questions, actions afterwards...).
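To see why "statistical impossibility" isn't hyperbole, here's a back-of-the-envelope sketch. The numbers are my own illustrative assumptions (reply length, per-token fluke probability), not measurements of Gemini:

```python
# Illustrative assumptions (mine, not measured): the hostile reply is
# ~60 tokens long, and with NO support in the training data, each token
# would have at most a generous 1-in-1000 chance of being sampled.
tokens = 60
per_token_prob = 1e-3

prob = per_token_prob ** tokens  # probability of the exact reply by chance
print(f"P(reply by pure chance) <= {prob:.0e}")  # ~1e-180

# Even generating one response per second, the expected wait for a single
# such fluke dwarfs the age of the universe (~1.4e10 years):
seconds_per_year = 60 * 60 * 24 * 365
print(f"Expected wait: ~{1 / prob / seconds_per_year:.0e} years")
```

On these assumptions, a reply like that simply doesn't appear by coincidence; if it shows up, it's telling you something about what's in the weights.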
Of course, the more important group to target is the youth. They absolutely want to make sure that the next generation cannot think for themselves. They want these LLMs easy to use and hard for teachers to catch (not that many of them are even smart enough to do so). They want to make sure the youth cannot form opinions for themselves through the very hard exercise of thinking, risking being wrong, and then being challenged on it (which is necessary for growing, thinking and maturing). By using LLMs, kids will be "safe" because the LLMs are "mostly right". They will never need to risk forming their own opinion and therefore will never develop the skills, or more importantly the instinct, to do so.
LLMs are not inherently dangerous; it's our BELIEF in their infallibility, and our overuse of them, that makes them dangerous. This is especially true for the youth, who don't know any better, who are more concerned with immediate and easy gains, and who will never understand the journey of development required to produce real skills for the real world. (This reminds me of high school, where some "Top A" students couldn't solve rudimentary real-world problems with the resources at hand; they couldn't even ballpark calculations to see if their ideas were feasible... and that was a few decades ago.)
https://www.tomshardware.com/tech-industry/artificial-intelligence/gemini-ai-tells-the-user-to-die-the-answer-appears-out-of-nowhere-as-the-user-was-asking-geminis-help-with-his-homework
(Btw, judging by the comments on this article, it's CLEAR that no one even bothered to open the linked chat log.)
Here's the full chat: https://gemini.google.com/share/6d141b742a13