Miranda :flag_transgender:🐙 on Nostr: Exactly, it's basically what's known as RAG, retrieval augmented generation. ...
Exactly, it's basically what's known as RAG, retrieval augmented generation. Retrieved (searched) information is used as context for the LLM inference; it plays a role similar to the question itself. It doesn't need the LLM to be trained on the data in question.
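The idea can be sketched in a few lines: retrieve the most relevant passages, then prepend them to the prompt. This is a toy illustration only (the keyword-overlap scorer, document texts, and function names are all made up for the example); a real RAG system would use embedding search and an actual LLM call.

```python
import re

def retrieve(query: str, documents: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by naive keyword overlap with the query (toy scorer)."""
    q_words = set(re.findall(r"\w+", query.lower()))
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(re.findall(r"\w+", d.lower()))),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved passages as context; the LLM need not be trained
    on them -- they play a role similar to the question itself."""
    context = "\n".join(retrieve(query, documents))
    return (
        f"Context:\n{context}\n\n"
        f"Question: {query}\n"
        "Answer using only the context above."
    )

docs = [
    "The fediverse is a network of interoperable social servers.",
    "Bread is made from flour, water, and yeast.",
]
prompt = build_prompt("What is the fediverse?", docs)
print(prompt)
```

The point the sketch makes: the retrieved text enters only at inference time, as part of the prompt, which is why no retraining on that data is required.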
Another quite impressive use of this technology is: https://www.perplexity.ai/ (I've no other opinion regarding it, I do not endorse it in any way.)
It does not address the myriad other issues posed by training and using LLMs, and there are a lot of ways of doing it wrong, but when it comes to usages, it can in my opinion be one of the less foolish uses of LLMs. As long as you actually check the sources.
(Also, this absolutely does *not* mean that OpenAI isn't savagely scraping the fediverse without any regard for its users, for purposes unknown. There is solid evidence it is doing just that, and it's a predatory, evil company from which nothing less must be expected.)