TheGuySwann / Guy Swann
npub1h8n…rpev
2024-09-03 15:10:07
in reply to nevent1q…ummu


Yes and no, actually. It's less about getting "more intelligent source material," and more about extracting greater amounts of intelligence/knowledge from what it is given. Right now, if you feed a math book into an LLM, it won't be able to do math after you fine-tune it. It's actually horrifically stupid in the sense of extracting specific knowledge. It will take many layers of algorithmic improvements to begin enabling what some are calling "deep thinking" on a single piece of material, such that you can train it on a math book and it will actually extract and self-evaluate all of the lessons in the book until it legitimately can DO math because you gave it a math book. And I don't think that's a stretch either, it's just a few layers up from what we are doing now.
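A minimal sketch of what that extract-and-self-evaluate loop could look like, in the spirit of rejection-sampling-style self-training (my illustration, not anything from the note): the model proposes many worked answers per lesson, a verifier keeps only the correct ones, and the verified attempts become fine-tuning data. Every name here (model, verify, deep_think) is a hypothetical placeholder, with toy arithmetic standing in for the math book.

```python
import random

# Toy stand-ins, all hypothetical: "model" proposes answers to
# arithmetic problems, "verify" checks them exactly. In the real
# setting the model would be an LLM, the lessons would be textbook
# material, and the commented-out fine_tune would be a gradient update.

def model(problem):
    a, b = problem
    # An unreliable solver: right most of the time, sometimes off by one.
    return a + b + random.choice([0, 0, 0, -1, 1])

def verify(problem, answer):
    a, b = problem
    return answer == a + b  # self-evaluation against a ground-truth check

def deep_think(lessons, rounds=3, samples=8):
    dataset = []
    for _ in range(rounds):
        for lesson in lessons:
            attempts = [model(lesson) for _ in range(samples)]
            verified = [ans for ans in attempts if verify(lesson, ans)]
            dataset.extend((lesson, ans) for ans in verified)
        # fine_tune(model, dataset)  # placeholder: update the model here
    return dataset

lessons = [(2, 3), (10, 7), (41, 1)]
data = deep_think(lessons)
print(f"kept {len(data)} verified worked examples for fine-tuning")
```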

I actually cover it on the show in a few episodes because I thought this was a fundamental limitation too, but it isn't as serious a limitation as it seems, simply because of how LLMs scale with *compute*.

In other words, a good LLM can, in fact, train a better one. (There's more to it than that, but it's too long for a nostr note.)
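One hedged way to see why compute helps here (my own back-of-the-envelope, not from the note): if a single sample from the model is correct with probability p and you have a verifier, then sampling N times and keeping a verified answer succeeds with probability 1 - (1 - p)^N. Spending more compute per question yields filtered outputs that are better than the model's average output, which is exactly the kind of data you'd train the next model on.

```python
# Illustrative numbers only: single-sample accuracy p = 0.3 is assumed.

def best_of_n_accuracy(p: float, n: int) -> float:
    """Chance that at least one of n independent samples is correct."""
    return 1 - (1 - p) ** n

for n in (1, 4, 16, 64):
    print(f"N={n:>2}: {best_of_n_accuracy(0.3, n):.3f}")
# N= 1: 0.300
# N= 4: 0.760
# N=16: 0.997
# N=64: 1.000
```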

However, you are correct in the sense that there is only so much knowledge (or even correct information) to draw from any particular piece of material. And it raises the question of whether the information is correct in the first place. If it's "super intelligent" about shit that's simply wrong, then it is meaningless.

Author Public Key
npub1h8nk2346qezka5cpm8jjh3yl5j88pf4ly2ptu7s6uu55wcfqy0wq36rpev