What is Nostr?
iefan 🕊️
npub1cmm…lr6f
2025-02-23 13:34:19
in reply to nevent1q…x6va

Typically, setting something like this up involves building your own RAG system, a database, and powerful servers, essentially retraining the model.

But with this massive 2 million token context window, my implementation skips all that. You essentially fine-tune the model by talking to it. This conversation can include text, media, PDFs, and instructions; you can upload all this data and instructions in bulk, and keep modifying it.
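To make the "upload in bulk" idea concrete, here's a rough sketch of assembling files and instructions into one long in-context payload. This is my illustration, not the actual implementation: the `build_context` function and the message format are assumptions.

```python
from pathlib import Path

def build_context(instructions, data_dir):
    """Assemble a bulk in-context payload: system-style instructions first,
    then every file in data_dir appended as additional context.
    Illustrative sketch only; a real app would also handle media and PDFs."""
    messages = [{"role": "user", "content": instructions}]
    for path in sorted(Path(data_dir).glob("*")):
        if path.is_file():
            messages.append({
                "role": "user",
                "content": f"[file: {path.name}]\n{path.read_text(errors='ignore')}",
            })
    return messages
```

With a 2M-token window, even a sizable folder of documents can fit into `messages` directly, which is what lets this approach skip the retrieval step entirely.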

Once the model acts the way you want and has the knowledge you need, you just save that state as a personality, and it's stored locally.

Then, the next time you open it, you start right where you left off. And you can reuse that state whenever you want, as many times as you want.

This approach is much more accessible. Building and maintaining a RAG system isn't easy. Also, the model understands the context much more readily with this method than with traditional RAG. The reason we haven't used context this way more widely is that context windows simply haven't been big enough. Only Google's models have it at this scale, for now.
Author Public Key
npub1cmmswlckn82se7f2jeftl6ll4szlc6zzh8hrjyyfm9vm3t2afr7svqlr6f