Stefan Eissing on Nostr:

nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpq5emazsu8jz9xn6wfvq9dpejfcjs0n2urvgm9n2h9h9t34apa9ngqvqdehg As I understand it, the LLM is basically a statistical analysis that, given the context window, determines "what comes next".
It adds whatever it determined to the context window and generates the next output, adds that, and so on and so on, until the generated token is "stop".
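Roughly, as a minimal Python sketch of that loop (`model.sample_next_token` here is a hypothetical stand-in, not any real library's API):

```python
def generate(model, prompt_tokens, stop_token, max_steps=1000):
    # The prompt forms the initial contents of the context window.
    context = list(prompt_tokens)
    output = []
    for _ in range(max_steps):
        # Statistically pick a plausible continuation, conditioned
        # on everything currently in the context window.
        token = model.sample_next_token(context)
        if token == stop_token:
            break
        # The chosen token is appended and takes part in the
        # statistics for the *next* token.
        context.append(token)
        output.append(token)
    return output
```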
So, while reading a large input, its context window will overflow, and the tossed-out parts no longer take part in the statistics for the next token.
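The overflow could look like this (again just a sketch: dropping the oldest tokens is one simple eviction strategy, and the window size is an assumed value):

```python
CONTEXT_WINDOW = 8192  # assumed size; varies by model

def clamp_context(context, window=CONTEXT_WINDOW):
    # Keep only the most recent `window` tokens. Anything older is
    # tossed out and no longer influences the next-token statistics.
    return context[-window:]
```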
Simplified, of course.