vruz on Nostr: npub13f7rg…njva0 It's possible that a hypothetical LLM could be trained to generate ...
npub13f7rg67rqnarjwrx58ey3wp04a6xjc5jcqrxw9r0ks3r56l3jlxq8njva0 (npub13f7…jva0)
It's possible that a hypothetical LLM could be trained (or sampled) to generate text by choosing, at each step, either the most likely next word or a deliberately less likely one, such as the second-best candidate. Over every N words you would then get a statistically measurable number of most-likely words and a statistically measurable number of less-likely words. If the text contains the expected word, read it as a binary 1; if it contains the less expected word, read it as a 0.
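A minimal sketch of that idea, not the author's actual scheme: it embeds one bit at every generation step (rather than every N words) by picking either the top-1 or the top-2 token, and reads the bits back by re-running the same model. The model name "gpt2", the Hugging Face `transformers` API, and the `encode_bits`/`decode_bits` helpers are all illustrative assumptions.

```python
# Sketch: encode a bit stream by choosing between the most likely and the
# second most likely next token at each step, then recover the bits by
# checking which of the two the text actually contains.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # example model, not the post's
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def encode_bits(prompt: str, bits: list[int]) -> str:
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    for bit in bits:
        with torch.no_grad():
            logits = model(input_ids).logits[0, -1]        # scores for the next token
        top2 = torch.topk(logits, k=2).indices             # [most likely, second best]
        next_id = top2[0] if bit == 1 else top2[1]         # 1 -> expected word, 0 -> runner-up
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)
    return tokenizer.decode(input_ids[0])

def decode_bits(prompt: str, text: str) -> list[int]:
    # Assumes tokenizing the generated text reproduces the same token ids
    # (true for this toy case, not guaranteed in general).
    full_ids = tokenizer(text, return_tensors="pt").input_ids[0]
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    bits = []
    for pos in range(prompt_len, full_ids.shape[0]):
        with torch.no_grad():
            logits = model(full_ids[:pos].unsqueeze(0)).logits[0, -1]
        top2 = torch.topk(logits, k=2).indices
        bits.append(1 if full_ids[pos] == top2[0] else 0)  # top-1 seen -> 1, else 0
    return bits

text = encode_bits("The weather today is", [1, 0, 1, 1])
print(text)
print(decode_bits("The weather today is", text))
```

Forcing the second-best token every time degrades fluency, which is why the post suggests spreading the signal out so only a statistical bias over every N words carries the bits.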