etwas on Nostr:
I don't know what you expect, but this doesn't seem surprising.
These LLMs simply digest and regurgitate "likely" word patterns. If you feed one "data" (nostr notes, in this case) from any group with a bias, you're going to get the boiled-down version -- a summary, if you will -- of those biases.
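To make that concrete, here's a toy sketch (my own illustration, nothing to do with the actual model in question): even a trivial bigram sampler, trained on a deliberately slanted mini-corpus, can only ever hand back the slant it was fed.

```python
# Toy bigram "language model" -- a deliberately crude stand-in for an LLM.
# The corpus below is a made-up, slanted example; swap in any text you like.
import random
from collections import Counter, defaultdict

corpus = (
    "the narrative is a lie . the media is a lie . "
    "the narrative is propaganda . trust no official narrative ."
).split()

# Count how often each word is followed by each next word.
follows = defaultdict(Counter)
for w, nxt in zip(corpus, corpus[1:]):
    follows[w][nxt] += 1

def generate(start, length=8):
    """Sample the statistically 'likely' continuation -- nothing more."""
    out = [start]
    for _ in range(length):
        options = follows[out[-1]]
        if not options:  # dead end: no observed continuation
            break
        words, counts = zip(*options.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the narrative is a lie . the media ..."
```

A real LLM is vastly more sophisticated, but the principle is the same: the output distribution is a function of the training distribution.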
Given that nostr notes are generated by people who generally have a higher distrust of "The Narrative" as presented by governments, mainstream media, etc., you're going to see that distrust reflected in the output of an LLM trained on that data.
The mere fact that many of the LLM's responses to the faith-related questions start with "I believe ..." is enough to make me question the validity of the model as a source of unbiased output. And I'm fully in the "There is a God" and "God has a plan for us" camp.