Vitor Pamplona on Nostr:
Every time I ask an AI to make a statement "better", without further instructions, the result is often a weaker, less precise, more ambiguous, fuzzier version.
This raises the question: why? What makes the model think fuzzier is "better"? Is it because most of the text it was trained on was imprecise and fuzzy? Or is it because it is averaging word choices toward the most common denominator?
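One way to picture that second hypothesis, as a minimal sketch: if a rewrite is scored by how probable (i.e., how common) each word choice is under the model's learned distribution, then likelihood-driven decoding will prefer the blandest option. Every word, frequency, and number below is invented purely for illustration; this is not a measurement of any real model.

```python
import math

# Two hypothetical candidate rewrites of the same statement:
# one uses rarer, sharper words; the other uses common, vague ones.
candidates = {
    "precise": ["idempotent", "retries", "are", "safe"],
    "fuzzy":   ["doing", "it", "again", "is", "fine"],
}

# Invented unigram frequencies standing in for the model's distribution:
# common words get high probability, precise technical words get low.
freq = {
    "idempotent": 1e-6, "retries": 1e-4, "are": 1e-2, "safe": 1e-3,
    "doing": 1e-2, "it": 5e-2, "again": 1e-2, "is": 5e-2, "fine": 1e-2,
}

def avg_logprob(words):
    """Mean log-probability per word: the score greedy decoding favors."""
    return sum(math.log(freq[w]) for w in words) / len(words)

for name, words in candidates.items():
    print(f"{name:>7}: avg log-prob = {avg_logprob(words):.2f}")

# The fuzzy rewrite scores higher, so a purely likelihood-driven
# "make it better" pass drifts toward it, even though it says less.
```

Under these toy numbers the fuzzy version wins by a wide margin, which is at least consistent with the "averaging toward the most common words" explanation.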
GM.