Thomas ✨:verified: on Nostr:
There's this unproven assumption that "we just need to train it more" is the solution to generative AI lying and being generally mid.
What really happens is that with more data, it will approach absolute midness.
If, as they plan, they try to feed it AI-generated text to "learn" from, it will even regress (there's actually a good Bible quote for this: "as a dog returns to his vomit, so a fool returns to his folly").
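That "regress" is the model-collapse effect: when the next model's training data is the previous model's own output, estimation error compounds generation after generation. Here's a minimal toy sketch in Python (my own illustration, fitting a Gaussian rather than training an actual language model) of how repeatedly training on self-generated data tends to flatten the distribution:

```python
import numpy as np

# Toy illustration of training on self-generated data: fit a Gaussian to a
# small sample, generate the next "training set" from the fit, refit, repeat.
# With finite samples the fitted spread tends to shrink over generations,
# drifting toward a bland, collapsed distribution.
rng = np.random.default_rng(42)

n_samples = 20                                  # tiny "dataset" per generation
data = rng.normal(0.0, 1.0, n_samples)          # generation 0: real data, std = 1

for gen in range(1, 201):
    mu_hat = data.mean()                        # "train" on the current data
    sigma_hat = data.std()
    data = rng.normal(mu_hat, sigma_hat, n_samples)   # next data = model output
    if gen % 40 == 0:
        print(f"generation {gen:3d}: fitted std = {sigma_hat:.4f}")
```

Run it and the printed standard deviation trends toward zero: the "model" forgets the tails of the real data and converges on its own averaged-out output.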