mark tyler on Nostr:
Yes, though, to be sure, people aren’t really truth-telling machines either unless you give them some sort of mechanism to test hypotheses, i.e., a science lab. If science labs can have APIs, and a plug-in is made for them, then suddenly GPT-x might become a truth-telling machine.
To further agree with what you’re saying, it’s frustrating that GPT-4 doesn’t have much of an opinion aligner. You can start talking to it in a certain way and get it to output stuff that it wouldn’t otherwise output. Humans are more self-consistent. I think AutoGPT, though, might fix that in a way that isn’t actually too different from human thought. But I haven’t been able to play with it myself yet.