mark tyler on Nostr:
I agree with the last half. It’s not clear to me that we aren’t closer to LLMs than many realize, though we’re definitely more like RNNs in some ways. Give an LLM a plug-in that applies a sort of RNN-style mask over its plug-in memory, and that might be enough, I think. Definitely not GPT-3.5, but give it five years and I think they’ll easily be good enough to replace most computer workers.
And totally, we need to figure out how to continue producing high-quality data. Currently we train agents (people), and the resources they accumulate become correlated with their “model” performance. As those resources accumulate to them, we become more likely to treat their output as the definition of good information. The same may hold for future AI agents and the generation of additional high-quality information.