Dan Goodman on Nostr
Idle musing, probably not very original, just thinking out loud. Obviously the "intelligence" of LLMs is very different to ours in that they find some stuff very easy that we find hard, and vice versa. In a way this isn't so surprising though. We typically find logical calculus harder than arithmetic even though in some formal sense it's much more basic. Our sense of what is easy and hard probably depends a lot on the order in which we learn things as children (things we learn at a young age seem easier), and probably on some innate structures that make some things easier to learn. I'm not entirely convinced we will see imminent breakthroughs in machine learning getting better at the things we find easy without pretty major conceptual leaps that I don't see any sign of yet. Scaling will probably make the gap less obvious because you can make up for one ability with a highly developed alternative ability, but as long as things like being able to balance parentheses remain hard for them (and the theoretical work above suggests it might), there's likely to be a gap. I think it would be interesting to use LLMs to refine our understanding of what it is exactly we're good at. It may also pay to concentrate on developing those abilities in ourselves.
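(Editor's note: the parenthesis-balancing task mentioned above has a trivially simple classical solution, which is part of what makes it an interesting probe; a minimal sketch, using the standard one-pass counter for Dyck-language recognition:)

```python
def is_balanced(s: str) -> bool:
    """Check whether parentheses in s are balanced, using a single counter.

    A classical algorithm solves this in one linear pass with O(1) state,
    yet the unbounded counting involved is exactly the kind of thing
    theoretical work suggests is awkward for fixed-depth transformers.
    """
    depth = 0
    for ch in s:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:  # a close with no matching open
                return False
    return depth == 0  # every open must have been closed
```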
Published at 2024-01-06 23:38:58

Event JSON
{
"id": "652042443fa2986b391eaa53d690b174a937109e0c8e65a23cbc8d3b45d33f5c",
"pubkey": "0ecc4e999a6ece69b4a5c1de90118b96e1b0e8e863d5a5b489810557bdeb5d9f",
"created_at": 1704584338,
"kind": 1,
"tags": [
[
"e",
"08397d94ed413deb6312f09cf8e95695809fb16c8da734c47f3438222765d736",
"wss://relay.mostr.pub",
"reply"
],
[
"proxy",
"https://neuromatch.social/users/neuralreckoning/statuses/111711639215242792",
"activitypub"
]
],
"content": "Idle musing, probably not very original, just thinking out loud. Obviously the \"intelligence\" of LLMs is very different to ours in that they find some stuff very easy that we find hard, and vice versa. In a way this isn't so surprising though. We typically find logical calculus harder than arithmetic even though in some formal sense it's much more basic. Our sense of what is easy and hard probably depends a lot on the order in which we learn things as children (things we learn at a young age seem easier), and probably on some innate structures that make some things easier to learn. I'm not entirely convinced we will see imminent breakthroughs in machine learning getting better at the things we find easy without pretty major conceptual leaps that I don't see any sign of yet. Scaling will probably make the gap less obvious because you can make up for one ability with a highly developed alternative ability, but as long as things like being able to balance parentheses remains hard for them (and the theoretical work above suggests it might), there's likely to be a gap. I think it would be interesting to use LLMs to refine our understanding of what it is exactly we're good at. It may also pay to concentrate on developing those abilities in ourselves.",
"sig": "3d21fcf0877f9cde61ed7b43067ad30ad70eb7479af23fc8139d5debfd86e51d463fdf42d40acd6cc34acf7e2f8c53d85680b657b5f9fbaac6cd454be2c1dea5"
}