Stephen Brooks 🦆 on Nostr:
The hope seemed to be that LLMs would spontaneously do "reverse programming", deducing a logical algorithm from data examples. They do seem to do this for simple tasks but it's actually uncomputably hard in the general case. LLMs' reliability seems to suffer on harder tasks.
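The claim above can be illustrated with a minimal sketch (my own, not from the post) of "reverse programming" as brute-force program synthesis: enumerate compositions of a few hypothetical primitives until one reproduces the given input/output examples. For toy tasks this succeeds quickly, but the search space grows exponentially with program length, and over arbitrary programs the general problem is undecidable.

```python
from itertools import product

# A tiny, assumed DSL of integer primitives to search over.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_len=3):
    """Return the shortest primitive pipeline consistent with all examples,
    or None if no pipeline up to max_len matches."""
    for length in range(1, max_len + 1):
        # Exhaustive search: |PRIMITIVES| ** length candidates at each length.
        for names in product(PRIMITIVES, repeat=length):
            def run(x, names=names):
                for n in names:
                    x = PRIMITIVES[n](x)
                return x
            if all(run(i) == o for i, o in examples):
                return names
    return None

# Deduce f(x) = 2x + 1 from examples alone:
print(synthesize([(1, 3), (2, 5), (5, 11)]))  # ('double', 'inc')
```

The example works precisely because the hypothesis space is tiny and fixed in advance; nothing in this sketch scales to the open-ended programs the post is talking about.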
Published at 2024-11-10 18:56:46
Event JSON
{
  "id": "cf4fa767421d58a182764ac3f2a26d98fec66f7d06c4613227228ca66cac4f0e",
  "pubkey": "e0baa8ebcaeed55330a87a40682c68d13c3a914775b416071eddee59f74b962c",
  "created_at": 1731265006,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://mstdn.io/users/sjb/statuses/113460183470776924",
      "activitypub"
    ]
  ],
  "content": "The hope seemed to be that LLMs would spontaneously do \"reverse programming\", deducing a logical algorithm from data examples. They do seem to do this for simple tasks but it's actually uncomputably hard in the general case. LLMs' reliability seems to suffer on harder tasks.",
  "sig": "59f2b7559ca9d6bfc52d981e80d3b2092bf680c9786b6dbed813a3aa077c0be5db6f91439c80b5288238a886c22a2037d27561e0f34b7a8c9c4067d89591630e"
}