Semisol 👨💻 on Nostr:
Surprised that reasoning models didn’t appear earlier. LLMs are glorified data processing pipelines, and you can only fit so much processing into a fixed stack of layers (I would be willing to bet most of them internally converge to the following architecture: decode => process => lookup-like mapping => process => encode)
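
A minimal sketch of that conjectured pipeline (every name, shape, and the toy numpy implementation below are my own illustrative assumptions, not the post author's code or any real model's internals):

import numpy as np

# Hypothetical stand-in for the conjectured stage structure:
# decode => process => lookup-like mapping => process => encode.
rng = np.random.default_rng(0)
VOCAB, DIM, MEM = 50, 16, 8

E = rng.normal(size=(VOCAB, DIM))    # token embedding ("decode")
W1 = rng.normal(size=(DIM, DIM))     # early processing layers
KEYS = rng.normal(size=(MEM, DIM))   # lookup-like associative memory
VALS = rng.normal(size=(MEM, DIM))
W2 = rng.normal(size=(DIM, DIM))     # late processing layers
U = E.T                              # unembedding ("encode"), tied weights

def forward(token_ids):
    x = E[token_ids]                      # decode: token ids -> vectors
    x = np.tanh(x @ W1)                   # process: early feature mixing
    scores = KEYS @ x.T                   # lookup-like mapping:
    weights = np.exp(scores)              # soft key/value retrieval
    weights = weights / weights.sum(axis=0)
    x = (VALS.T @ weights).T
    x = np.tanh(x @ W2)                   # process: integrate what was retrieved
    return x @ U                          # encode: project back to vocab logits

logits = forward(np.array([3, 14, 7]))
print(logits.shape)                       # (3, 50)

The point of the sketch is only the fixed shape of the computation: however the stages are arranged, a single forward pass has a bounded amount of processing.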
Reasoning (with scratch space) compared to without is like sequential logic compared to combinational: the scratch space plays the role of state, letting a fixed-size circuit be reused across steps instead of having to finish everything in one fixed-depth pass.
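
To make the analogy concrete, here is a toy contrast (my own hypothetical example, not from the post): a fixed-depth reduction fails once the input outgrows its depth, while one bit of carried scratch state handles any length.

def fixed_depth_parity(bits, depth=4):
    # "Combinational": a fixed number of pairwise-XOR layers, like a
    # fixed stack of transformer layers. Handles at most 2**depth bits.
    layer = list(bits)
    for _ in range(depth):
        layer = [a ^ b for a, b in zip(layer[::2], layer[1::2] + [0])]
        if len(layer) == 1:
            return layer[0]
    raise ValueError("input longer than this fixed depth can reduce")

def sequential_parity(bits):
    # "Sequential": one bit of scratch state reused every step, like a
    # model appending to a scratchpad. Works for any input length.
    state = 0
    for b in bits:
        state ^= b
    return state

print(fixed_depth_parity([1] * 16))   # 0: fits within the fixed depth
print(sequential_parity([1] * 17))    # 1: any length is fine
# fixed_depth_parity([1] * 17) raises: the fixed circuit is too shallow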
Published at 2025-02-12 18:11:46

Event JSON
{
  "id": "b794504fe3523284d436ac79c169107ceaa25c8bbcbd380cbc051501b6e7dfa0",
  "pubkey": "52b4a076bcbbbdc3a1aefa3735816cf74993b1b8db202b01c883c58be7fad8bd",
  "created_at": 1739383906,
  "kind": 1,
  "tags": [
    [
      "e",
      "4b620a2721bd78a2c7232a2f4ab180fcae2e64c65ea461df7c70f5a8b5a0bfd0",
      "",
      "root"
    ],
    [
      "p",
      "46fcbe3065eaf1ae7811465924e48923363ff3f526bd6f73d7c184b16bd8ce4d"
    ]
  ],
  "content": "Surprised that reasoning models didn’t exist earlier yet. LLMs are glorified data processing pipelines, and you can only fit so much processing in a set of layers (I would be willing to bet most of them internally converge to the following architecture: decode =\u003e process =\u003e lookup-like mapping =\u003e process =\u003e encode)\n\nReasoning (with scratch space) compared to without is like combinational logic compared to sequential.",
  "sig": "09196e088cfd84bdd0c44180807a17df17786dd3acb7d7ccb744baf3fa461916383615eb4b86bb2d5e9b1ffdfb7f56fde910ff8e1ba07d6ae36fee7072611d43"
}