Redish Lab on Nostr
nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpq9d9p04u4xfysdy92fycw947jrca3xve2gnsauysshzewxvmz8dmsxhcvwk
1/2
I don't hear a lot of neuroscientists and cognitive scientists arguing for anti-representation behaviorism in the sciences. I do hear AI tech people arguing that intelligence is defined behaviorally, in the sense that if something acts intelligently, we should call it intelligent (which I don't necessarily disagree with).* But those AI tech people are not scientists. I think the vast majority of neuroscientists agree with the idea that one can open up systems and find representations.**
* I think the best definition of intelligence is using the information in the world to achieve agentic goals.*** [cf. Kevin Mitchell's excellent book "Free Agents"].
** Subsymbolic representations are still representations. All that "manifold" and "dimension manipulation" stuff is definitely representations, just in a different mathematical language (a minimal sketch at the end of this note illustrates the point). Manifold analyses definitely sit on the cognitive side of the cognitive/behaviorist divide.
*** I don't think LLMs have agentic goals, but I could easily imagine a system with an agentic goal (such as making paperclips) using LLMs to achieve those goals.
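
To make the manifold point concrete, here is a minimal sketch with simulated data (nothing from any real recording, and the variable names are illustrative only): a neural population whose activity lies on a low-dimensional manifold is still carrying a representation of the latent variable that generates that manifold, and a standard dimensionality-reduction analysis (PCA here) reads that representation back out.

```python
# Minimal, hypothetical sketch (simulated data): a population whose activity
# lies on a low-dimensional manifold still carries a representation of the
# latent variable that generates the manifold.
import numpy as np

rng = np.random.default_rng(0)

# Latent variable the population is taken to represent (e.g., heading angle).
latent = np.linspace(0.0, 2.0 * np.pi, 500)

# 100 "neurons": random linear mixtures of sin/cos of the latent variable,
# plus noise -- high-dimensional activity lying near a 2-D ring manifold.
mixing = rng.normal(size=(2, 100))
activity = np.column_stack([np.sin(latent), np.cos(latent)]) @ mixing
activity += 0.1 * rng.normal(size=activity.shape)

# "Dimension manipulation": PCA via SVD, projecting the 100-D activity
# onto its top two principal axes.
centered = activity - activity.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
projection = centered @ vt[:2].T

# The angle around the recovered ring tracks the latent variable: the
# population geometry encodes it -- a representation in a different
# mathematical language, not the absence of one.
angle = np.unwrap(np.arctan2(projection[:, 1], projection[:, 0]))
print(f"|r(angle, latent)| = {abs(np.corrcoef(angle, latent)[0, 1]):.3f}")
```

The sketch says nothing about whether such a system is intelligent; it only shows that recovering structure on a manifold is itself a claim about what the population encodes.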