☃️merry chrimist☃️ on Nostr:
nemesis :blackhole: (npub10zr…yg26) The question on my mind is: how do we know that "all of math" won't turn out to be a narrow problem in the same sense? Naively, the main question seems to be whether the higher-level abstractions that guide mathematical thought can be implicitly encoded in some general-purpose neural architecture, or whether the ML system needs to encode them explicitly. Maybe every neural architecture has only a finite capacity for storing higher-level abstractions.
(To be explicit, the higher-level abstractions in geometry would be concepts like the orthocenter or the nine-point circle, which let you reason about the problem in compressed form. From their paper, it sounds like some of these abstractions were hard-coded, like "construct an excircle", but I can believe the LLM was also learning abstractions of its own in some unintelligible form.)
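To make the explicit-vs-implicit distinction concrete, here's a toy Python sketch. It is entirely hypothetical and not taken from their paper: it just contrasts an abstraction that is hand-coded as a named construction rule with one that only exists implicitly inside a learned model's outputs. All names (ProofState, construct_excircle, learned_proposer) are made up for illustration.

```python
# Hypothetical illustration only: two ways an abstraction like
# "construct an excircle" can live in a system.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class ProofState:
    facts: set

# (1) Explicitly encoded abstraction: a hand-written construction rule
# that the symbolic part of the system can apply as a single step.
def construct_excircle(state: ProofState) -> ProofState:
    # Placeholder geometry: a real engine would add the excircle's center,
    # radius, and tangency facts to the state here.
    return ProofState(facts=state.facts | {"excircle(A,B,C)"})

HARD_CODED_CONSTRUCTIONS: List[Callable[[ProofState], ProofState]] = [
    construct_excircle,
]

# (2) Implicitly encoded abstraction: a learned model that just emits the
# next construction as text; whatever "concept" it used is buried in weights.
def learned_proposer(state: ProofState) -> str:
    # Stand-in for an LLM call; returns a free-form construction string.
    return "construct the nine-point circle of ABC"

state = ProofState(facts={"triangle(A,B,C)"})
state = HARD_CODED_CONSTRUCTIONS[0](state)   # explicit abstraction, applied as one step
print(state.facts)
print(learned_proposer(state))               # implicit abstraction, only visible in the output
```

The open question in the post is whether the second kind of encoding scales: whether a general-purpose architecture can keep accumulating such implicit abstractions, or whether its capacity for them is finite.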