"one of the myths concerning #LISP that people think up or invent for themselves becomes apparent, and that is that LISP is somehow a realization of the lambda calculus, or that was the intention. The truth is that I didn't understand the lambda calculus, really" - John #McCarthy
If it was a realization of a lambda calculus, then it is one with (a) primitives, (b) strict evaluation, (c) quoted lambda terms, and (d) "dynamic" bindings.
(a) In classic lambda calculus, everything is a lambda term. McCarthy's Lisp has primitives like lists and numbers. However, it is known that lambda calculus is powerful enough to encode these things as lambda terms (for example,
null = (lambda (n c) (n))
(cons a b) = (lambda (n c) (c a b))
gives a way to encode lists. The car function would be something like
car = (lambda (lst)
        (lst (lambda () (error "car: not cons"))
             (lambda (a b) a)))
This would not work in the original Lisp because of binding issues: the definition of cons requires the specific a and b bindings be remembered by the returned lambda.)
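As a sanity check, the same encoding runs fine in a modern, lexically scoped Scheme (renamed knil/kons/kar here so as not to shadow the built-ins; this sketch is mine, not part of the quoted comment):

(define knil (lambda (n c) (n)))
(define (kons a b) (lambda (n c) (c a b)))
(define (kar lst)
  (lst (lambda () (error "car: not cons"))
       (lambda (a b) a)))
(kar (kons 1 2)) ; => 1, because the closure returned by kons remembers a and b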
(b) Lambda calculus does not prescribe any evaluation strategy. Rather, it is like algebra, where you can try to normalize an expression if you wish, but the point is that some lambda terms are equivalent to others based on some simple rules that model abstract properties of function composition. Lambda-calculus-based programming languages choose some evaluation rule, but there is no guarantee of convergence: there might be two programs that lambda calculus says are formally equivalent, but one might terminate while the other might not. Depending on how you're feeling, you might say that no PL for a computer can ever realize the lambda calculus, but more pragmatically we can say most languages use lambda calculus with a strict evaluation strategy.
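A standard illustration (my example, not the comment's): the expression below is formally equivalent to 'done, since the argument is never used, but a strict evaluator insists on evaluating the argument first and never terminates:

((lambda (x) 'done)
 ((lambda (f) (f f)) (lambda (f) (f f)))) ; the classic non-terminating term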
(c) The lambda terms in lambda calculus are not inspectable objects, but more just a sequence of symbols. Perhaps one of McCarthy's innovations is that lambda terms can be represented as lists, and the evaluator can be written as a list processor (much better than Gödel numbering!). In any case, the fact that a program can evaluate representations of terms within the context of the running eval makes things a little different. It's also not too hard to construct a lambda evaluator in the lambda calculus[1], but you don't have the "level collapse" of Lisp.
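A minimal sketch of that idea (my code, not McCarthy's eval): pure lambda terms represented as lists, with the evaluator written as an ordinary list processor:

(define (ev term env)                  ; term is a symbol, (lambda (x) body), or (f a)
  (cond ((symbol? term) (cdr (assq term env)))
        ((eq? (car term) 'lambda)
         (lambda (arg)                 ; represent the abstraction as a host closure
           (ev (caddr term) (cons (cons (caadr term) arg) env))))
        (else ((ev (car term) env) (ev (cadr term) env)))))

((ev '((lambda (x) x) (lambda (y) y)) '()) 42) ; => 42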
(d) In lambda calculus, one way to model function application is that you immediately substitute in arguments wherever that parameter is used in the function body. Shadowing is dealt with using a convention in PL known as lexical scoping, and an efficient implementation uses a linked list of environments. In the original Lisp, there was a stack of variable bindings instead, leading to something that is now known as dynamic scoping, which gives different results from the immediate substitution model. Pretty much everything fun you can do with the lambda calculus depends on having lexical scoping.
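A small illustration of the difference (hypothetical sketch; Scheme behaviour shown, early-Lisp behaviour described in the comments):

(define n 100)
(define (make-adder n) (lambda (x) (+ x n)))
((make-adder 1) 5)
; lexical scoping (Scheme): 6  -- the returned lambda remembers the n = 1 binding
; dynamic scoping (original Lisp): 105 -- by the time the returned lambda runs,
;   make-adder's binding of n has been popped, so n resolves to the global 100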
All this said, the widespread belief about Lisp being the lambda calculus probably comes from Scheme, which was intentionally lambda calculus with a strict evaluation model. Steele and Sussman were learning about actors for AI research, and I think it was Sussman (a former logician) who suggested that their planning language Schemer (truncated to Scheme) ought to have real lambdas. At some point, they realized actors and lambdas (with mutable environments) had the exact same implementation. This led to "Scheme: An Interpreter for Extended Lambda Calculus" (1975) and the "Lambda the ultimate something" papers. Later, many of these ideas were backported to Lisp during the standardization of Common Lisp.
Differentiation of algebraic expressions was among the first achievements of Lisp systems.
Writing recursive function definitions with conditional expressions allowed the base case and the inductive case to be combined into a single formula in the publication notation. With the conditional expression defined as:
(p1 → e1, …, pn → en)    (1)
The traditional brace notation for factorial:
fac(n) = { 1                  : n = 0
         { mult(n, fac(n−1))  : n > 0        (2)
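Written with the conditional expression (1), the two cases collapse into a single formula; a sketch (mine, in the notation above and in modern Scheme, not McCarthy's original M-expression syntax):

fac(n) = (n = 0 → 1, n > 0 → mult(n, fac(n − 1)))

(define (fac n)
  (cond ((= n 0) 1)
        ((> n 0) (* n (fac (- n 1))))))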
McCarthy's formalism is regarded as a precise and substantial refinement of the partial recursive functions.
Primitive recursive functions plus the minimisation operator give the μ-recursive functions, a broader set of definable functions, equivalent to those computable by a #Turingmachine.
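For reference (a standard definition, not from these notes), the minimisation operator searches for the least root of a function:

μy[g(x1, …, xk, y) = 0] = the least y with g(x1, …, xk, y) = 0 (and g defined for all smaller y); undefined if no such y exists.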
Moschovakis has proposed a mathematical modeling of the notion of algorithm: a set-theoretic "definition" of algorithms, much like the "definition" of real numbers as #Dedekind cuts on the rationals or that of random variables as measurable functions on a probability space.
https://news.ycombinator.com/item?id=20696931