On lisp blogs and discussions, people kept referring to "lambda calculus".
I had never heard of it in any other context. Is it math? CS? Something else?
I got started with "A tutorial introduction to the lambda calculus" by Raúl Rojas. [PDF]
The very start was wild: we have no data-types except for functions!
This is a study of computation using nothing but functions.
The general format of a function is: λ arg . expression
There's only a single argument and it evaluates to expression.
For example:
(λ x . x) is the identity function: it returns its argument as-is.
A function is applied to another function as: (λ x . x) y -> y
Here the y is, of course, another function. There's nothing but functions!
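To make this concrete, here's the identity function sketched as a Python lambda. Python is just my stand-in for λ-terms throughout, not anything from the tutorial:

```python
# A lambda-calculus term as a Python lambda (my illustration, not the
# tutorial's notation).
identity = lambda x: x   # (λ x . x)

# Application "(λ x . x) y -> y": the argument comes back unchanged.
assert identity(42) == 42
# Everything is a function, so it can even be applied to itself:
assert identity(identity) is identity
```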
I'll assume you'll read the paper before or after this, so I'll skip most of the definitions.
Numbers
So, if we have nothing but functions, how do we compute? Calculate 1 + 2 ?
With functions!
We interpret numbers as: 0 means don't do a task.
1 means do a task once, 2 means do it twice.
Thus, 0 ≍ (λ x y . y), 1 ≍ (λ x y . x y)
Let S ≍ (λ n x y . x (n x y)) be the successor function.
It applies x n+1 times to y.
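A quick sketch of these numerals in Python lambdas: the curried λ x y . … becomes two nested lambdas, and a small to_int helper (my own addition, for inspection only) converts a numeral back to an ordinary number:

```python
# Church numerals as Python lambdas: n means "apply x to y, n times".
zero = lambda x: lambda y: y        # 0 ≍ (λ x y . y)
one  = lambda x: lambda y: x(y)     # 1 ≍ (λ x y . x y)

# Successor S ≍ (λ n x y . x (n x y)): one extra application of x.
S = lambda n: lambda x: lambda y: x(n(x)(y))

# To inspect a numeral, apply it to "+1" starting from 0 (my own helper).
to_int = lambda n: n(lambda k: k + 1)(0)

assert to_int(zero) == 0
assert to_int(S(zero)) == 1
assert to_int(S(S(S(zero)))) == 3
```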
The weird (and seemingly arbitrary) grouping of parentheses confused me at first, especially once recursion shows up. But the rule turns out to be simpler than I suspected: application associates to the left, so x y z means (x y) z, and parentheses only override that default. A grouping like x (y z) just means "apply x to the result of y z"; it isn't a new function.
The interesting part
So now, with nothing but functions, let's implement conditionals!
A "True" function is a function returning its first argument:
T ≍ (λ x y . x ), F ≍ (λ x y . y)
Now we have If statements like:
if V { x } { y } is just
V x y
where V evaluates to true or false.
By that very definition, 0 and F are the same function!
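Here are T, F, and the V x y conditional sketched in Python lambdas, including a check that 0 and F really coincide:

```python
# Church booleans as Python lambdas: T picks its first argument, F its second.
T = lambda x: lambda y: x
F = lambda x: lambda y: y

# "if V { x } { y }" is just V x y:
assert T("then")("else") == "then"
assert F("then")("else") == "else"

# And 0 really is the same function as F: both drop x and return y.
zero = lambda x: lambda y: y
assert zero("then")("else") == F("then")("else")
```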
Conditionals
The And function can now be:
ᴧ ≍ (λ x y . x y F)
i.e. if x is true, return y (since true returns its first argument); otherwise return F.
Similarly, Or and Not can be constructed.
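A sketch of And in Python lambdas, plus one standard construction of Or and Not (the tutorial leaves those to the reader, so these particular definitions are my pick):

```python
T = lambda x: lambda y: x
F = lambda x: lambda y: y
to_bool = lambda v: v(True)(False)   # helper for inspection (my own)

# And ≍ (λ x y . x y F): if x is true, the answer is whatever y is; else F.
AND = lambda x: lambda y: x(y)(F)
# One standard construction for Or and Not (my pick, not from the tutorial):
OR  = lambda x: lambda y: x(T)(y)    # if x is true, T; otherwise y decides
NOT = lambda x: x(F)(T)              # swap which argument gets picked

assert to_bool(AND(T)(T)) and not to_bool(AND(T)(F))
assert to_bool(OR(F)(T)) and not to_bool(OR(F)(F))
assert to_bool(NOT(F)) and not to_bool(NOT(T))
```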
Let Z be the conditional function that returns whether its argument is 0:
Z ≍ (λ x . x F ¬ F)
If x is zero, it returns the second arg, thus making it ¬ F ≍ T.
If x is 1, it'll return F(¬) F. Here's an interesting thing:
F ( x ) is always the identity function!
F ≍ (λ x . (λ y . y))
so, F x -> (λ y . y) -> I (the identity function: the argument x is simply discarded!)
Thus F(¬) -> I, F(F(¬)) -> F(I) -> I
So if F ¬ F is evaluated , it'll always become I F -> F!
This test of zero is very useful as we can use it as a base case for recursion!
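The zero test sketched in Python lambdas, checking each of the cases walked through above:

```python
# Same encodings as above, restated so this block runs on its own.
T = lambda x: lambda y: x
F = lambda x: lambda y: y
NOT = lambda x: x(F)(T)
zero = lambda x: lambda y: y
S = lambda n: lambda x: lambda y: x(n(x)(y))
to_bool = lambda v: v(True)(False)   # helper for inspection (my own)

# Z ≍ (λ x . x F ¬ F): zero skips F entirely; any n >= 1 collapses to F.
Z = lambda n: n(F)(NOT)(F)

assert to_bool(Z(zero))             # Z 0 -> ¬ F -> T
assert not to_bool(Z(S(zero)))      # Z 1 -> (F ¬) F -> I F -> F
assert not to_bool(Z(S(S(zero))))   # Z 2 -> F
```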
Counters
We can count up by repeatedly applying S: S x is x+1.
To count down, we use a pair (x+1, x).
The function:
ϕ ≍ (λ p z . z (S (p T)) (p T))
when given input in the form (λ z . z 0 0)
returns (λ z . z 1 0)
ϕ (λ z . z 0 0)
-> ϕ A  (writing A for (λ z . z 0 0))
-> [A/p] (λ z . z (S (p T)) (p T))
-> (λ z . z (S (A T)) (A T))
-> (λ z . z (S ((λ z . z 0 0) T)) ((λ z . z 0 0) T))
-> (λ z . z (S (T 0 0)) (T 0 0))
-> (λ z . z (S 0) 0)
-> (λ z . z 1 0)
It was very weird to see pT instead of Tp, but it turns out this keeps the input format the same as the output's.
The input is a function that runs either True or False on pair of (x+1, x). So is the output.
Since the input is the same as output, we can feed it to itself!
ϕ (ϕ (λ z . z 0 0)) -> ϕ (λ z . z 1 0) -> (λ z . z 2 1)
so, 2 ϕ (λ z . z 0 0) -> (λ z . z 2 1)
n ϕ (λ z . z 0 0) -> (λ z . z n n-1)
Thus, to decrement, we can select the second (n-1) term from the function with F:
(λ z . z n n-1) F -> F n n-1 -> n-1
So much work just to subtract 1!!!
Thus, the "subtract 1" function is:
P ≍ (λ n . n ϕ (λ z . z 0 0) F)
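The pair trick and the predecessor P, sketched in Python lambdas (pair and to_int are my own scaffolding for building and inspecting values):

```python
# Same encodings as above, restated so this block runs on its own.
zero = lambda x: lambda y: y
S = lambda n: lambda x: lambda y: x(n(x)(y))
T = lambda x: lambda y: x
F = lambda x: lambda y: y
to_int = lambda n: n(lambda k: k + 1)(0)   # helper for inspection (my own)

# A pair (a, b) is just a function waiting for a selector: (λ z . z a b).
pair = lambda a: lambda b: lambda z: z(a)(b)

# ϕ ≍ (λ p z . z (S (p T)) (p T)): turns the pair (n, _) into (n+1, n).
phi = lambda p: lambda z: z(S(p(T)))(p(T))

# P ≍ (λ n . n ϕ (λ z . z 0 0) F): run ϕ n times, then select the second slot.
P = lambda n: n(phi)(pair(zero)(zero))(F)

three = S(S(S(zero)))
assert to_int(P(three)) == 2
assert to_int(P(zero)) == 0   # the predecessor of 0 stays 0
```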
With subtraction in hand, we are finally ready for!!!!!
Recursion!
Recursion is, generally, a function calling itself.
A simple function like (λ n . n n) would qualify, but we want something more interesting.
Y = (λ y . (λ x . y (x x)) (λ x . y (x x)))
Notice that this is a function feeding itself!
Just like the (λ n . n n) except this time we have another function that can do work for us.
so if we pass a function R:
Y R -> (λ x . R (x x)) (λ x . R (x x))
-> R ((λ x . R (x x)) (λ x . R (x x)))
-> R(YR)
-> R(R(YR)) -> R(R(R(YR))) -> ...
This recursion "operator" can be used to induce recursion on any function!
Now we just need to add a conditional to this interface to make it bounded.
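One wrinkle when sketching Y in Python: Python evaluates arguments eagerly, so the textbook Y loops forever there. The η-expanded variant (often called the Z combinator) hides the self-application x x behind a lambda, delaying it until the recursive call actually happens. Here it drives a toy countdown on native ints, my simplification rather than a Church encoding:

```python
# Python evaluates arguments eagerly, so the textbook Y loops forever here.
# This eta-expanded variant (the "Z combinator") hides the self-application
# x x behind "lambda v: ...", delaying it until the recursive call happens.
Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Y R -> R (Y R): R receives "itself" as its first argument.  A toy R on
# native ints (my simplification) that counts down from n:
count_down = Y(lambda r: lambda n: [n] + r(n - 1) if n > 0 else [])
assert count_down(3) == [3, 2, 1]
```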
Factorial
If we assume there's a multiplication function, M x y -> x * y, then the factorial function could be:
F = (λ n . Y R n)
where R = (λ r n . Z n 1 (M n (r (P n))))
The (λ n . Y R n ) is a general format to induce recursion n times.
R = (λ r n . Z n 1 (M n (r (P n))))
Here we first test if n is zero (Z n) and if so, return 1; the recursive branch is never touched in that case. This is lazy evaluation!!!
Else we do (M n (r (P n))),
which multiplies n with r (P n) -> r (n-1). Since Y R n -> R (Y R) n, the `r' arg is Y R.
r (n-1) -> Y R (n-1).
And thus bounded recursion is born!
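Putting it all together in Python lambdas: the block below restates the earlier encodings so it runs on its own, and both branches of Z n are wrapped in thunks (lambda: ...), because eager Python would otherwise evaluate the recursive branch even at n = 0. The thunks are my stand-in for the lazy evaluation the derivation relies on; with base case 1, the scheme computes n!:

```python
# Everything the factorial needs, restated so this block runs on its own
# (same encodings as earlier in the post; to_int/pair are my own helpers).
zero = lambda x: lambda y: y
S = lambda n: lambda x: lambda y: x(n(x)(y))
one = S(zero)
T = lambda x: lambda y: x
F = lambda x: lambda y: y
NOT = lambda x: x(F)(T)
Z = lambda n: n(F)(NOT)(F)                    # zero test
M = lambda x: lambda y: lambda z: x(y(z))     # multiplication of numerals
pair = lambda a: lambda b: lambda z: z(a)(b)
phi = lambda p: lambda z: z(S(p(T)))(p(T))
P = lambda n: n(phi)(pair(zero)(zero))(F)     # predecessor
to_int = lambda n: n(lambda k: k + 1)(0)

# Eta-expanded Y (the "Z combinator") so eager Python doesn't loop forever.
Y = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# R = (λ r n . Z n 1 (M n (r (P n)))), with both branches wrapped in thunks:
# eager Python would otherwise evaluate the recursive branch even when n = 0.
R = lambda r: lambda n: Z(n)(lambda: one)(lambda: M(n)(r(P(n))))()

fact = Y(R)
assert to_int(fact(S(S(S(zero))))) == 6   # 3! = 6
```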
Conclusion
All of this, using nothing but functions!
Initially it looked very weird to limit yourself to nothing but functions.
But then it seems like a very nice way to study computation!
Similar to how Turing machines can be used to study a computer with memory and a CPU,
the lambda calculus seems a very nice mechanism for studying functions!
Since I've only read a tutorial on it, I want to read more on this topic.
Thinking in this functional way is really new to me!
And this is the first time I've seen lazy evaluation actually put to use!