= Normal Form =
We talked before about having "targets" for our evaluation of programs in the Untyped Lambda Calculus. Notionally, we should be trying to reduce any program to the "simplest" program, so that it can be interpreted as a "result". If β-reduction is the tool we use to make a program "simpler", then what would the "simplest" program be? It would be a program that cannot be β-reduced. We say that such programs are in "β normal form", or sometimes just "normal form".
For example, the following programs:

x y
λx.x
λx.λy.y x

are all in β normal form. None of them contains a possible β-reduction (a β-reducible expression, often called a "redex"). The following programs, by contrast, are all *not* in β normal form:

(λx.x) y
λx.(λy.y) x
Not every program, however, has an equivalent program in normal form. Consider for example the following program:
(λx.x x)(λx.x x)
What happens after a single step of beta-reduction on this program? We get:
(λx.x x)(λx.x x)
Exactly the same thing we started with! If you keep reducing this program, it will keep giving you back the exact same thing, and so it will never be reduced to beta normal form, because a further beta reduction is always possible. We say that such programs "diverge".
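This behavior is easy to check mechanically. Below is a minimal sketch in Python of one-step leftmost-outermost β-reduction; the tuple encoding of terms, the naive capture-unsafe `subst`, and the function name `step` are assumptions of this sketch, not a definitive implementation:

```python
# Term encoding (an assumption of this sketch): a variable is a string,
# ('lam', x, body) is the abstraction λx.body, ('app', f, a) is f a.

def subst(term, name, value):
    """Naive term[name := value]; assumes no variable capture,
    which holds for the closed example below."""
    if isinstance(term, str):
        return value if term == name else term
    if term[0] == 'lam':
        _, x, body = term
        return term if x == name else ('lam', x, subst(body, name, value))
    _, f, a = term
    return ('app', subst(f, name, value), subst(a, name, value))

def step(term):
    """Perform one leftmost-outermost beta-reduction; return (term', reduced?)."""
    if isinstance(term, tuple) and term[0] == 'app':
        _, f, a = term
        if isinstance(f, tuple) and f[0] == 'lam':   # outermost redex: contract it
            return subst(f[2], f[1], a), True
        f2, done = step(f)
        if done:
            return ('app', f2, a), True
        a2, done = step(a)
        if done:
            return ('app', f, a2), True
    if isinstance(term, tuple) and term[0] == 'lam':
        body2, done = step(term[2])
        if done:
            return ('lam', term[1], body2), True
    return term, False

# (λx.x x)(λx.x x): a redex fires, yet the result is the term we started with.
omega = ('lam', 'x', ('app', 'x', 'x'))
big_omega = ('app', omega, omega)
after, reduced = step(big_omega)
print(reduced, after == big_omega)  # True True
```

No matter how many times `step` is applied, the term never changes, so no finite number of reductions reaches β normal form.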
= Reduction Strategies =
In order to reduce a program all the way to beta normal form (if possible), we want a strategy for choosing which applications to reduce. Of course, we could always just pick one and reduce it (as we have done so far), but this can be hard to reason about and is certainly more difficult when it comes to building a machine to do it! Let's consider this program:
(λx.x z)((λy.z)w)
There are two possible beta-reductions here. Either we can apply the left lambda abstraction to the whole argument on the right, or else we could first perform the application inside the argument on the right. Intuitively, it seems like it might be easier to do the latter, since then each application that we do will be using the simplest possible form of the argument for substitution.
(λx.x z)((λy.z)w)
(λx.x z)z
z z
That seems to work very well. Reduce arguments to normal form, then apply them. This strategy is called Applicative Order Reduction (or sometimes "leftmost innermost" because that is the expression we always choose to reduce) and modified forms of it are used to implement most programming languages.
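As a sketch of how a machine might carry this strategy out, here is a leftmost-innermost single-step reducer in Python over a tuple encoding of terms; the encoding, the naive capture-unsafe `subst`, and the name `ao_step` are assumptions of this sketch:

```python
# Term encoding (an assumption of this sketch): a variable is a string,
# ('lam', x, body) is λx.body, ('app', f, a) is the application f a.

def subst(term, name, value):
    """Naive term[name := value]; assumes no variable capture (safe here)."""
    if isinstance(term, str):
        return value if term == name else term
    if term[0] == 'lam':
        _, x, body = term
        return term if x == name else ('lam', x, subst(body, name, value))
    _, f, a = term
    return ('app', subst(f, name, value), subst(a, name, value))

def ao_step(term):
    """One leftmost-innermost (Applicative Order) beta-reduction."""
    if isinstance(term, tuple) and term[0] == 'app':
        _, f, a = term
        f2, done = ao_step(f)                  # inner redexes on the left first
        if done:
            return ('app', f2, a), True
        a2, done = ao_step(a)                  # then inner redexes in the argument
        if done:
            return ('app', f, a2), True
        if isinstance(f, tuple) and f[0] == 'lam':
            return subst(f[2], f[1], a), True  # only then the outer redex
    if isinstance(term, tuple) and term[0] == 'lam':
        body2, done = ao_step(term[2])
        if done:
            return ('lam', term[1], body2), True
    return term, False

# (λx.x z)((λy.z)w): the inner redex (λy.z)w is contracted first.
prog = ('app', ('lam', 'x', ('app', 'x', 'z')),
               ('app', ('lam', 'y', 'z'), 'w'))
trace = [prog]
while True:
    nxt, done = ao_step(trace[-1])
    if not done:
        break
    trace.append(nxt)
print(trace[-1])  # ('app', 'z', 'z'), i.e. the normal form z z
```

The trace visits exactly the intermediate form shown above, `(λx.x z)z`, before arriving at `z z`.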
There is a potential problem with this strategy, however. Consider the following program:
(λx.z)((λx.x x)(λx.x x))
What will happen here? We will choose to reduce the leftmost innermost expression to normal form before substituting it into the lambda abstraction on the far left. However, as we saw before, the innermost expression here diverges: it will never reach normal form, and we will keep trying to reduce it forever without ever moving on. Is this a problem? Perhaps the whole program diverges, and so there is no normal form for the program anyway. Upon examination, however, we see that the program follows this pattern:

(λx.z)(...)

So no matter what we put in the `(...)`, the β normal form of this program is:

z

because the lambda abstraction ignores its argument and `z` is always the result. So Applicative Order Reduction is not guaranteed to find a β normal form, even when one exists.
We might consider a different strategy. If Applicative Order Reduction is leftmost innermost, what about leftmost outermost? This strategy is called Normal Order Reduction. Returning to our first example:
(λx.x z)((λy.z)w)
((λy.z)w) z
z z
So we see that this strategy can produce the same results as Applicative Order Reduction, but does things in a different order. Now what about our second example?
(λx.z)((λx.x x)(λx.x x))
z

Because the leftmost outermost redex immediately reduces the whole program to `z`, we never even look at the diverging argument, and so we reach the correct β normal form.
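As a sketch, here is this strategy in Python: a leftmost-outermost single-step reducer over a tuple encoding of terms (the encoding and the naive, capture-unsafe `subst` are assumptions of this sketch):

```python
# Term encoding (an assumption of this sketch): a variable is a string,
# ('lam', x, body) is λx.body, ('app', f, a) is the application f a.

def subst(term, name, value):
    """Naive term[name := value]; assumes no variable capture (safe here)."""
    if isinstance(term, str):
        return value if term == name else term
    if term[0] == 'lam':
        _, x, body = term
        return term if x == name else ('lam', x, subst(body, name, value))
    _, f, a = term
    return ('app', subst(f, name, value), subst(a, name, value))

def step(term):
    """One leftmost-outermost (Normal Order) beta-reduction."""
    if isinstance(term, tuple) and term[0] == 'app':
        _, f, a = term
        if isinstance(f, tuple) and f[0] == 'lam':   # outermost redex wins
            return subst(f[2], f[1], a), True
        f2, done = step(f)
        if done:
            return ('app', f2, a), True
        a2, done = step(a)
        if done:
            return ('app', f, a2), True
    if isinstance(term, tuple) and term[0] == 'lam':
        body2, done = step(term[2])
        if done:
            return ('lam', term[1], body2), True
    return term, False

# (λx.z)((λx.x x)(λx.x x)): the outermost redex is the whole program.
omega = ('lam', 'x', ('app', 'x', 'x'))
prog = ('app', ('lam', 'x', 'z'), ('app', omega, omega))
result, reduced = step(prog)
print(result)  # 'z' -- the diverging argument is discarded, never reduced
```

A single step suffices because the argument is substituted as-is; since `x` never occurs in the body `z`, the diverging term simply vanishes.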
It can be proven that Normal Order Reduction will *always* find a β normal form if one exists. It can also be proven (a consequence of the Church–Rosser theorem) that any reduction strategy that reaches a normal form for a given program reaches the *same* normal form, up to α-equivalence; strategies differ only in whether they find it at all.
= The Halting Problem =
We mentioned above that not every program has an equivalent β normal form. Given some program, how can we tell if it does or not?
Well, with the example above:
(λx.x x)(λx.x x)
(λx.x x)(λx.x x)
We could think about writing a program that keeps track of the previous step of β-reduction: if the current program is identical to the previous one, we are not making progress, and we know there is no equivalent β normal form. What if a program switches back and forth between two forms? Then keep track of the previous two steps. In general, if a program returns to a form that is α-equivalent to an earlier form, it will never reach β normal form under the given reduction strategy. And since Normal Order Reduction is guaranteed to find a normal form whenever one exists, returning to an earlier form under Normal Order Reduction tells us that no equivalent normal form exists.
This is great. If we reduce under Normal Order Reduction and reach a normal form, then one exists; and if we reach a form equivalent to an earlier step, then no normal form exists. But what if we have not yet reached a normal form and have also not seen an earlier form? Keep reducing? For how long?
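The detector described above can be sketched in Python. Note one simplification: this version only spots a form that is *identical* to an earlier one, whereas a full detector would compare up to α-equivalence (e.g. via de Bruijn indices). The term encoding, `subst`, `step`, and `reduce` are all assumptions of this sketch:

```python
# Term encoding (an assumption of this sketch): a variable is a string,
# ('lam', x, body) is λx.body, ('app', f, a) is the application f a.

def subst(term, name, value):
    """Naive term[name := value]; assumes no variable capture (safe here)."""
    if isinstance(term, str):
        return value if term == name else term
    if term[0] == 'lam':
        _, x, body = term
        return term if x == name else ('lam', x, subst(body, name, value))
    _, f, a = term
    return ('app', subst(f, name, value), subst(a, name, value))

def step(term):
    """One leftmost-outermost (Normal Order) beta-reduction."""
    if isinstance(term, tuple) and term[0] == 'app':
        _, f, a = term
        if isinstance(f, tuple) and f[0] == 'lam':
            return subst(f[2], f[1], a), True
        f2, done = step(f)
        if done:
            return ('app', f2, a), True
        a2, done = step(a)
        if done:
            return ('app', f, a2), True
    if isinstance(term, tuple) and term[0] == 'lam':
        body2, done = step(term[2])
        if done:
            return ('lam', term[1], body2), True
    return term, False

def reduce(term, limit=1000):
    """Normal Order Reduction with loop detection and a step budget."""
    seen = {term}
    for _ in range(limit):
        nxt, did = step(term)
        if not did:
            return ('normal', nxt)     # no redex left: beta normal form
        if nxt in seen:
            return ('diverges', nxt)   # revisited an earlier form
        seen.add(nxt)
        term = nxt
    return ('unknown', term)           # budget exhausted: no verdict

omega = ('lam', 'x', ('app', 'x', 'x'))
print(reduce(('app', omega, omega))[0])                           # diverges
print(reduce(('app', ('lam', 'x', 'z'), ('app', omega, omega))))  # ('normal', 'z')
```

The `'unknown'` verdict is the crux: the step budget is arbitrary, and in general no budget is large enough to guarantee one of the other two answers.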
The problem is that we have no guarantee that either of these conditions will ever be reached. This is called "The Halting Problem", because it is the problem of computing whether or not a given program will "halt" (here, reach normal form). There exists a proof that the Halting Problem cannot be solved by any Turing-equivalent system for arbitrary programs. Because of this, any problem that can be shown to require solving a problem equivalent to the Halting Problem as a subproblem is called "undecidable" or "uncomputable": it cannot be computed by a Turing-equivalent system (and the Church–Turing thesis asserts that no more powerful system exists).