Suggestions for clarity #114 (Open): wants to merge 11 commits into base `master`.
7 changes: 6 additions & 1 deletion conv.md
@@ -14,7 +14,7 @@ As a first example, let us prove example
(examples in this file are somewhat artificial since
other tactics could finish them immediately). The naive
first attempt is to enter tactic mode and try `rw [Nat.mul_comm]`. But this
transforms the goal into `b * c * a = a * (c * b)`, after commuting the
transforms the goal into `b * c * a = a * (c * b)`, because it commutes the
very first multiplication appearing in the term. There are several
ways to fix this issue, and one way is to use a more precise tool:
the conversion mode. The following code block shows the current target
@@ -155,8 +155,13 @@ example (g : Nat → Nat) (h₁ : g x = x + 1) (h₂ : x > 0) : g x = f x := by
exact h₁
```


- `enter [1, x, 2, y]` iterates `arg` and `intro` with the given arguments. It is simply a macro for:

<!-- Using the specific example [1, x, 2, y] here is confusing.
For example, the reader might think that only this particular form (alternating index, name) can be used.
Adding, say, [1, 2, x] as another example would also be useful. -->
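As a concrete illustration, the multiplication example from the top of this file could be closed with `enter` instead of individual `arg`/`intro` steps (a sketch, assuming the same `Nat.mul_comm` setup as above):

```lean
example (a b c : Nat) : a * (b * c) = a * (c * b) := by
  conv =>
    lhs               -- focus on a * (b * c)
    enter [2]         -- descend into the second argument, b * c
    rw [Nat.mul_comm] -- rewrite it to c * b
```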

```
syntax enterArg := ident <|> group("@"? num)
syntax "enter " "[" (colGt enterArg),+ "]": conv
```
78 changes: 44 additions & 34 deletions dependent_type_theory.md
@@ -256,8 +256,8 @@ type signature of the function ``List``:
```

Here ``u`` is a variable ranging over type levels. The output of the
``#check`` command means that whenever ``α`` has type ``Type n``,
``List α`` also has type ``Type n``. The function ``Prod`` is
``#check List`` command means that whenever ``α`` has type ``Type u``,
``List α`` also has type ``Type u``. The function ``Prod`` is
similarly polymorphic:

```lean
#check Prod
```

@@ -355,7 +355,7 @@ You can also pass types as parameters:
```lean
#check fun (α β γ : Type) (g : β → γ) (f : α → β) (x : α) => g (f x)
```
The last expression, for example, denotes the function that takes

Since there's only one expression in this example, using "last" makes one think that something other than the full expression is being referred to.

This expression, for example, denotes the function that takes
three types, ``α``, ``β``, and ``γ``, and two functions, ``g : β → γ``
and ``f : α → β``, and returns the composition of ``g`` and ``f``.
(Making sense of the type of this function requires an understanding
@@ -482,30 +482,33 @@ So `def` can also be used to simply name a value like this:
def pi := 3.141592654
```

`def` can take multiple input parameters. Let's create one
`def` can take multiple input parameters. Let's define a function
that adds two natural numbers:

```lean
-- Reviewer note: when the `add (x y : Nat)` form comes first, a newcomer might
-- think it is an uncurried function of two arguments, so it seems better to
-- show `add (x : Nat) (y : Nat)` first and then say it can be abbreviated.
def add (x : Nat) (y : Nat) :=
  x + y

#eval add 3 2 -- 5
```

The parameter list can be separated like this:
For convenience, the parameter list can be abbreviated as follows:

```lean
# def double (x : Nat) : Nat :=
# x + x
def add (x : Nat) (y : Nat) :=
def add (x y : Nat) :=
x + y

#eval add (double 3) (7 + 9) -- 22
```
The resulting function will display a different *signature*, but the function itself will have the same type as before.

Notice here we called the `double` function to create the first
Notice that we used the `double` function to create the first
parameter to `add`.

<!-- a quick note about currying would be in order here, and perhaps a definition of "signature" -->
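Picking up the comment above, currying could be illustrated with a short sketch (`addFive` is a hypothetical name, not from the original text):

```lean
def add (x : Nat) (y : Nat) :=
  x + y

-- `add` is curried: applying it to a single argument yields a function Nat → Nat.
def addFive : Nat → Nat := add 5

#eval addFive 3 -- 8
```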

You can use other more interesting expressions inside a `def`:

```lean
@@ -517,7 +520,7 @@ def greater (x y : Nat) :=
You can probably guess what this one will do.

You can also define a function that takes another function as input.
The following calls a given function twice passing the output of the
The following calls a given function twice, passing the output of the
first invocation to the second:

```lean
@@ -530,30 +530,30 @@ def doTwice (f : Nat → Nat) (x : Nat) : Nat :=
```

Now to get a bit more abstract, you can also specify arguments that
are like type parameters:
are like type parameters,
such as `(α β γ : Type)` here:

```lean
def compose (α β γ : Type) (g : β → γ) (f : α → β) (x : α) : γ :=
g (f x)
```

This means `compose` is a function that takes any two functions as input
arguments, so long as those functions each take only one input.
The type algebra `β → γ` and `α → β` means it is a requirement
This defines `compose` as a function that takes three type arguments (`α β γ : Type`) and two other functions as arguments,
locally named `g` and `f`. Furthermore,
the type constraints `β → γ` and `α → β` require
the output type of the second function to match the
input type of the first function, which makes sense: otherwise
the two functions would not be composable.

`compose` also takes a 3rd argument of type `α` which
it uses to invoke the second function (locally named `f`) and it
passes the result of that function (which is type `β`) as input to the
first function (locally named `g`). The first function returns a type
`γ` so that is also the return type of the `compose` function.
`compose` also takes one last argument `x` of type `α`, which
it uses to invoke the second function (`f`).
The result of that function, of type `β`, is passed as the input to the
first function (`g`). The first function returns a result of type
`γ`, so that is also the return type of the `compose` function.

`compose` is also very general in that it works over any type
`α β γ`. This means `compose` can compose just about any 2 functions
so long as they each take one parameter, and so long as the type of
output of the second matches the input of the first. For example:
`compose` is also very general in that it works over any types
`α β γ`. This means `compose` can compose just about any two functions
as long as the output type of the second matches the input type of the first. For example:

```lean
# def compose (α β γ : Type) (g : β → γ) (f : α → β) (x : α) : γ :=
Expand All @@ -566,6 +569,10 @@ def square (x : Nat) : Nat :=
#eval compose Nat Nat Nat double square 3 -- 18
```

Lean has syntax to make the three type parameters (`α β γ`) implicit,
so that they don't have to be explicitly included each time that `compose` is called ---
see the [Implicit Arguments](#Implicit-Arguments) section below.
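As a preview, a version of `compose` with implicit type parameters might look like this (a sketch; `compose'` is a hypothetical name used to avoid clashing with the definition above):

```lean
def compose' {α β γ : Type} (g : β → γ) (f : α → β) (x : α) : γ :=
  g (f x)

def double (x : Nat) : Nat := x + x
def square (x : Nat) : Nat := x * x

-- The three types are now inferred automatically from `double` and `square`.
#eval compose' double square 3 -- 18
```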

Local Definitions
-----------------

@@ -733,12 +740,13 @@ open Foo
#check Foo.fa
```

When you declare that you are working in the namespace ``Foo``, every
identifier you declare has a full name with prefix "``Foo.``". Within
the namespace, you can refer to identifiers by their shorter names,
but once you end the namespace, you have to use the longer names.
Unlike `section`, namespaces require a name. There is only one
anonymous namespace at the root level.
Inside the scope of the namespace ``Foo``, every
identifier you declare is given a full name with prefix "``Foo.``".
Within the namespace, you can use the shorter names,
but once you end the namespace, you must use the longer, *fully qualified* names.

Unlike `section`, namespaces require a name. (There is only one
anonymous namespace: the one at the root level.)
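A minimal sketch of this behavior (the body of `Foo.fa` is assumed for illustration):

```lean
namespace Foo
def fa : Nat := 1
def fb : Nat := fa + 1  -- the short name works inside the namespace
end Foo

#check Foo.fa  -- outside, the fully qualified name is required
```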

The ``open`` command brings the shorter names into the current
context. Often, when you import a module, you will want to open one or
@@ -1076,10 +1084,10 @@ used to specify the desired types of the expressions ``id`` and
```

Numerals are overloaded in Lean, but when the type of a numeral cannot
be inferred, Lean assumes, by default, that it is a natural number. So
be inferred, Lean assumes, by default, that it is a natural number. Thus,
the expressions in the first two ``#check`` commands below are
elaborated in the same way, whereas the third ``#check`` command
interprets ``2`` as an integer.
both elaborated as `Nat`, whereas the third ``#check`` command
interprets ``2`` as an `Int`.

```lean
#check 2 -- Nat
#check 2 + 2     -- Nat
#check (2 : Int) -- Int
```

@@ -1102,6 +1110,8 @@ made explicit.
#check @id Bool true -- Bool
```

Notice that now the first ``#check`` command gives the type of the
identifier, ``id``, without inserting any placeholders. Moreover, the
output indicates that the first argument is implicit.
Notice that the ``#check @id`` command now gives the full type of the
identifier, ``id``, without any placeholders. Moreover, the
curly brackets in the
output indicate that the first argument, `α`, is implicit.

2 changes: 2 additions & 0 deletions interacting_with_lean.md
@@ -290,6 +290,8 @@ theorem List.isPrefix_self (as : List α) : as ≤ as :=
⟨[], by simp⟩
```

<!-- This example is confusing because the `instance:...` command does not mention ≤ -->

That assignment can also be made local:

```lean
22 changes: 13 additions & 9 deletions propositions_and_proofs.md
@@ -13,12 +13,12 @@ One strategy for proving assertions about objects defined in the
language of dependent type theory is to layer an assertion language
and a proof language on top of the definition language. But there is
no reason to multiply languages in this way: dependent type theory is
flexible and expressive, and there is no reason we cannot represent
flexible and expressive, and it turns out that we can represent

Rewording this positively makes it clearer that the following discussion is not meant to be an example of layering languages on top. (This could, perhaps, be made a little more explicit still.)

assertions and proofs in the same general framework.

For example, we could introduce a new type, ``Prop``, to represent
For example, we can introduce a new type, ``Prop``, to represent
propositions, and introduce constructors to build new propositions
from others.
from others:

```lean
# def Implies (p q : Prop) : Prop := p → q
```

@@ -49,7 +49,7 @@ variable (p q : Prop)
#check and_comm p q -- Proof (Implies (And p q) (And q p))
```

In addition to axioms, however, we would also need rules to build new
In addition to axioms, however, we also need rules to build new
proofs from old ones. For example, in many proof systems for
propositional logic, we have the rule of *modus ponens*:
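In the `Proof`/`Implies` encoding above, modus ponens might be rendered as follows (a sketch; the hidden `Proof` declaration is an assumption made to keep the block self-contained):

```lean
# def Implies (p q : Prop) : Prop := p → q
# axiom Proof : Prop → Type
axiom modus_ponens : (p q : Prop) → Proof (Implies p q) → Proof p → Proof q
```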

@@ -77,7 +77,8 @@ We could render this as follows:

```lean
axiom implies_intro : (p q : Prop) → (Proof p → Proof q) → Proof (Implies p q)
```

This approach would provide us with a reasonable way of building assertions and proofs.
This approach would provide us with a reasonable way of building assertions and proofs,
staying inside the framework of dependent type theory.
Determining that an expression ``t`` is a correct proof of assertion ``p`` would then
simply be a matter of checking that ``t`` has type ``Proof p``.

@@ -136,7 +137,7 @@ that even though we can treat proofs ``t : p`` as ordinary objects in
the language of dependent type theory, they carry no information
beyond the fact that ``p`` is true.

The two ways we have suggested thinking about the
The two ways we have suggested for thinking about the
propositions-as-types paradigm differ in a fundamental way. From the
constructive point of view, proofs are abstract mathematical objects
that are *denoted* by suitable expressions in dependent type
@@ -156,7 +157,7 @@ syntax and semantics by saying, at times, that a program "computes" a
certain function, and at other times speaking as though the program
"is" the function in question.

In any case, all that really matters is the bottom line. To formally
In any case, all that really matters is the bottom line: To formally
express a mathematical assertion in the language of dependent type
theory, we need to exhibit a term ``p : Prop``. To *prove* that
assertion, we need to exhibit a term ``t : p``. Lean's task, as a
@@ -191,7 +192,7 @@ true.
Note that the ``theorem`` command is really a version of the
``def`` command: under the propositions and types
correspondence, proving the theorem ``p → q → p`` is really the same
as defining an element of the associated type. To the kernel type
as defining an element of the associated type. To the Lean kernel type
checker, there is no difference between the two.
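For instance, the same term can be given either way (a small sketch; the `variable` line and the name `t1'` are assumptions for illustration):

```lean
variable (p q : Prop)

theorem t1 : p → q → p := fun hp _ => hp

-- To the kernel, this `def` produces exactly the same kind of term:
def t1' : p → q → p := fun hp _ => hp
```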

There are a few pragmatic differences between definitions and
@@ -261,7 +262,7 @@ theorem t2 : q → p := t1 hp
```

The ``axiom`` declaration postulates the existence of an
element of the given type and may compromise logical consistency. For
element of the given type, and may compromise logical consistency. For
example, we can use it to postulate that the empty type `False` has an
element:
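The elided code block presumably resembles the following sketch (`unsound` is a hypothetical name):

```lean
axiom unsound : False
-- Everything follows from `False`:
theorem ex : 1 = 0 := False.elim unsound
```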

@@ -321,6 +322,9 @@ be written ``∀ (p q : Prop) (hp : p) (hq : q), p``, since the arrow
denotes nothing more than an arrow type in which the target does not
depend on the bound variable.

<!-- Perhaps point out that `variable {p : Prop}` does not mean that `p` is an axiom, but
`variable (hp : p)` does, letting us prove `theorem pholds : p := hp` -->

When we generalize ``t1`` in such a way, we can then apply it to
different pairs of propositions, to obtain different instances of the
general theorem.