
Address a "setoid hell"? #10871

Open · XVilka opened this issue Oct 10, 2019 · 45 comments
@XVilka commented Oct 10, 2019

Yesterday I watched the video by Prof. Buzzard about Lean, and there was a question about how it compares to Coq (obviously); he replied that, basically, in Lean it's straightforward to express quotients, while in Coq you get "setoid hell". This specific statement is at 1:04:20 in the video. The question is at 1:00:02 and is probably useful for context.

Sorry, I am a newbie to Coq and have never tried Lean (yet), but for him it seemed to be a deal-breaker, so maybe it's possible to address this problem somehow? Note that his goal is not merely to prove something, but to build a substantial corpus of mathematical foundations, similar to Metamath and the Univalent Foundations project, which he said haven't achieved substantial progress in the past decades, while he has been more efficient with Lean.

On how quotients are implemented in Lean, see Chapter 11 (Axioms and Computation) of the Theorem Proving in Lean book.
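For context, "setoid hell" refers to the boilerplate of working with an explicit equivalence relation instead of a real quotient type. A minimal Coq sketch of the pattern (the encoding of integers as pairs of naturals is a standard illustrative example; exact tactic details may vary between Coq versions):

```coq
(* "Setoid hell": instead of a quotient type, one carries a carrier plus an
   explicit equivalence relation, and must register every operation as a
   morphism by hand before setoid rewriting works. *)
Require Import Setoid Morphisms Lia.

(* Integers encoded as pairs of naturals, (a, b) standing for a - b. *)
Definition Z' := (nat * nat)%type.
Definition eqZ' (x y : Z') : Prop := fst x + snd y = fst y + snd x.

Instance eqZ'_equiv : Equivalence eqZ'.
Proof. split; red; intros; unfold eqZ' in *; lia. Qed.

Definition addZ' (x y : Z') : Z' := (fst x + fst y, snd x + snd y).

(* Without this Proper instance, rewriting with eqZ' under addZ' fails;
   every new operation needs such a registration. *)
Instance addZ'_proper : Proper (eqZ' ==> eqZ' ==> eqZ') addZ'.
Proof. unfold Proper, respectful, addZ', eqZ'; intros; simpl; lia. Qed.
```

With a genuine quotient type, the `Proper` obligations would be discharged once at the definition of each lifted function instead of accumulating as typeclass instances.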

I found a few relevant discussions over the internet:

There is some quotient implementation in the Coq-contribs, but I'm not sure how useful it is: https://github.com/coq-contribs/rational/tree/master/Quotient

It would be nice to know what the Coq developers themselves think about this.

P.S. His second complaint is that Lean supports Unicode tokens and terms out of the box, while in Coq you have to either use the Unicode.Utf8 library (which is rather limited compared to Lean) or define something less standard yourself.

@ppedrot (Member) commented Oct 10, 2019

Quotient types are well known to be tricky in type theory. As far as I know, the only sensible way to allow them is to change the notion of equality to a more extensional one, like SetoidTT (which gives you, amongst other things, funext and UIP) or CubicalTT (which gives you univalence). In particular, in these systems equality is not defined as an inductive type, and proofs of x = x may not reduce to reflexivity.
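For reference, the inductive definition of equality under discussion, reproduced here with primed names so as not to clash with Coq's stdlib `eq`:

```coq
(* Equality as an inductive type: the only constructor is reflexivity,
   which is why closed proofs of x = x normally reduce to it. *)
Inductive eq' {A : Type} (x : A) : A -> Prop :=
  | eq_refl' : eq' x x.
```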

Lean does not escape this limitation, as it defines the equality type inductively, as in Coq. Rather, in a mastermind PR campaign, they successfully convinced non-experts in type theory that they could give them quotient types without breaking everything around.

The first thing they do is axiomatize quotient types, which is somewhat fine and can be done the same way in Coq. As with any use of axioms, this breaks canonicity, meaning your computations may get stuck on some opaque term. (While slightly cumbersome, axiomatizing quotients already solves 99.9% of the problems of the working mathematician.)
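Such an axiomatization can be sketched in Coq as follows; the names `quot`, `quot_mk`, etc. are illustrative, not from the standard library:

```coq
(* A hypothetical axiomatization of quotient types in Coq. Computation gets
   stuck: quot_lift f H (quot_mk R a) is only propositionally equal to f a,
   not definitionally, which is the loss of canonicity mentioned above. *)
Axiom quot : forall (A : Type), (A -> A -> Prop) -> Type.
Axiom quot_mk : forall {A : Type} (R : A -> A -> Prop), A -> quot A R.
Axiom quot_sound : forall {A : Type} {R : A -> A -> Prop} (a b : A),
    R a b -> quot_mk R a = quot_mk R b.
Axiom quot_lift : forall {A B : Type} {R : A -> A -> Prop} (f : A -> B),
    (forall a b, R a b -> f a = f b) -> quot A R -> B.
(* The computation rule only holds as a propositional axiom: *)
Axiom quot_lift_beta : forall {A B : Type} {R : A -> A -> Prop}
    (f : A -> B) (H : forall a b, R a b -> f a = f b) (a : A),
    quot_lift f H (quot_mk R a) = f a.
```

Turning `quot_lift_beta` into a definitional rule is exactly the step discussed next.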

Now, here is where they do evil things that would make decent type theorists shake in terror: they add a definitional reduction rule for the quotient induction principle. We could do the same in the Coq kernel, but there is a very good reason not to. Namely, this definitional rule breaks subject reduction, a property deemed more important than canonicity.

In Lean, you can write a term t : A where t reduces to u, but u doesn't have type A (or, equivalently, may not typecheck at all). This is BAD for a flurry of reasons, mark my words. Reinstating subject reduction while keeping these quotients requires an undecidable type-checking algorithm, which is another can of worms you don't want to wander into. The same kind of trouble arises from similar Lean extensions, like choice.

To be honest, Coq also fails at SR, but due to historical design issues that ought to be fixed at some point. Coinductive types are well known to break SR, owing to a poor understanding of their consequences at the time they were introduced. See my PR #10764 for more details about why this happens and how to fix it (the fix would correspond, for quotient types, to restricting the elimination principle to predicates preserving definitional equality).

Everything is not hopeless, though. We have recently been working on rewrite rules for Coq, which might provide the ability to define quotients that compute, and to break the good properties of CIC in an opt-in and generic way, rather than relying on properties hardwired in the kernel. Another likely candidate for well-behaved quotients is the extension of Coq to CoqMT, which preserves the good metatheoretical properties of CIC at the cost of only allowing decidable quotients.

Alternatively, it is possible to embed SetoidTT into Coq + SProp via a syntactic translation, which would allow you to define a new theory atop Coq and reason inside it; but this currently has quite a few drawbacks, even though we are also working on making this kind of extension more transparent.

TL;DR: Lean's support for quotients is FUD. It breaks an essential metatheoretical property of CIC. Coq people care about this kind of property, and prefer extending the theory much more slowly, at the cost of reduced expressivity (but what use is the expressivity of a broken system? You could work in set theory, for that matter).

People that I know have something to say in this thread include @amahboubi, @tabareau and @TheoWinterhalter, so let me ping them.

@jespercockx commented Oct 10, 2019

Thanks @ppedrot for the in-depth explanation. I didn't know Lean actually broke subject reduction in this way!

In Agda, you can currently use --cubical mode to define quotients as a higher inductive type, preserving both canonicity and subject reduction. However, in my experience it is currently still a bit cumbersome to work with. In particular, working with interval variables rather than pattern matching on refl can get annoying (or maybe it just takes some time getting used to). I'm very curious if a variant of cubical with UIP (e.g. XTT) will provide a solution.

I have also seen people encoding quotients in Agda using a combination of (strict) Prop and rewrite rules; see e.g. the initiality project. This looks very similar to what happens in Lean, in that you postulate quotients and then add the computation rule for the eliminator as a rewrite rule. It seems like this might cause the same problem with subject reduction that @ppedrot describes (the current implementation of rewrite rules in Agda is unsafe in that way).

@SkySkimmer (Contributor) commented Oct 10, 2019

@ppedrot what about the reduction rule for quotients breaks SR?


You can define the primitives for Lean quotients with a HIT-style private inductive:

Module Export quot. (* simulating Lean namespace with module export kinda works *)

  (* A and R are explicit parameters so that quot actually depends on R;
     a section variable that is unused in the inductive would be dropped *)
  Private Inductive quot (A:Type) (R:A->A->Prop) : Type := mk : A -> quot A R.

  Definition quot_lift {A:Type} {R:A->A->Prop} B (f:A -> B) (q:quot A R)
    : (forall a b, R a b -> f a = f b) -> B
    := match q with mk x => fun _ => f x end.

  Definition quot_ind {A:Type} {R:A->A->Prop} (P:quot A R -> Prop)
    (f:forall a, P (mk A R a)) (q:quot A R) : P q
    := match q as q0 return P q0 with mk x => f x end.

End quot.

Then you add some axioms. Private inductives can be a bit awkward w.r.t. what they guarantee (see e.g. #9609), but it basically works.
(this isn't really a HIT, because you only have dependent elimination into assumed-proof-irrelevant Prop)
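The "some axioms" step is essentially Lean's quot.sound; a sketch, assuming mk takes the parameters A and R explicitly as above:

```coq
(* Soundness axiom, the analogue of Lean's quot.sound. It is declared
   outside the module, so it cannot be refuted by case analysis on quot
   (that is what the Private marker protects against). *)
Axiom quot_sound : forall (A : Type) (R : A -> A -> Prop) (a b : A),
    R a b -> mk A R a = mk A R b.
```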

They don't use this directly, though; usually they go through the derived notion of

Module Export setoid.
  Class setoid (A:Type) := { r : A -> A -> Prop; iseqv : Equivalence r }.
End setoid.
Definition quotient {A:Type} (s:setoid A) := @quot A (@setoid.r A s).

so there's some typeclass support, allowing e.g. Notation "⟦ a ⟧" := (@quot.mk _ (@setoid.r _ _) a).

There's also the difference that they have equality in definitionally proof-irrelevant Prop, and their Prop is cumulative with everything else. This is unlike us: equality is not yet available in SProp, SProp doesn't have Acc, SProp is not cumulative, and anyway the stdlib uses Prop and the Prop equality everywhere. EDIT: actually, it seems Lean doesn't have cumulativity.
I don't know how important it is to have irrelevant equalities when working with quotients, though (assuming you have quot_ind instead of having to derive it through the regular eliminator, at least).

@ppedrot (Member) commented Oct 10, 2019

Private inductives can be a bit awkward wrt to what they guarantee [...] but it basically works.

Reference needed. I'm confident I can find an SR breakage with private inductive types; I think we actually had an example of that.

@Zimmi48 (Member) commented Oct 10, 2019

Warning: not a technical post

Yesterday I watched the video by Prof. Buzzard about Lean, and there was a question about how it compares to Coq (obviously); he replied that, basically, in Lean it's straightforward to express quotients, while in Coq you get "setoid hell". [...]

Sorry, I am a newbie to Coq and have never tried Lean (yet), but for him it seemed to be a deal-breaker, so maybe it's possible to address this problem somehow? [...]

I wonder how much truth vs. bullshit there is in this statement giving the reason why Kevin Buzzard thinks Lean is better than Coq for the project of formalizing mathematics. I have serious doubts that he has given any chance to Coq. Indeed, I have earlier evidence that his initial reason for choosing Lean was not this: https://mathoverflow.net/questions/311071/which-mathematical-definitions-should-be-formalised-in-lean

In this post from just a year ago, he says:

15 months ago I had never heard of the Lean Theorem Prover, and I had never used anything like a theorem prover in my life. Then in July 2017 I watched a live stream (thank you Newton Institute!) of Tom Hales' talk in Cambridge, and in particular I saw his answer to Tobias Nipkow's question 48 minutes in.

So this is what convinced him to try out Lean, but do you know what Hales gave as an explanation for this choice at the time? Only this: "I thought Lean, because HOL foundations are a bit weak". At that point, he could have given exactly the same justification for using Coq, and Kevin Buzzard would be doing advertisement for Coq instead.

Earlier, in another discussion, @mdahas seemed to know that Thomas Hales had decided not to use Coq not on theoretical grounds, but because of basic usability concerns, such as the availability of good documentation (it was a long time ago):

If the next Thomas Hales appears, what should he do next? (And if you think that is a ridiculous thing to say, you didn't ask Thomas Hales the questions I did.)

I am sorry that Coq failed to convince him at the time, but we have made a lot of progress since then.

The thing that is ridiculous is that Thomas Hales is a very famous mathematician who seems to know what he is talking about regarding the formalization of mathematics, because of the formalization of his proof of the Kepler conjecture. But he did not do this using Lean! And now his project is https://formalabstracts.github.io/, very far from actual large-scale formalization of mathematics. So he hasn't got the same experience doing actual formalization work in Lean as he had earlier, and cannot know how well it works at scale (he is also not among the contributors of the mathlib project, which includes virtually all the mathematics ever formalized in Lean: https://github.com/leanprover-community/mathlib/graphs/contributors).

So my conclusion is that probably none of the very vocal Lean supporters actually knows what they are talking about when they compare Lean to Coq (although some other Lean users do know both systems). Of course, I would be very happy to be proved wrong.

Sorry for hijacking this technical issue with such a rant; I am just fed up with this "mastermind public relations campaign", as Pierre-Marie put it.

@gares (Member) commented Oct 10, 2019

Kevin Buzzard thinks Lean is better than Coq for the project of formalizing mathematics. I have serious doubts that he has given any chance to Coq

I asked him this question in person at ITP. He was an invited speaker and, on that occasion, he did not compare Coq to Lean, but rather said that what we prove with these tools is not very convincing for a mathematician... His answer to my question was that, to him, Coq, Lean, ... look roughly the same, but colleagues like Hales were using Lean, so he tried it too.
Well, I guess that answer was less sexy than some bold claim, but I also imagine you get less pressure to say something strong when you are asked a question at breakfast by a person you don't know ;-)

I've also asked the same question of others, and other than the social aspect, the reason was ergonomics (UI, installation, docs). In a way, especially when you are approaching a new system, these things are way more important than, well, features. So I guess we still have quite some work ahead...

@mattam82 (Member) commented Oct 10, 2019

@SkySkimmer if quot_lift doesn't guard at all on the fact that the function is equivariant for related terms to produce its output, that's probably where the devil lies.

@mattam82 (Member) commented Oct 10, 2019

I would add that math-comp didn't seem to do too badly with their encoding of effective quotients in pure CIC.

@SkySkimmer (Contributor) commented Oct 10, 2019

@SkySkimmer if quot_lift doesn't guard at all on the fact that the function is equivariant for related terms to produce its output, that's probably where the devil lies.

It may be inconsistent with the R a b -> mk a = mk b axiom, but I don't see how it could impact SR.

@andres-erbsen (Contributor) commented Oct 11, 2019

@ppedrot, could you please explain what reduction behavior of Lean's quotients is problematic? Here is the file I was experimenting with, in case that is helpful... (I'm mostly asking out of mathematical curiosity, though).

prelude
import init.data.quot init.logic

inductive notquot {A : Type} (R : A -> A -> Prop) : Type
| mk (_ : A) : notquot

-- this succeeds due to the built-in primitive reduction rules for eliminators of inductive types
def notquot_ind_beta {A R} {B : notquot R → Prop} (f : ∀ a, B (notquot.mk R a)) (a : A)
  : (notquot.rec f (notquot.mk R a) : B (notquot.mk R a)) = f a
  := rfl

-- quot.lift reduces the same way as the eliminator of the inductive type above
-- it is restricted to non-dependent Proper continuations
-- this is similar to what is achieved by careful use of Private Inductive in Coq,
-- and in my eye the most surprising thing here, so is this where the problem lies?
def lift_beta {A} {R : A → A → Prop} {B} (f : A → B) (pf : ∀ a b, R a b → f a = f b) (a : A)
  : (quot.lift f pf (quot.mk R a)   : B                 ) = f a
  := rfl

-- note that quot.ind reduces because of judgemental proof irrelevance
-- it is restricted to continuations constructing propositions
def ind_beta {A R} {B : quot R → Prop} (p : ∀ a, B (quot.mk R a)) (a : A)
  : (quot.ind p (quot.mk R a) : B (quot.mk R a)) = p a
  := rfl

-- on the other hand, quot.sound does not have special reduction behavior
def T := quot (λ x y : nat, x=x)
def collapse_mk_T (x y : nat) : (quot.mk _ x : T) = quot.mk _ y := quot.sound rfl
def zer : T := quot.mk _ 0
def one : T := quot.mk _ 1
def one_zer : one = zer := collapse_mk_T _ _
def zer_one : zer = one := collapse_mk_T _ _
inductive vec (t : Type) : T -> Type
| nil : vec (quot.mk _ 0)
| cons n (x:t) (xs : vec (quot.mk _ n)) : vec (quot.mk _ (nat.succ n))
def a := vec.nil nat
def b := @eq.rec T zer (vec nat) (vec.nil nat) one zer_one
def c := @eq.rec T one (vec nat) b zer one_zer
#check (rfl : a = c)
-- 80:1: ⁇ : a = c
-- 
-- 80:9: invalid type ascription, term has type
--   ?m_2 = ?m_2
-- but is expected to have type
--   a = c

-- EDIT to add:
#check (rfl : @eq.rec nat 0 (λ _, nat) _ 0 rfl = 0) --yes
#check (rfl : @eq.rec nat 0 (λ _, nat) _ 0 sorry = 0) --yes
#check (rfl : @eq.rec nat 0 (λ _, nat) _ 1 sorry = 0) --no
#check (rfl : quot.mk (λ x y, x=x) 0 = quot.mk (λ x y, x=x) 1) --no
#check (collapse_mk_T _ _ : quot.mk (λ x y, x=x) 0 = quot.mk (λ x y, x=x) 1)--yes

@ppedrot (Member) commented Oct 12, 2019

@andres-erbsen the issue is described in Section 3.1 of this document summarizing the Lean type system.

@SkySkimmer (Contributor) commented Oct 12, 2019

That doesn't mention any quotient-related issues.

@ppedrot (Member) commented Oct 12, 2019

Look at the end of Section 3.1.1: quotients of propositions create non-transitivity of algorithmic conversion, which in turn breaks SR.

@SkySkimmer (Contributor) commented Oct 12, 2019

@andres-erbsen (Contributor) commented Oct 12, 2019

Thanks for the pointer! I transcribed the example (to the best of my ability) and observed a weird behavior different from the one described in the paper.

constant P : Prop
constant R : P -> P -> Prop
constant A : Type
constant f : P -> A
constant H : ∀ x y, R x y -> f x = f y
constant q : @quot P R
constant h : P

#check (@quot P R : Prop)    -- yes
#check (quot.lift f H q : A) -- yes
#check (rfl :               q =                quot.mk R h       ) -- yes
#check (rfl : quot.lift f H q = quot.lift f H (quot.mk R h)      ) -- no (3.1.1 says this should go through)
#check (rfl :                   quot.lift f H (quot.mk R h) = f h) -- yes
#check (rfl : quot.lift f H q =                               f h) -- no

@SkySkimmer (Contributor) commented Oct 12, 2019

So algorithmic equality is also not a congruence.
However, you can do

def ap (A:Prop) (B:Type) (f:A -> B) (x y:A) : f x = f y := rfl

#check (ap _ _ (quot.lift f H) q (quot.mk R h) : quot.lift f H q = quot.lift f H (quot.mk R h))

@digama0 commented Oct 13, 2019

@andres-erbsen The reduction relation in the paper is not deterministic, because the actual algorithm that Lean implements relies on a number of heuristics outside the type system proper. So what I call "algorithmic equality" is still an overestimate of Lean's typechecker, although it is finer than "definitional equality", which is undecidable but has all the nice properties, like SR.

@digama0 commented Oct 13, 2019

The fact that quotients of propositions break SR is basically a curiosity, due to a poor choice in the core axioms to have quotients live in Sort u instead of Sort (max 1 u). Making this change affects essentially nothing and restores the nice properties. No one would ever want to quotient a proposition, because it's always a no-op. The bigger issue is the interaction of subsingleton elimination and proof irrelevance, through the acc type that is used to do well-founded recursion; quotients, even with the computational rule, don't cause any problems I know of.

@Matafou (Contributor) commented Oct 25, 2019

Hi,

The fact that quotients of propositions break SR is basically a curiosity, due to a poor choice in the core axioms to have quotients live in Sort u instead of Sort (max 1 u). Making this change affects essentially nothing and restores the nice properties.

@digama0, can you elaborate on this, please?

My (very classical) feeling is that either

  • quotients are useless because we cannot compute with them,
  • or quotients must preserve SR (because we can compute with them).

Best

@Matafou (Contributor) commented Oct 25, 2019

Oh, you meant quotients of propositions. Sorry for the noise.

@Matafou (Contributor) commented Oct 25, 2019

Sorry to ask a naive question.

What is the status of the justification of consistency in Lean? Strictly speaking, Coq as it is implemented has not been proved correct, but some convincing (to me at least) versions of CIC have been. These are convincing to me precisely because Coq's conversion is still very close to those versions (except for the experimental SProp, which looks promising to me, especially concerning the treatment of quotients). Are there papers justifying the enrichment of definitional equality in Lean?

@digama0 commented Oct 25, 2019

@Matafou Consistency of Lean was a major concern for me, given the storied history Coq has had and the fact that this is still an open question (!). The consistency of Lean relative to ZFC + omega inaccessibles is proven, along with many of the metatheoretic facts in this thread, in my master's thesis: https://github.com/digama0/lean-type-theory/releases/tag/v1.0

@ppedrot (Member) commented Oct 25, 2019

@Matafou I might sound slightly iconoclastic, but consistency is a fairly weak property of a type theory, and it is not even necessarily desirable in some specific contexts: e.g., your favourite Turing-complete programming language proves that the empty type is inhabited, and you don't complain about that. And also Gödel incompleteness, you know the drill. The fact that most consistency proofs are somewhat trivial (barring the devil that lies in the details) is also a good indicator that this is not the interesting property.

More important properties to me are progress, strong normalization, subject reduction, and decidability of type-checking. With those, you don't have to trust whatever crazy metatheory you live in: just throw the term at the type-checker; if it typechecks, normalize it and inspect the result when it is hereditarily positive.

@digama0 commented Oct 25, 2019

@ppedrot To me the most important property is not consistency, but rather soundness relative to the model that we carry around in our heads. But this could just be a culture-clash thing. Certainly mathematicians care more about soundness than strong normalization. I have a lot of difficulty "believing" a type system with strong normalization and a weird metatheory, because it's usually not an option to actually normalize the term: knowing that something normalizes "eventually", for some value of "eventually" longer than my lifetime, seems not so useful, so I have to understand the term now, and that means grappling with the weird metatheory. (I will note that strong normalization itself requires some strong metatheoretic assumptions.)

@ppedrot (Member) commented Oct 25, 2019

@digama0 I forgot to reply to your point about quotients, so let me do it now. Even assuming you fix the SR breakage due to quotient types, you still have canonicity issues, which are not solved by definitional UIP, due to the axiomatic postulate of quotient equality that might get in the way.

To properly get rid of this limitation, as explained before, you'd somehow need to implement HoTT, or more likely OTT in the case of Lean, because it hardwires UIP. I've heard it was possible to justify OTT / SetoidTT by a syntactic translation into CIC + strict Prop + UIP, in the style of Cockx et al., which would guarantee the kind of properties I mentioned above, but that would still change the theory.

And as a last point, arguably Lean quotients are still quite weak. I know there was a discussion about that recently which I can't find again, but you need various forms of choice to make the most of them. And choice is another kind of beast that will severely clash with the good metatheoretical properties mentioned above.

@digama0 commented Oct 25, 2019

And as a last point, arguably Lean quotients are still quite weak. I know there was a discussion about that recently which I can't find again, but you need various forms of choice to make the most of them. And choice is another kind of beasts that are going to severely clash with the good aforementioned metatheoretical properties.

Luckily, this isn't a problem, because Lean users eat choice for breakfast. Seriously, you wouldn't believe how much of a non-issue all this is. We don't use definitional equality at all, or as close to that as we can manage. Overuse of defeq is a Bad Thing and will get you into trouble with the type theory gods, because Lean's type theory doesn't have all the nice things.

What it does have is a simple model in which types are like sets, quotients are like quotients, and mathematicians can use choice all day without getting heckled by constructivists. The only time we care about computation is when we use the VM evaluator, which uses a slightly different (type-erasing) reduction relation. This reduction has no problems at all with quotients, so they are considered "computable".

There are three axioms in Lean: propositional extensionality, quotient soundness, and choice. All three cause canonicity to fail (canonicity in the absence of all of them is open, but probably true), but only choice blocks canonicity in VM evaluation. I haven't worked out all the details, but I believe there is a realizability model that can make sense of the VM evaluation map. As long as that is sound, we couldn't care less about reduction in the logic, which is just like VM evaluation except slow and useless.

That is perhaps slightly exaggerated but should give you a sense of what I mean by "culture clash". Yes, Lean and Coq are both based on CIC, but they are culturally very distinct.

@ppedrot (Member) commented Oct 25, 2019

@digama0 after a few thoroughly trollish discussions on the topic recently, I think this is also my understanding of the situation. Unfortunately, mathematicians are not quite the right population to ask about logical foundations, just as computer scientists shouldn't be asked about, say, algebraic geometry.

There are good practical reasons, though, why type theorists cherish properties like normalization or subject reduction. If it weren't for a bunch of tenacious computer scientists, we'd still be stuck with Metamath-style first-order classical set theory, and with people claiming that proving 2+2=4 requires on the order of 10¹⁵ symbols.

(I cannot help linking this long article, which gives a baffling account of the deep disdain for, and general lack of knowledge of, logic among the Bourbakists. These people wrote reference works on logical foundations and negatively impacted the general perception of logic in the mathematical community of a whole country for more than 50 years. Seriously.)

@ppedrot (Member) commented Oct 25, 2019

I haven't worked out all the details but I believe there is a realizability model that can make sense of the VM evaluation map.

I'm willing to bet a pint of fancy beer that this will not work for open terms, and thus that you'd get nonsense results if you start adding axioms around.

@digama0 commented Oct 25, 2019

It doesn't work for open terms. The VM is like a computer, it works on closed terms only.

@digama0 commented Oct 25, 2019

Heh, you know I'm a Metamath user? That number is completely false, or rather misrepresentative. If you unfold all definitions and don't collapse common subterms, it's probably about that, but no one would ever do that. The actual number is far closer to a few thousand symbols, and that's after building the entire real number system first. Yet that seems to be your suggestion regarding utilizing strong normalization in practice: just unfold everything and see what you get.

My personal stance is that you only need a simple logic to do mathematics. Everything else, like a fancy type system, is a matter of user interaction. If that has nice properties, great. If not, it still doesn't matter as long as it helps you prove the theorems that matter to you.

@ppedrot (Member) commented Oct 25, 2019

@digama0 slightly hijacking the thread, but I'm curious how you'd realize the "Santa Claus choice", a.k.a. squash A -> A, via realizability. Do you go full Krivine and add a fork operator, or are you doing something cleverer?

@digama0 commented Oct 25, 2019

I'm not quite sure whether the Lean version of squash A is trunc A, the quotient by the total relation (which lives in the same universe as A), or nonempty A, which lives in Prop and is therefore a subsingleton. If it means the latter, then it's easy: nonempty A -> A is exactly the choice axiom, and it's not computable.

In the former case, trunc A -> A is computable, and the reason is that trunc A := quot A (λ _ _, true) and quot A R is represented in the VM as the same thing as A. (Well, both these types are erased, but what I mean to say is that the valid realizers of quot A R are the same as the realizers of A.) So trunc A -> A is an identity function in the realizability semantics.

The realizers of quot A R are the same, but the equality relation on them is different; it is now generated by the relation R. You should be able to achieve this with some kind of PER model, but I have to admit I am out of my depth on these matters.

@digama0 commented Oct 25, 2019

This leads to a slightly peculiar situation where there are two ways to define a function trunc A -> A.

  • The first is to use the realizability-semantics identity function and ignore the fact that this is not functional (it does not respect equality). This is Lean's trunc.unquot, and it is meta, meaning that it lives in the land where everything computes but logic is broken (i.e., like Haskell).
  • The second way is to use the axiom of choice to pick an element satisfying A (the existence of which is guaranteed by trunc A). This is valid in the logic, but it is not computable anymore, because it uses the choice axiom. It is called trunc.out.

@kyoDralliam (Contributor) commented Oct 26, 2019

@digama0 I'm a bit puzzled by your answer about choice not being computable: how would you build a realizability model if you have a non-computable constant in your language?
My understanding of a realizability model is that all the terms of your language are modeled by objects that "compute", at least w.r.t. your base model of computation, which can contain hairy things like exceptions, call/cc, fork, etc.

@digama0 commented Oct 26, 2019

@kyoDralliam Lean has a viral marker called noncomputable; outside the subset of definitions it marks, VM code will be created, so that defines the "computable" subset of interest.

This subset is allowed to mention choice, but only indirectly: it is allowed to appear in the proofs of propositions and in types, because these are erased, but a direct call to the choice function is invalid, because there is no actual code that could implement it.

This means that one has to be a little careful around axioms in the associated realizability model. I would like to say that we just exclude terms containing choice from the model (they are just in a different, extended language which has no realizability semantics), but it's not that simple. There is no realizer for choice, but as long as you never need to request such a realizer while constructing the realizer for a term, you should be safe, and the term "computes".

@andres-erbsen
Contributor

@andres-erbsen andres-erbsen commented Oct 26, 2019

> just throw the term at the type-checker, if it types normalize it, and inspect the result when hereditarily positive

knowing something normalizes "eventually" for some value of "eventually" longer than my lifetime seems not so useful

@ppedrot What do you think of uses of Coq that essentially rely on not normalizing the objects under study, for performance reasons? Is that as dissatisfying as breaking normalization through axioms?

@Matafou
Contributor

@Matafou Matafou commented Nov 4, 2019

> More important properties to me are progress, strong normalization, subject reduction and decidability of type-checking. With that you don't have to trust whatever crazy meta-theory you live in, just throw the term at the type-checker, if it types normalize it, and inspect the result when hereditarily positive.

@ppedrot I sympathize with your need for simplicity (mainly because of my inability to understand complex meta-theories, I must admit :-)), but what do you mean by "inspect" here? What is this activity you need to keep simple? I think it is important to make this point explicit.

I certainly see why consistency, subject reduction and decidability of type-checking are desirable properties. But I would be quite happy with a system where well-typed terms are only weakly normalizing, for instance. I mean, compute is never useful except on ground terms (where we can apply some normalizing strategy).

Likewise, I would be OK with a system where several (but convertible) normal forms exist for a single well-typed term. That would give some kind of quotients, by the way.

@XVilka
Author

@XVilka XVilka commented Jan 8, 2020

Seems there is a new article about this by @artagnon: Lean versus Coq: The cultural chasm.

@Zimmi48
Member

@Zimmi48 Zimmi48 commented Jan 8, 2020

Note that this corresponds to one of the posts in the long (and ongoing) discussion on Coq-Club: https://sympa.inria.fr/sympa/arc/coq-club/2020-01/msg00006.html

@ejgallego
Member

@ejgallego ejgallego commented Jan 8, 2020

@ppedrot
Member

@ppedrot ppedrot commented Jan 28, 2020

I think that following @SkySkimmer's Lean-to-Coq translator we can close this bug report for good...

@Matafou
Contributor

@Matafou Matafou commented Jan 29, 2020

Even if this is very interesting, I don't think the point has ever been to know whether Coq can check Lean's definitions. I mean, Dedukti is supposed to import Coq definitions and check them, but Coq users have not switched to it (yet). So there is maybe a bit more to say than that.
But this may not be the place for it, so closing the issue is of course OK with me.

@ppedrot
Member

@ppedrot ppedrot commented Jan 29, 2020

@SkySkimmer's point is that you can already use private inductive types to get Lean-style quotients. Private inductive types are broken by design, but so are naive quotients in type theory, so both solutions play in the same league.
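For reference, a minimal sketch of the private-inductive encoding of quotients (a well-known pattern, used e.g. in the HoTT library; the names below are illustrative):

```coq
Module Quot.
  (* Outside this module, quot cannot be pattern-matched on,
     so the only way to use an element is via lift below. *)
  Private Inductive quot {A : Type} (R : A -> A -> Prop) : Type :=
  | mk : A -> quot R.
  Arguments mk {A R} _.

  (* Soundness: related elements become equal in the quotient.
     This is an axiom with no computational content. *)
  Axiom sound : forall {A} {R : A -> A -> Prop} (a b : A),
    R a b -> mk (R := R) a = mk b.

  (* Lifting a function that respects R; computes on constructors. *)
  Definition lift {A B} {R : A -> A -> Prop}
    (f : A -> B) (_ : forall a b, R a b -> f a = f b)
    (q : quot R) : B :=
    match q with mk a => f a end.
End Quot.
```

This is exactly the "broken by design" trade-off mentioned above: `lift` computes on `Quot.mk` even though `sound` postulates equations that the reduction rules know nothing about.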

@Matafou
Contributor

@Matafou Matafou commented Jan 29, 2020

I got the point, thanks. But this is not very positive :-).

Is there somewhere an explanation of why both are broken by design and what we should expect from a good solution? Something more than "we want SR, decidability, etc.", to which we have already heard the answer: "we don't care about all that in practice since our VM avoids the problem".

@XVilka
Author

@XVilka XVilka commented Jul 3, 2020

Just for everyone who is not following the git history closely: the UIP-in-SProp PR was merged into master, yay!

Watching #10764 is the next step.
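To illustrate what SProp gives here, a sketch (on Coq 8.10/8.11 this may require `Set Allow StrictProp.`; the names are illustrative):

```coq
(* A strict (SProp-valued) equality type. *)
Inductive sEq (A : Type) (a : A) : A -> SProp :=
  sEq_refl : sEq A a a.

(* Eliminating into SProp is unrestricted, e.g. symmetry: *)
Definition sEq_sym (A : Type) (a b : A) (e : sEq A a b) : sEq A b a :=
  match e in sEq _ _ x return sEq A x a with
  | sEq_refl => sEq_refl A a
  end.

(* The point of SProp is definitional proof irrelevance: any two
   proofs p q : sEq A a b are convertible, a strict form of UIP. *)
```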
