@Bas: can you elaborate a bit on the Lean vs Coq ranking? it is very close but still I would be interested to learn what reasoning is behind that order.
Totally notwithstanding the fact that Lean is written in C++ and just that makes it more suspicious...
Bas can confirm, but in my mind Gaëtan's experiment showed that Coq's foundations are more flexible than Lean's, and template polymorphism bugs may not help either... so non-type-theorists may have to implicitly trust more stuff.
As a type theorist, I personally consider the foundations of Lean to be broken by default...
At least in Coq you have to opt-in to wreak havoc with the type system
Lean has multiple independent checkers, including one in Haskell, so I recommend ignoring the C++ part.
and if you don't buy Lean foundations or consistency proofs, you're also questioning Mizar (that uses a set theory stronger than ZFC) and probably many of the other systems (does even HOL survive?)
Pierre-Marie has repeatedly expressed specific concerns about some properties that Lean developers do not even try to have, but I believe that soundness is not one of them.
I think they don't have subject reduction which is a property that is typically used to prove soundness.
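(For reference, the property being discussed, in its usual textbook form; a sketch, not a statement about either system's precise rules:)
$$\text{If } \Gamma \vdash t : A \ \text{ and } \ t \rightsquigarrow t',\ \text{ then } \ \Gamma \vdash t' : A.$$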
Karl Palmskog said:
Bas can confirm, but in my mind Gaëtan's experiment showed that Coq's foundations are more flexible than Lean's, and template polymorphism bugs may not help either... so non-type-theorists may have to implicitly trust more stuff.
This kind of argument is, in my opinion, quite fallacious. If you take the HOL family for example, the kernel does not even understand "nat := S nat | Z". Of course the kernel is smaller, but you also have to trust some other code that does the encoding. IMO the fact that you can encode A in B does not mean that B is more trustworthy, nor less. It just makes it harder to define where the TCB ends. Especially given that the encodings are often "extra logical", hence they don't even help relate the power of the logics (for example the encoding of HOL in Coq needs axioms).
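(A minimal illustration of that point, with a made-up name `MyNat`; nothing here is from the thread. A CIC-style kernel like Coq's or Lean's checks the declaration below as a primitive notion, while an HOL-style kernel only ever sees the encoding produced by a derived definition package, which then also has to be trusted.)

```lean
-- Checked directly by a CIC-style kernel (Coq, Lean);
-- in the HOL family the analogous type is carved out of an existing type
-- by definitional machinery living outside the small kernel.
inductive MyNat where
  | zero : MyNat
  | succ : MyNat → MyNat
```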
I guess making people conscious that there is no miracle, all tools can be buggy, and some are more rigorous than others, is important. But ranking tools is too hard for me.
Little anecdote: the type checker of Elpi has been incredibly useful in the development of HB. I know for sure (since I wrote it) that it is a hack, unsound and incomplete (I'm so ashamed that I'll rewrite it, at some point...). Moral: even the verification you do with a broken tool can be useful ;-)
Pierre-Marie has repeatedly expressed specific concerns about some properties that Lean developers do not even try to have, but I believe that soundness is not one of them.
An interesting formulation ;-)
I wonder how systems like Dedukti change this trust landscape. Since an automated translation from one logic to another would not be trustworthy, one would have to manually check the top-level specs in the translated form. I can imagine that this is simpler if the source logic is more expressive, because it would allow a more natural encoding of the specifications.
I know issues about Lean have been raised, but I’m unconvinced that impredicative type theory has a better story, and I’m unsure if “impredicative Prop” is predicative enough for this discussion.
AFAIK, outside of predicative type theory, the normal way to prove soundness and consistency relies on a stronger metatheory — which is what happens for Lean and non-type theoretic systems — including HOL, I expect?
Now, impredicative type theories (up to Coq with impredicative Set) do that too: consistency relies on strong normalization, which relies on an impredicative metatheory!
The only arguably-better story which I’ve seen is predicative type theory with Martin-Löf’s meaning explanation. You might be able to model proof-irrelevant impredicative Prop with a boolean model in a predicative metatheory — but does that scale to full Coq with cumulativity etc.? Otherwise, I’m not sure why I should trust Coq more than Lean.
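(For concreteness, the kind of model presumably alluded to, sketched under the standard proof-irrelevant interpretation; whether it scales as asked above is exactly the open question:)
$$\llbracket \mathrm{Prop} \rrbracket := \{\,\emptyset,\ \{\star\}\,\}, \qquad \llbracket \forall x{:}A.\,P \rrbracket := \bigcap_{a \in \llbracket A \rrbracket} \llbracket P \rrbracket(a)$$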
As we all know, due to @Gaëtan Gilbert, Coq is a great type checker for Lean.
@Paolo G. Giarrusso my personal problem with Lean is not predicativity vs. impredicativity. I am fairly confident about impredicativity actually, and never quite understood why the Swedish school of constructive mathematics were that traumatized by it. Heck, I actually have no problem with ZFC + whatever big cardinal assumption as far as soundness goes.
No, my grudge against Lean is that you can't really trust the computational content of the proofs because of the lack of decidable type-checking and the subsequent loss of subject reduction. You pick a term, you compute with it, and it might happen that the result doesn't even type-check.
Pierre-Marie Pédrot said:
Paolo G. Giarrusso my personal problem with Lean is not predicativity vs. impredicativity. I am fairly confident about impredicativity actually, and never quite understood why the Swedish school of constructive mathematics were that traumatized by it.
Because at some point they proposed an impredicative system that was broken ;-)
Fair enough, but then this is entering a Freudian territory.
Indeed
LISP got static binding wrong, that's not a reason to reject higher-order functions on those grounds...
I think it's still an open problem whether System F can be proved SN predicatively.
As far as my personal beliefs go, I have no doubt that System F is SN...
I agree, I was mostly joking
@Pierre-Marie Pédrot If you use the "idealized" non-decidable conversion rule that lean is theoretically using, won't you recover subject reduction ? Or is there another obstruction beside the impossibility in general to re-typecheck an evaluated term ?
@Kenji Maillard yes, you can pick your poison: non decidable type-checking or no SR.
Well, I'm usually fine with having a theoretical SR
but this claim that there is the idealized system and the actual implementation is nonsense to me
I mean I won't try to retypecheck an evaluated term because it will often be impractical, no ?
@Kenji Maillard then you also break computation in some sense
it's not necessarily reduction, it pervades into conversion actually
this brings in the same kind of issues that ETT has
i.e. you can apply computational equations to the term but there is no reason that they are equi-typable
hm, you mean if $\Gamma \vdash t : A$ and $t \equiv u$ you don't necessarily have $\Gamma \vdash u : A$?
so in practice you made the whole computational content of the theory highly dubious
@Kenji Maillard yep
It would be worse if you had A ok, A = B, B ko ;-)
as in PHP
I'm still a bit puzzled: Isn't it just that the typechecking algorithm and the conversion procedure are not complete with respect to their specs?
Yeah, won’t you get B okay using the “idealized system”? (I don’t care for decidable complete typechecking either)
Kenji Maillard said:
I'm still a bit puzzled: Isn't it just that the typechecking algorithm and the conversion procedure are not complete with respect to their specs?
I guess that is more like "they can't be"
Well I do think that being complete is an important property (also in practice): when your program does not typecheck you know that the problem is either in the program itself or it's a bug of the typechecker (possibly a performance bug)
Completeness defines the type theory
I really don't understand the claim that Lean as implemented is an approximation of the idealized theory.
It's just not true, they are two different theories.
I guess that is more like "they can't be"
Fair enough :)
Completeness defines the type theory
Do you mean that there are cases where $\Gamma \vdash t : A$ holds but the Lean typechecker will say $\Gamma \nvdash t : A$ instead of saying that it is unable to answer in finite time?
Have you heard about the halting problem?
I think someone told me about these kind of horror stories when I was a kid ^^
but I still don't see how it applies here (sorry to be so slow; or maybe just ignorant).
If you have a program that you try to typecheck for an intrinsically semi-decidable type system you could have 3 kinds of outputs: yes, no, or I don't know.
(I'm assuming that you want to write a total typechecker of course)
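(A minimal sketch of what such a total, fuelled checker could report; all names here, `CheckResult`, `check`, `Term`, `Ty`, are made up for illustration.)

```lean
-- Hypothetical outcome type for a total checker of a semi-decidable theory:
-- `unknown` records that the fuel ran out, which is not the same as `rejected`.
inductive CheckResult where
  | accepted   -- a typing derivation was found
  | rejected   -- a genuine type error was found
  | unknown    -- fuel exhausted, no verdict either way
  deriving Repr

-- A fuelled checker would then have a shape like
--   check : Nat → Term → Ty → CheckResult
-- where more fuel can turn `unknown` into `accepted` or `rejected`,
-- but should never flip `accepted` into `rejected`.
```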
I don't understand how having a tri-valued function helps...
I'm trying to understand what you mean by "completeness defines the type theory": this tri-valued function does not define any type theory but it would still validate the spec of the undecidable type theory.
I don't understand how you define the "practical" type theory
you can always semi-decide that something is in the idealized type theory, I agree
but the stuff that says I don't know you never really know
Is there such a thing as a "practical" type theory in that case ?
Or rather where is that essentially needed when we work with type theory ?
Pierre-Marie Pédrot said:
but the stuff that says I don't know you never really know
Happens :shrug:
I think this is entering the realm of philosophy
please no, I meant on a practical ground
In particular with respect to your earlier claim that computation was broken in some sense
My claim is the following: in the "practical" type theory, whatever that means, checking conversion may require an arbitrary amount of fuel
100% Agreed
So this means that you cannot really be sure about the validity of conversion
so by reducing stuff you actually lose some bits of confidence
You cannot be sure of the absence of conversion: is that used anywhere in type theory ?
I don't think so, type theory should be monotonic w.r.t. conversion
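(The rule in question, for reference, in its standard form:)
$$\frac{\Gamma \vdash t : A \qquad \Gamma \vdash B : s \qquad A \equiv B}{\Gamma \vdash t : B}$$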
except that it's not clear with strict proposition
Pierre-Marie Pédrot said:
except that it's not clear with strict proposition
how so ?
I am not sure
But the root of the undecidability is the fact that you can eliminate SProp Acc into Type
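(A quick way to look at the elimination in question from inside Lean itself; just an illustration, not an argument either way.)

```lean
-- Acc lives in Prop, which is definitionally proof-irrelevant in Lean,
-- yet its recursor's motive may target an arbitrary Sort: a proof of Acc
-- gets eliminated into Type, which is where proof irrelevance and
-- reduction start to interfere.
#check @Acc.rec
```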
In ETT, you're not monotonic because you can invoke conversions that have logical contents not present in the term
For Lean, I don't know
I knew somebody would bring up “decidable typechecking” as a practical problem, as if Coq commands were terminating in feasible time; and I don’t talk of Ltac loops, just of typeclass search and unification.
ETT looks monotonic AFAICT
@Paolo G. Giarrusso In practice both Coq and Lean have similar practical limitations in their implementations. That's why I would like to understand better the theoretical consequences of a type theory where typechecking is only semi-decidable and not decidable.
I could pass proofs from Coq to Matita and see the inability to type-check them as a bug in Matita (or, more interestingly, a soundness bug in Coq)? If I try to write a third-party checker for a tool based on an undecidable type theory, what can I really conclude from a failure to check?
I assume Lean’s existing third-party checkers must use the same algorithm, and if a proof is rejected, you must fix it (like with Coq).
As far as I know the importer from Lean to Coq turns conversion-checking off, right @Gaëtan Gilbert ?
To me, the problem with undecidable type systems is how to formally test whether they’re “good enough” — clearly it’d be bad if the typechecker always looped, but that’s not what happens.
Paolo G. Giarrusso said:
I assume Lean’s existing third-party checkers must use the same algorithm, and if a proof is rejected, you must fix it (like with Coq).
"third party" and "same algorithm" in the same sentence is funny ;-)
Hm, same “actual” type system, at least.
(we had for a long time an "independent checker" using "the same code" in Coq, so no pun intended)
Has that changed now?
no, but at least it is now documented as such I believe (CC @Maxime Dénès ), no claims of independence
In Lean’s case, I guess you could formalize a translation from the “ideal” type system to the “actual” one (that adds more “annotations” of some sort), and inspect the translation (since it’s something the user might have to do on occasion)
Today an independent checker would be Dedukti, there everything is very different!
Dedukti + CoqInE if/when it’s completed, right?
yes something like that. But my point was the independence, not the actual state of the tool
Sure. As for such a translation for Lean, I haven't actually seen one, but it could probably be extracted from Carneiro's master's thesis with some work.
Enrico Tassi said:
As far as I know the importer from Lean to Coq turns conversion-checking off, right Gaëtan Gilbert ?
Not in general. It's pretty slow, so turning conversion checking off is useful e.g. to see if it will bug on terms which depend on terms that don't terminate in reasonable time, but it's meant to succeed even when checking conversion.
you could formalize a translation from the “ideal” type system to the “actual” one
This is the way ETT is justified in terms of ITT + UIP + funext, but there is something I find wrong with that which I can't quite pinpoint
because this translation is not compatible with reduction in the sense that if you reduce a term you don't necessarily know how to retrieve a correctness derivation for the reduced term
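(Roughly, writing $[\cdot]$ for such a translation: reduction in the source is not matched by reduction, or an obvious re-derivation, in the target.)
$$t \rightsquigarrow t' \quad\not\Longrightarrow\quad [t] \rightsquigarrow^{*} [t']$$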
I think that Théo Winterhalter's PhD has more formal complaints about this
so once again, if you only care about soundness this is moot