Why should I choose Lean over Coq? Why should I choose Coq over Lean? Why should I choose Isabelle over Coq? There are several major theorem provers out there, and I happen to be decently fluent in Lean. Can you tell me why people pick one theorem prover over another?
And it seems that there's a lot of cross-pollination among theorem provers, right?
If you want a technical comparison of provers this is the right place.
But the way you phrase the question is too open-ended, IMO. Why do people choose Windows, Linux or OSX? Why do people choose beer or wine? I'm not kidding, it's really the same: it depends on what your friends do and on what your context makes more approachable. So I'm not so sure there is a precise answer to your question.
ah so... I want a technical comparison then because I don't have much experience with other theorem provers
Help me find the right focus by telling me what you want to do. Does your domain of interest require automation? Do you need libraries (is your subject about lists and numbers, or do you need rings, fields, ...)? Do you care about the bus factor (is it a research experiment, or does it have to last a decade)?
I like to program in OCaml, for example; I'm fluent, but I would look elsewhere if I had to do systems programming.
If you just want a good pick for beginners, then I would pick the one my neighbor is using, so that I can learn from him.
Do you care about the bus factor (is it a research experiment, or does it have to last a decade)?
And do you care about compatibility across releases?
And do you care about compatibility across releases?
For the time being, no.
Does your domain of interest require automation?
"require"? Well I certainly want some automation to make the proving process less laborious...
Do you need libraries (is your subject about lists and numbers, or do you need rings, fields, ...)?
I want lists, numbers, inductive predicates and inductive data types. And maybe quotients. I don't need to do proper mathematics.
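For concreteness, a minimal sketch of that wish list (quotients aside, since they are not a native feature in every system), written here in Coq syntax just as an illustration; the names are made up, and Lean or Isabelle could express much the same thing:

```coq
(* an inductive data type *)
Inductive tree (A : Type) : Type :=
| Leaf : tree A
| Node : tree A -> A -> tree A -> tree A.

(* an inductive predicate over lists of numbers *)
Inductive sorted : list nat -> Prop :=
| sorted_nil : sorted nil
| sorted_one : forall x, sorted (x :: nil)
| sorted_cons : forall x y l,
    x <= y -> sorted (y :: l) -> sorted (x :: y :: l).
```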
Do you care about the bus factor (is it a research experiment, or does it have to last a decade)?
The proofs don't need to last a decade. They can be irretrievably broken after a week.
Help me find the right focus by telling me what you want to do.
I want to formalize solutions to competitive programming problems on various platforms (CSES, AtCoder, etc.). The solutions need to be executable. I plan to write the solutions in WebAssembly and prove properties about them in a theorem proving system. I think WASM is a great language to model imperative semantics. It has far fewer abstractions than C and C++, and because of that it is easier to reason about WASM programs in automated theorem proving systems. Functional programming isn't good enough for competitive programming because we use a lot of low-level tricks to optimize our programs, including memory reuse, cache-friendliness optimizations, recursion avoidance and other techniques.
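To make the intended style concrete, here is a hypothetical toy fragment (in Coq syntax; the instruction set and names are invented for illustration and are nothing like full WASM): a tiny stack machine with a small-step relation, about which one can then state and prove properties.

```coq
Require Import ZArith.
Open Scope Z_scope.

(* a toy instruction set standing in for a real WASM formalization *)
Inductive instr : Type :=
| IConst (z : Z)
| IAdd
| IDrop.

Definition stack := list Z.

(* one step of execution on (remaining code, operand stack) *)
Inductive step : list instr * stack -> list instr * stack -> Prop :=
| step_const : forall (z : Z) (c : list instr) (s : stack),
    step (IConst z :: c, s) (c, z :: s)
| step_add : forall (a b : Z) (c : list instr) (s : stack),
    step (IAdd :: c, a :: b :: s) (c, (a + b) :: s)
| step_drop : forall (a : Z) (c : list instr) (s : stack),
    step (IDrop :: c, a :: s) (c, s).

(* a sample property about a concrete program *)
Lemma example_step :
  step (IConst 1 :: IConst 2 :: IAdd :: nil, nil)
       (IConst 2 :: IAdd :: nil, 1 :: nil).
Proof. apply step_const. Qed.
```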
Alright, so it seems that this question is being ignored because I'm not being specific enough. Here is a concrete question then: how does Coq's automation stack up against Lean's?
@Huỳnh Trần Khanh I am not sure that Lean is the contender for the kind of stuff you want to do.
I'd rather look for Isabelle and/or HOL systems.
There is a reason why everybody praises sledgehammer.
That's a rather... unexpected response. Sledgehammer is really that good? What does it do?
Solve all the boring stuff you don't want to do by hand.
TBH, I don't see a big difference between Lean and Coq. They have a pretty similar design, use more or less the same foundation, and the differences are probably not very relevant compared to the difficulties one would face in any case.
The only advantage I concede to Lean is the tactic language. It's hard to do worse than Ltac.
But Lean has a lot of issues as well, in particular the fact it's not a single proof assistant. There are a lot of people stuck on Lean3 and a promise that Lean4 will wash whiter.
It's a bit like comparing Java and C#...
A relevant difference between Lean and Coq as of today is the work you can build on. For instance, since you mentioned WASM: for Coq there exists a WASM development (https://github.com/WasmCert/WasmCert-Coq), but not for Lean. Similarly, if your work is built on perfectoid spaces, you're likely better off using Lean.
@Yannick Forster why can't I help feeling a vague sense of trolling in your comment?
Pierre-Marie Pédrot said:
Yannick Forster why can't I help feeling a vague sense of trolling in your comment?
What do you mean? Why "trolling"? Sorry I'm new here.
That the community that needs perfectoid spaces in their daily life can be counted on one hand?
You are right. In fact the perfectoid spaces formalization is intended as a proof of concept—there is no math (in mathlib) that depends on it.
So... :joy:
No trolling, seriously! I chose perfectoid spaces as an example of an advanced topic. If you are interested in formalising advanced math based on an advanced topic, you'll be happy to have some previous work there
Well, the amount of non-formalized math that depends on them is probably of measure zero, so...
I tried to make a general point: If you need a certified Standard ML compiler, better use HOL4 and CakeML. If you need the four color theorem, use Coq. If you need a good separation logic framework, use Coq with Iris. Sometimes, the concrete features of a proof assistant might matter less than the years of work others have done based on the (often suboptimal) features
Pierre-Marie Pédrot said:
But Lean has a lot of issues as well, in particular the fact it's not a single proof assistant. There are a lot of people stuck on Lean3 and a promise that Lean4 will wash whiter.
Well, I'm a bit confused by your comment. Isn't Lean 3 the latest stable release and Lean 4 is not yet released? And I don't know what you mean by "wash whiter".
I think Lean4 was released last year
Pierre-Marie Pédrot said:
The only advantage I concede to Lean is the tactic language. It's hard to do worse than Ltac.
Your way of comparing languages is quite funny @Pierre-Marie Pédrot , IMO you should look at what people managed to write in them as the first criterion. And even if Ltac is, well, Ltac... people did wonders with it ;-) Good \LaTeX to everybody (troll).
Lean's trust story may be better than Coq's. Have they had even a single proof of False yet?
Lean? Not yet.
Lean's type theory is equiconsistent with ZFC plus a large cardinal assumption. I have a very good reason to believe ZFC is inconsistent, so a proof of False is just a matter of when, not if. /s
@Huỳnh Trần Khanh that's the official propaganda of the party, CIC enjoys the same "theorem". The sad reality is that 1. by far most soundness bugs are due to implementation issues, not theoretical issues and 2. the theory actually implemented is not the one described in those papers.
In particular in the case of Lean there is a huge mismatch between the implementation and the theory, because Lean's underlying theory has undecidable type checking.
@Gaëtan Gilbert maybe we try a bit more in Coq...
@Pierre-Marie Pédrot I hear Ltac2 is a big improvement over Ltac ...
@Mario Carneiro might want to contribute here...
@Andrej Bauer said in his ECM talk that if you were a mathematician interested in formalisation then he recommended starting with Lean.
Years ago I challenged the mathematics and computer science communities to begin formalising the kind of mathematics which wins all the Fields Medals, not just those in one of the corners. The odd order theorem proved we were "in" -- how many more can we get? What has become clear to me now is that the idea of having one coherent monolithic axiom-free mathematical library like the leanprover-community's mathlib has got far more advantages than disadvantages when it comes to goals like this. Beyond some point, abstract classical pure mathematics becomes an intricately woven object, with algebra and topology and analysis all interplaying with each other (for example via number theory), and it needs to be built as a coherent whole in order to thrive. One advantage of the monolithic approach is that it gives users a "one size fits all" answer (leanproject) to questions like "I'm a professional algebraist/topologist/analyst/number theorist/geometer interested in formalising an API for arbitrary maths subject X, as far as I know my subject could well be classical, noncomputable and rely on all three of Lean's inbuilt axioms and furthermore I have no understanding of what that means, oh and by the way I will need to have access to an API for maths subjects A B and C, where do I start?".
I am in no position to say anything about what is the best prover for any maths/CS topic other than Fields-Medally maths. I am well aware that I have said dumb things in the past, and I've apologised for them too. I believe that this new area of formalising Fields-Medally maths could greatly benefit from attempts from groups using other systems (for example Coq, Isabelle/HOL) to emulate part or all of what we have done in Lean in a way which new users can somehow be easily led into. I recommend practicing a lot on maths undergraduates -- that's what I did. I think that it would greatly benefit this new area of formalising Fields-Medally maths if the problem were attacked by provers in the other communities.
axiom-free mathematical library
:rolling_eyes:
:-) We just use Lean's axioms, we don't add any new ones.
src/ is axiom-free.
I guess "axiom-free" here means "don't pay attention to the axioms, use them as freely as an average mathematician in an average math department would". I.e., no problem at all with choice or LEM. My feeling on this is: if I can write a proof with or without choice, but avoiding choice (or LEM) makes it 5% harder to understand, then I'll go for choice. But I know very well that other people feel differently about this (and I won't try to convince them, unlike Kevin :-))
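For instance (a minimal illustration of the "use LEM freely" style, using only the Coq standard library): double negation elimination is immediate once you allow excluded middle, whereas a LEM-free development would have to avoid or restate it.

```coq
Require Import Classical.

Goal forall P : Prop, ~ ~ P -> P.
Proof.
  intros P H.
  destruct (classic P) as [p | np].   (* excluded middle, from the Classical library *)
  - exact p.
  - exfalso. exact (H np).
Qed.
```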
I guess it's a matter of phrasing.
We know your feelings Pierre-Marie :)
Pierre-Marie Pédrot said:
I guess it's a matter of phrasing.
We should add a new syntax to Coq, a synonym of Axiom (let's say Principle), and all live happily ever after :-)
Pierre-Marie Pédrot said:
Huỳnh Trần Khanh that's the official propaganda of the party, CIC enjoys the same "theorem". The sad reality is that 1. by far most soundness bugs are due to implementation issues, not theoretical issues and 2. the theory actually implemented is not the one described in those papers.
The theory implemented by lean is a subset of the rules described in the paper. I was very careful to ensure this, because otherwise a soundness claim would be inappropriate as you say. I would very much like to extend the proof to cover Coq as well but there are a number of complicating factors and quite frankly I have no idea how to do it. (In particular, I don't know how to salvage unique typing, and the interaction of fixed and parametric universes seems... complicated; plus there are inductive and coinductive types and non-uniformly parametric inductives, all of which lean doesn't have to deal with.) As for undecidable type checking, that doesn't matter at all, because the soundness proof does not rely on strong normalization.
However (1) is certainly a concern. The main defense against this in lean is the external typecheckers, which can at least "dynamically" verify the correctness of proof deductions performed by the lean kernel. We run two external typecheckers on mathlib as part of CI, so that's a fairly good argument that there are not implementation bugs unless those bugs are shared by the external typecheckers (which is of course a possibility). But lean 3 has a spotless record in this regard: there has never been a soundness bug reported against the implementation.
Is it known that any soundness proof of Coq must depend on strong normalization? Or is the "obvious model" with types as sets sufficient? It is tricky to get an enumeration of all the features and ensure that they all respect the model. (As far as I know, the nearest equivalent to my paper in the Coq world is Coq Coq correct, but this is "just" a formalized typechecker, not a soundness proof, and it also doesn't cover some Coq features that any external typechecker would have to concern itself with to handle real world uses, like the module system.)
@Mario Carneiro modules could be left out of the kernel, in principle — they aren't in Coq, but it's not clear (EDIT: to me!) why. (see corrections below)
The "obvious model" isn't that obvious to work out for full Coq, but http://arxiv.org/abs/1710.03912 is the current last word on it.
modules could be left out of the kernel, in principle
How would that work? Keep in mind we have functors
sorry, did I speak too quickly? Do you want soundness of uninstantiated functors? I _thought_ some of you complained that modules don't belong in the kernel? If not, forget it.
on the model, it's unclear that it doesn't rely on normalization... there's a nontrivial discussion, but IIUC that's true once you translate recursive functions to eliminators.
First, the guard condition is probably the murkiest part of the Coq kernel.
I conjecture that there are some recursive functions accepted by Coq that are not expressible via recursors.
(we had a chat about that here some time ago)
Second, fully instantiated functors can in theory be expanded into plain terms, but the devil is in the detail.
To prove this you'd need to take into account parameterized modules (i.e. functors) via some kind of logical relation, and then you have to think about the rules of the system.
Mike Shulman is known to have abused the functor system and universe-polymorphic definitions to emulate Type_ω
@Mario Carneiro unfortunately taking a subset of a system is not going to preserve a lot of good properties of that system.
It's trivial to break SR by taking a subset of the rules (and this is what happens in Lean)
I personally find the claim that Lean is a single system on the verge of intellectual dishonesty. It's similar to claiming that ITT and ETT are the same thing. The "ideal" system is very nice but is undecidable, when the "real" system is ridden with technical issues.
These two things can't be called "Lean" at the same time.
(note that we're not merely talking about nice stuff to have, but what I consider foundational issues. The metacoq folks are heavily relying on properties such as SR to get anything done.)
Mike Shulman is known to have abused the functor system and universe-polymorphic definitions to emulate Type_ω
Do you have any more details on this? Are you referring to this blogpost?
To me, the clearest difference between @Pierre-Marie Pédrot and @Mario Carneiro 's position is philosophical — Pierre has conceptual objections to proving logical consistency using models into ZFC, and they have their importance; but if you accept that approach, subject reduction becomes much less important, so letting the algorithmic typechecker be incomplete becomes legitimate.
IIRC, that's also what happens with the SR violations in Coq with coinductives?
@Ali Caglayan I think this is indeed the technique referred to in this post.
Pierre-Marie Pédrot said:
Mario Carneiro unfortunately taking a subset of a system is not going to preserve a lot of good properties of that system.
You misunderstand. The lean system is a subset of the theory in the paper, not the other way around. In particular, this is why typechecking for the theory on paper is undecidable, while typechecking for the theory lean implements is decidable (obviously, since lean decides it by definition). It's true that adding additional rules to a system can break some properties (like decidability) or add new ones (like transitivity), but the one I care about is soundness, and if adding rules to a formal system results in a formal system that can't prove false, then the original system can't either.
Actually, looking again it seems you are interested primarily in SR. I was very upfront about these tradeoffs in my paper. Lean doesn't have all the nice things, it's true. I'm a lowly type theorist with no real power to change the foundations, only to document what they are
But if you ask me whether I should prefer a system that lacks SR or a system for which soundness is an open question, all other things being equal, the choice is obvious
No, no, I understand that Lean the program is a subset of Ideal Lean.
But even if soundness is important, it's hardly the most important thing for a type theory, and this is probably where we disagree.
Contrary to many syntactic properties like SR and normalization, soundness is closed under taking subsystems, which is why it's fine to only implement a decidable Lean.
Pierre-Marie Pédrot said:
I personally find the claim that Lean is a single system on the verge of intellectual dishonesty. It's similar to claiming that ITT and ETT are the same thing. The "ideal" system is very nice but is undecidable, when the "real" system is ridden with technical issues.
Interesting that you mention this. My personal opinion is that lean should just embrace undecidability and go full ETT. Defeq causes so many problems in practice, I just want to be able to reason classically about equality and not worry about this maimed version of equality
This has a lot of practical drawbacks though.
This means you have to write your programs in a defensive stance.
You'll never be sure that merely applying a conversion rule is not going to make the type-checker go crazy.
Also, even extensional equality is not what you want in general.
Overuse of defeq is a code smell for me. Just write a lemma instead of applying a conversion rule
you never have to worry about the kernel going crazy if you just say what the proof is
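A standard toy contrast along those lines, in Coq (nothing here is specific to this discussion): with plus recursing on its first argument, 0 + n = n holds by pure conversion, while n + 0 = n is not a definitional equality, so you state a lemma and rewrite with it.

```coq
Require Import Arith.

Goal forall n, 0 + n = n.
Proof. intros n. reflexivity. Qed.   (* pure conversion: 0 + n unfolds to n *)

Goal forall n, n + 0 = n.
Proof.
  intros n.
  Fail reflexivity.                  (* n + 0 is stuck on n, not convertible to n *)
  rewrite Nat.add_0_r.               (* explicit lemma instead of a conversion step *)
  reflexivity.
Qed.
```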
Well, yes, if you embrace undecidability the kernel can loop.
defeq can cause timeouts anyway
you'll have to write explicit conversion steps everywhere and pray for the absence of coherence hell.
this is true, and it's often an argument for "ETT is better in practice"
as long as the types stay simple you don't have to worry about coherence hell
e.g. "What if the conversion asks for Ack(5,5) = blah"
But what's the point of using dependent type theory then?
heh
the irony
If I have to stick to HOL, I'd rather use Isabelle than Coq.
I usually say that dependent types make the impossible merely hellish
they should be used sparingly
I think the mathcomp people are precisely the best supporters out there for dependent types
still better than not having them at all
despite doing first order maths
everything you can turn into a computation is a godsend
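A minimal example of the payoff (plain Coq, no mathcomp): once the statement is an instance of a computation, the proof is just "run it".

```coq
Goal 12 * 12 = 144.
Proof. reflexivity. Qed.               (* the kernel simply computes 12 * 12 *)

Goal Nat.eqb (63 * 64) 4032 = true.
Proof. vm_compute. reflexivity. Qed.   (* or let vm_compute do the evaluation *)
```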
In lean/mathlib we take a tactic-first approach to that, in part because the lean kernel isn't as optimized for conversion-heavy calculations
this is another interesting point btw
I don't understand why this happens
I'd be curious to learn why
plus lean has external typecheckers, are you going to ask them all to be super optimized for conversions too?
why not?
I mean, the mathcomp approach has proven that it was scalable
because that's a lot to ask for. It increases the TCB
well, then you prove your checker.
At some point the TCB argument becomes ironic when implementing a proof assistant.
hopefully in a simpler system...
or not. I don't want to have to design another system and suffer writing FOL in it.
okay then it's still in the TCB
not quite.
If you go that way, Maxwell's equations are part of the TCB.
of course they are
:shrug:
by proving a proof assistant in itself you still have enhanced the trustability of the code by several orders of magnitude
and in any case at some point you have to bite the bullet of Gödel's incompleteness theorem
I don't disagree with you at all, that's the topic of my upcoming PhD thesis. But I would also want a self verified system to be as philosophically parsimonious as it can reasonably be
Pierre-Marie Pédrot said:
But what's the point of using dependent type theory then?
universal algebra? :grinning_face_with_smiling_eyes:
IMO "TCB" is a myth invented to write introductions to general-audience papers on theorem proving... It has so little pertinence to the everyday use of proof assistants as to be laughable.
My thought capacity has been permanently maimed by the Internet, and I cannot help thinking that the "will you fight, or will you perish like a dog" meme fits this conversation curiously well. Should we embrace meta-self-objectification?
Interesting discussion :-) Probably irrelevant to most first-time users though! I think TCB has quite some importance in the case of Coq, where the kernel is quite big and complex and where we repeatedly found bugs. Admittedly, we found nothing unfixable or having a huge impact on users (to date), except maybe all the subtle template-polymorphism bugs that led us to restrict the feature. Compared to the external checker of Lean or an LCF HOL-like kernel, the Coq kernel is qualitatively different: it really does not read like a spec.
The other point where TCB has importance is when we talk to certification agencies. They need some slightly tangible measure of "trust"
But hopefully we won’t have to trust the Coq kernel anymore, only MetaCoq! And this will definitely be for the best. Especially because the need to keep a somewhat reasonable kernel to trust forces us to rely on a heavy elaboration layer, when some stuff could be kept as is rather than elaborated if we had a larger kernel.
Mario Carneiro said:
Is it known that any soundness proof of Coq must depend on strong normalization? Or is the "obvious model" with types as sets sufficient? It is tricky to get an enumeration of all the features and ensure that they all respect the model. (As far as I know, the nearest equivalent to my paper in the Coq world is Coq Coq correct, but this is "just" a formalized typechecker, not a soundness proof, and it also doesn't cover some Coq features that any external typechecker would have to concern itself with to handle real world uses, like the module system.)
That's a good question. The gap can be found as the difference between Coq in Coq's formalism (which has a ZF relative soundness proof) and MetaCoq's PCUIC Calculus. In short, universes are easy, parametric inductive, families and even non-uniform parameters are handled in the model already. Mutual inductives are easy, but nested ones are not handled AFAIK (and we suspect with Pierre-Marie that it says something "interesting"). Coq in Coq is also using a sized-types discipline that is slightly different although not entirely unrelated to guard checking. Bridging this gap will probably require a PhD. And the module system is not considered (we could handle functor-free modules quite easily though, I suppose, but maybe a reworking that elaborates modules to more expressive record types would be better and factorize the problem), nor "fast" bytecode and native-compute conversion algorithms, nor template-polymorphism.
Matthieu Sozeau said:
Mario Carneiro said:
Is it known that any soundness proof of Coq must depend on strong normalization? Or is the "obvious model" with types as sets sufficient? It is tricky to get an enumeration of all the features and ensure that they all respect the model. (As far as I know, the nearest equivalent to my paper in the Coq world is Coq Coq correct, but this is "just" a formalized typechecker, not a soundness proof, and it also doesn't cover some Coq features that any external typechecker would have to concern itself with to handle real world uses, like the module system.)
That's a good question. The gap can be found as the difference between Coq in Coq's formalism (which has a ZF relative soundness proof) and MetaCoq's PCUIC Calculus. In short, universes are easy, parametric inductive, families and even non-uniform parameters are handled in the model already. Mutual inductives are easy, but nested ones are not handled AFAIK (and we suspect with Pierre-Marie that it says something "interesting"). Coq in Coq is also using a sized-types discipline that is slightly different although not entirely unrelated to guard checking. Bridging this gap will probably require a PhD. And the module system is not considered (we could handle functor-free modules quite easily though, I suppose, but maybe a reworking that elaborates modules to more expressive record types would be better and factorize the problem), nor "fast" bytecode and native-compute conversion algorithms, nor template-polymorphism.
Hi @Matthieu Sozeau, I hope you are well! Maybe this is a chance for me to ask a question. Coq-as-implemented has a number of eta laws (for instance, eta laws for function types and for some record types). Are both of these accounted for in the metacoq formalization? I had a look at MetaCoq.Checker.Typing and I couldn't find them.
@Meven Lennon-Bertrand this one is for you I think
Hi Jonathan, I'm fine! Indeed, they are not part of it today, but we're working on it with @Meven Lennon-Bertrand and @nicolas tabareau. We think we can get eta for function types, non-empty record types and SProp's strict equality as well, even in this purely untyped setting.
Great to hear that! This has been one of my concerns about the MetaCoq direction, since I have felt that Coq aspires to implement a style of type theory for which these untyped techniques tend to be not so well-adapted. (The fact that there seems to be little hope for the record type without any fields is indicative of this.) But I would be very happy if you get the non-empty records working! Thanks :)
I also believe that to go beyond that, one needs to put some typing information in the syntax, or rely on retyping during conversion (as Agda does, AFAIK)
In another direction we think we might be able to show rather straightforwardly that the theory with judgmental equality rules is equivalent to the current untyped version, that would be reassuring w.r.t. all the nice work done on categorical models ;-)
@Jonathan Sterling The interesting thing is that although the theories as usually described on paper are quite different, the actual algorithms as implemented respectively in Agda and Coq are much more similar, as far as I understood. Terms are wh-reduced then compared, and the only difference for η on functions is the way it is fired at that stage: Agda (and NbE as well, I guess) uses it systematically when comparing terms of function types, whereas Coq only fires it when comparing a neutral with a λ-abstraction; when there are two λs against each other it directly uses congruence (which is the same as η-expansion followed by β-reduction), and otherwise it compares the neutrals directly without η-expanding them.
So my hope is that one could prove the undirected systems equivalent to algorithmic/bidirectional ones, and then prove the equivalence between the algorithmic systems. But indeed we are hitting real hard into the pains of rewriting.
Dang, I got to this thread late... I'd sure love a thread where we sketch the proofs of these things!
Matthieu Sozeau said:
In another direction we think we might be able to show rather straightforwardly that the theory with judgmental equality rules is equivalent to the current untyped version, that would be reassuring w.r.t. all the nice work done on categorical models ;-)
Yes, to me the second idea sounds like a good one --- then (in principle) one can use a better-adapted method such as nbe (or the mathematical versions thereof) to establish the metatheory of Coq. It seems like if one wants Coq to have such laws, then it may be a mistake to formulate Coq's metatheory in terms of rewriting. Furthermore, it would be nice to add the eta law for the unit type, which is not problematic in the setting of nbe.
The thing is, we are kind of starting from the implementation here, so we have to go through the rewriting metatheory at some point, ensuring confluence/transitivity and showing that it models a congruence relation which is otherwise part of the judgmental equality specification. But yeah in the future we might just get rid of untyped equality and switch to an nbe algorithm, if we can get that to scale. One thing that's worrying with the eta rules for e.g. records is that blind expansion (nbe style) could bring terrible performance issues in libraries like math-comp where records are everywhere in unification and conversion problems.
From a performance point of view, unfolding of δ-δ conversion problems is critical. I think that if one were to use NbE for that, even without considering records, there would be an unbearable efficiency hit, because NbE internalizes δ rules so much that they are transparent. It's fine to use NbE for the spec though.
Matthieu Sozeau said:
The thing is, we are kind of starting from the implementation here, so we have to go through the rewriting metatheory at some point, ensuring confluence/transitivity and showing that it models a congruence relation which is otherwise part of the judgmental equality specification. But yeah in the future we might just get rid of untyped equality and switch to an nbe algorithm, if we can get that to scale. One thing that's worrying with the eta rules for e.g. records is that blind expansion (nbe style) could bring terrible performance issues in libraries like math-comp where records are everywhere in unification and conversion problems.
Indeed... It is one thing to answer these questions in theory, but it is quite another to try and get those answers to scale to something like Coq. The issue with records is very true --- for instance, Agda has this no-eta flag that you can set on records which some users have needed in order to maintain acceptable performance for certain use-cases.
Pierre-Marie Pédrot said:
From a performance point of view, unfolding of δ-δ conversion problems is critical. I think that if one were to use NbE for that, even without considering records, there would be an unbearable efficiency hit, because NbE internalizes δ rules so much that they are transparent. It's fine to use NbE for the spec though.
Oh and btw, I am not proposing that you should use an nbe algorithm --- there are many aspects of nbe that are way too naive for a serious implementation. But there are aspects of the technique that could likely be applied, and it seems like "type directed conversion" (with some optimizations) is a good direction to go, if Coq does indeed want to have all the extensionality laws that it currently implements.
And yeah, I also want to distinguish between the implementation and the spec. There is always a greater distance between these things than people think, and to me that is OK.
Jonathan Sterling said:
Oh and btw, I am not proposing that you should use an nbe algorithm --- there are many aspects of nbe that are way too naive for a serious implementation. But there are aspects of the technique that could likely be applied, and it seems like "type directed conversion" (with some optimizations) is a good direction to go, if Coq does indeed want to have all the extensionality laws that it currently implements.
Coq already implements some cheap bits of type directed conversion though. For instance the implementation of SProp using relevance marks is basically a way to transport a small bit of information ("is this term in a type in SProp?") that is enough to trigger extensionality rules without carrying the whole typing information.
Meven Lennon-Bertrand said:
Jonathan Sterling said:
Oh and btw, I am not proposing that you should use an nbe algorithm --- there are many aspects of nbe that are way too naive for a serious implementation. But there are aspects of the technique that could likely be applied, and it seems like "type directed conversion" (with some optimizations) is a good direction to go, if Coq does indeed want to have all the extensionality laws that it currently implements.
Coq already implements some cheap bits of type directed conversion though. For instance the implementation of SProp using relevance marks is basically a way to transport a small bit of information ("is this term in a type in SProp?") that is enough to trigger extensionality rules without carrying the whole typing information.
Indeed, as it must!
Pierre-Marie Pédrot said:
From a performance point of view, unfolding of δ-δ conversion problems is critical. I think that if one were to use NbE for that, even without considering records, there would be an unbearable efficiency hit, because NbE internalizes δ rules so much that they are transparent. It's fine to use NbE for the spec though.
Isn't there a way to make NbE as sophisticated as the "reduction based" algorithm though? Basically reify an identifier as either a "neutral" or a "value" depending on a heuristic similar to the current one. This requires adding a new class of neutrals though.
Maybe @Jonathan Sterling has some insight.
@Cody Roux
Would you count something like https://github.com/inc-lc/ilc-scala/blob/master/src/main/scala/ilc/feature/let/BetaReduction.scala#L175-L181 as an example? That (claims to) handle shrinking reductions, confirming the idea's pretty flexible, as long as you have one notion of normal form, at least for something like "defunctionalized" NbE, where you still use a custom evaluator, as used by Andreas Abel and others (including http://www.jonmsterling.com/pdfs/modal-mltt.pdf with @Jonathan Sterling and others). You can probably partially evaluate that to improve the constant factors (the extra checks would happen in the first stage), but I'm not sure how well it'd fit with native compute.
However, that's for normalization, while Coq needs conversion checking, and I think "normalize and compare" is far from what happens... the experience (when abusing Coq) is that it tries to not unfold your definition, heuristically, and backtracks to unfolding if the first attempt fails.
with VM and native, "normalize and compare" is fairly accurate AFAIK
I say "abusing Coq" because this is visible if you abuse Coq by chain enough unsealed definitions (as I did in my first Iris project): there type errors show up as "apply
seems to loop forever instead of terminating instantly".
@Gaëtan Gilbert right, but I suspect that "NbE internalizes δ rules so much that they are transparent" hints at this kind of seeming backtracking?
deltas are transparent in vm/native
right, because that's using (non-type-directed?) NbE (strong reduction's another name for it), so that can't be @Pierre-Marie Pédrot's point, can it?
most conversions are checked with lazy, not vm/native
Yes, indeed the heuristics for reduction might be distinct from the heuristics for conversion checking, though I feel like that definitely doesn't preclude using some flavor of NbE. @Paolo Giarrusso that Scala code is really cool!