In Software Foundations, volume two, it says the following:
We can say that [\x:T, t] is a value only when [t] is a
value -- i.e., only if the function's body has been
reduced (as much as it can be without knowing what argument it
is going to be applied to).
Or we can say that [\x:T, t] is always a value, no matter
whether [t] is one or not -- in other words, we can say that
reduction stops at abstractions.
[...]
Our usual way of evaluating expressions in Gallina makes the first
choice. [...] Most real-world functional programming languages make the
second choice.
I am trying to understand what these choices are called, what their implications on the language are, and where I can read more about them as a theory. Also, why did Coq pick a choice that no other programming language did?
There are lots of ways to name them, and all of that is a bit folklore, so I don't know what a good reference would be. Anyway, the first choice can be named "strong reduction" or "normalization", while the second one can be named "weak reduction" or "evaluation". As for why Coq chose normalization rather than evaluation, it is because it is a proof assistant based on type theory, not a programming language.
Note however that extraction, for example, relies on the second notion of value (weak call-by-value or weak call-by-name). A recent reference analysing a host of different reduction strategies is https://types22.inria.fr/files/2022/06/TYPES_2022_paper_60.pdf
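To make the difference concrete, here is a minimal OCaml sketch of the two strategies on a tiny untyped λ-calculus with de Bruijn indices. This is purely illustrative; the names and the code are mine, not how Coq is actually implemented.
```ocaml
type term =
  | Var of int            (* de Bruijn index *)
  | Lam of term           (* λ. body *)
  | App of term * term

(* Shift the free indices of a term that are >= [c] by [d]. *)
let rec shift d c = function
  | Var i -> Var (if i >= c then i + d else i)
  | Lam t -> Lam (shift d (c + 1) t)
  | App (t1, t2) -> App (shift d c t1, shift d c t2)

(* Substitute [u] for index [k], removing the corresponding binder:
   indices above [k] are decremented. *)
let rec subst k u = function
  | Var i ->
      if i = k then shift k 0 u
      else if i > k then Var (i - 1)
      else Var i
  | Lam t -> Lam (subst (k + 1) u t)
  | App (t1, t2) -> App (subst k u t1, subst k u t2)

(* Weak call-by-value ("evaluation"): reduction stops at λ-abstractions,
   so every [Lam _] counts as a value, whatever its body contains. *)
let rec eval = function
  | App (t1, t2) ->
      (match eval t1 with
       | Lam body -> eval (subst 0 (eval t2) body)
       | n1 -> App (n1, eval t2))  (* stuck head, e.g. a free variable *)
  | t -> t

(* Weak-head reduction: contract redexes in head position only,
   never under a binder. *)
let rec whnf = function
  | App (t1, t2) ->
      (match whnf t1 with
       | Lam body -> whnf (subst 0 t2 body)
       | n1 -> App (n1, t2))
  | t -> t

(* Strong, normal-order reduction ("normalization"): also reduce under
   binders, so [Lam body] is a value only once [body] is normal. *)
let rec normalize t =
  match whnf t with
  | Lam body -> Lam (normalize body)
  | App (t1, t2) -> App (normalize t1, normalize t2)
  | t -> t
```
For instance, with `let id_eta = Lam (App (Lam (Var 0), Var 0))`, i.e. λx. (λy. y) x, `eval id_eta` returns the term unchanged, since it is already a λ-abstraction and hence a weak value, whereas `normalize id_eta` goes under the binder and yields `Lam (Var 0)`, i.e. λx. x.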
what is "Our usual way of evaluating expressions"? Eval lazy
? simpl
? something else
both go under lambdas but the reasoning may differ between them
Why is weak reduction not suitable for a proof assistant?
Proof checking based on type theory requires the ability to check that two different terms have the same normal form. Weak reduction does not make it possible to answer that question, while strong reduction does. So, strong reduction becomes the natural choice.
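As a concrete illustration (my example, not from the original exchange), during type checking one may have to decide a conversion problem such as
```latex
\lambda x.\; (\lambda y.\; y)\, x
  \quad \stackrel{?}{=} \quad
\lambda x.\; x
```
Both sides are λ-abstractions, hence already values for weak reduction, so weak reduction alone gives no way to settle the question; strong reduction contracts the β-redex under the binder and turns the left-hand side into the right-hand side.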
got it thanks
Also, I would appreciate any resources you can throw at me on this topic.
The reason I am asking is that I am trying to create a toy dependently typed language, and that is something where I saw different people do different things with little explanation.
Guillaume Melquiond said:
because it is a proof assistant based on type theory, not a programming language.
I think it's worth being a bit more precise, given that Idris, for example, is a programming language, but has use for strong reduction (along with weak reduction). Later replies have more or less covered it: In type checking, you use normalisation to equate as many terms as possible, hence choosing strong reduction; while in execution, you only care about first order values (the kind that can be printed to the console), so weak reduction is enough.
It is interesting to see both approaches combined.
Conversely, quite a few conversion checkers in the dependently typed setting (e.g. the one implemented in MetaCoq, which is quite close to Coq’s kernel) actually only ever rely on weak reduction. In the case of functions for instance, rather than normalizing them both (strong reduction) and comparing them, what you do is weakly reduce both sides you are comparing, check that both terms you get are λ-abstractions, and recursively compare domain and body of those (which will again call weak reduction, and so on). In effect, you perform a kind of "implicit" strong reduction by iterating weak reduction.
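A rough sketch of that scheme, reusing the `term` type, `subst`, and `whnf` from the OCaml snippet earlier in the thread (my own toy approximation, much simpler than what MetaCoq actually does, and with untyped λ-abstractions, so there is no domain annotation to compare, only the body):
```ocaml
(* Conversion without a strong normalizer: compare terms syntactically
   first, otherwise weak-head reduce both sides and compare their head
   constructors, recursing into subterms.  The recursive calls under
   [Lam] are what make this behave like an "implicit" strong reduction. *)
let rec conv t u =
  t = u ||
  (match whnf t, whnf u with
   | Lam b1, Lam b2 -> conv b1 b2    (* compare the bodies, under the binder *)
   | Var i, Var j -> i = j
   | App (h1, a1), App (h2, a2) -> conv h1 h2 && conv a1 a2
   | _, _ -> false)
```
The `t = u` shortcut is one of the heuristics mentioned later in the thread (checking syntactic equality before reducing); a real checker would also deal with definitions, universes, and so on.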
@Meven Lennon-Bertrand It sounds to me like you're describing doing strong reduction lazily rather than any form of weak reduction. There are various points where conversion checkers don't fully reduce the two terms to values before comparing them, but this is largely orthogonal to strong vs weak (i.e., open vs closed) reduction.
I guess it depends where you set the line between the two. If your criterion is open vs closed contexts, then indeed, this is closer to strong reduction than weak reduction, because recursive calls happen under binders. But on the other hand there are multiple heuristics to avoid doing (strong) reduction all the way: not unfolding definitions when it is not needed, comparing terms for syntactic equality before reducing them, strict propositions which are not reduced… Which means that an algorithm like this is not merely a different way to organize strong reduction: the whole point is to avoid it as much as possible.
Anyway, my point was merely to emphasize that even in a very "type theoretic" setting, weak(-head) reduction is often an important tool.