Stream: Coq users

Topic: how Coq treats values vs every other programming language


walker (Jul 28 2022 at 07:34):

In Software Foundations volume two, it says the following:

We can say that [\x:T, t] is a value only when [t] is a
value -- i.e., only if the function's body has been
reduced (as much as it can be without knowing what argument it
is going to be applied to).

Or we can say that [\x:T, t] is always a value, no matter
whether [t] is one or not -- in other words, we can say that
reduction stops at abstractions.
.....
Our usual way of evaluating expressions in Gallina makes the first
choice.....Most real-world functional programming languages make the second
choice

I am trying to understand what these choices are called, what their implications are for the language, and where I can read more about the underlying theory. And why did Coq pick a choice that no other programming language did?
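To make the two choices concrete, here is a minimal untyped lambda-calculus sketch in Python (all names hypothetical and purely illustrative, nothing to do with Coq's actual implementation): `weak_eval` stops at abstractions, while `normalize` also reduces under them.

```python
# Minimal untyped lambda-calculus sketch contrasting the two choices.
# Terms as tuples: ('var', x) | ('lam', x, body) | ('app', f, a).

def subst(t, x, v):
    """Substitute v for variable x in t (naive: assumes distinct names)."""
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def weak_eval(t):
    """Second choice: reduction stops at abstractions."""
    if t[0] == 'app':
        f, a = weak_eval(t[1]), weak_eval(t[2])
        if f[0] == 'lam':
            return weak_eval(subst(f[2], f[1], a))
        return ('app', f, a)
    return t

def normalize(t):
    """First choice: also reduce under binders, simplifying a function's
    body as far as possible without knowing its argument."""
    t = weak_eval(t)
    if t[0] == 'lam':
        return ('lam', t[1], normalize(t[2]))
    if t[0] == 'app':
        return ('app', normalize(t[1]), normalize(t[2]))
    return t

# \f. (\x. x) f: already a value for weak_eval, but normalize
# simplifies the body and returns \f. f.
term = ('lam', 'f', ('app', ('lam', 'x', ('var', 'x')), ('var', 'f')))
print(weak_eval(term) == term)                        # True
print(normalize(term) == ('lam', 'f', ('var', 'f')))  # True
```

Under the first choice this term is not yet a value (its body still contains a redex); under the second it already is.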

Guillaume Melquiond (Jul 28 2022 at 07:56):

There are lots of ways to name them, and all of that is a bit folklore, so I don't know what a good reference would be. Anyway, the first choice can be named "strong reduction" or "normalization", while the second one can be named "weak reduction" or "evaluation". As for why Coq chose normalization rather than evaluation, it is because it is a proof assistant based on type theory, not a programming language.

Matthieu Sozeau (Jul 28 2022 at 08:14):

Note however that for example extraction relies on the second notion of value (weak call-by-value or weak call-by-name). A recent reference analysing a host of different reduction strategies is https://types22.inria.fr/files/2022/06/TYPES_2022_paper_60.pdf

Gaëtan Gilbert (Jul 28 2022 at 08:36):

What is "our usual way of evaluating expressions"? Eval lazy? simpl? Something else?
Both go under lambdas, but the reasoning may differ between them.

walker (Jul 28 2022 at 08:37):

Why is weak reduction not suitable for a proof assistant?

Guillaume Melquiond (Jul 28 2022 at 08:39):

Proof checking based on type theory requires the ability to check that two different terms have the same normal form. Weak reduction does not make it possible to answer that question, while strong reduction does. So, strong reduction becomes the natural choice.
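As an illustration of that point, a conversion check that decides "do these two terms have the same normal form?" can be sketched by normalizing both sides and comparing syntactically. This is a hypothetical untyped sketch, not Coq's kernel; note that `normalize` must go under binders, which weak reduction refuses to do.

```python
# Hypothetical sketch: conversion checking via strong reduction.
# Terms as tuples: ('var', x) | ('lam', x, body) | ('app', f, a).

def subst(t, x, v):
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def normalize(t):
    tag = t[0]
    if tag == 'lam':
        return ('lam', t[1], normalize(t[2]))  # go under the binder
    if tag == 'app':
        f = normalize(t[1])
        if f[0] == 'lam':
            return normalize(subst(f[2], f[1], t[2]))
        return ('app', f, normalize(t[2]))     # stuck: head is a variable
    return t

def convertible(t, u):
    # Strong reduction yields a canonical form to compare syntactically.
    return normalize(t) == normalize(u)

# \x. (\y. y) x and \x. x: both are values for weak reduction, yet
# syntactically different; only strong reduction equates them.
t = ('lam', 'x', ('app', ('lam', 'y', ('var', 'y')), ('var', 'x')))
u = ('lam', 'x', ('var', 'x'))
print(convertible(t, u))  # True
```

Weak reduction would stop at both outer lambdas and report two distinct terms, so it cannot answer the question.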

walker (Jul 28 2022 at 08:40):

Got it, thanks.
Also, I would appreciate any resources you can throw at me on this topic.

walker (Jul 28 2022 at 08:41):

The reason I am asking is that I am trying to create a toy dependently typed language, and that was something where I saw different people do different things with little explanation.

James Wood (Jul 28 2022 at 08:46):

Guillaume Melquiond said:

because it is a proof assistant based on type theory, not a programming language.

I think it's worth being a bit more precise, given that Idris, for example, is a programming language, but has use for strong reduction (along with weak reduction). Later replies have more or less covered it: In type checking, you use normalisation to equate as many terms as possible, hence choosing strong reduction; while in execution, you only care about first order values (the kind that can be printed to the console), so weak reduction is enough.

walker (Jul 28 2022 at 08:49):

It is interesting to see both approaches combined.

Meven Lennon-Bertrand (Aug 17 2022 at 09:36):

Conversely, quite a few conversion checkers in the dependently typed setting (e.g. the one implemented in MetaCoq, which is quite close to Coq’s kernel) actually only ever rely on weak reduction. In the case of functions for instance, rather than normalizing them both (strong reduction) and comparing them, what you do is weakly reduce both sides you are comparing, check that both terms you get are λ-abstractions, and recursively compare domain and body of those (which will again call weak reduction, and so on). In effect, you perform a kind of "implicit" strong reduction by iterating weak reduction.
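The iterated weak-head-reduction strategy described above can be sketched as follows. This is a hypothetical untyped toy (nothing like MetaCoq's or Coq's actual kernel, and with naive capture-unsafe renaming): `whnf` never goes under a lambda, and `conv` recursively compares the weak-head forms, descending into lambda bodies itself.

```python
# Sketch of conversion checking by iterating weak-head reduction.
# Terms as tuples: ('var', x) | ('lam', x, body) | ('app', f, a).

def subst(t, x, v):
    tag = t[0]
    if tag == 'var':
        return v if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, v))
    return ('app', subst(t[1], x, v), subst(t[2], x, v))

def whnf(t):
    """Weak-head reduction only: never reduces under a lambda."""
    while t[0] == 'app':
        f = whnf(t[1])
        if f[0] == 'lam':
            t = subst(f[2], f[1], t[2])   # beta-reduce the head redex
        else:
            return ('app', f, t[2])       # stuck: head is a variable
    return t

def conv(t, u):
    """Compare weak-head forms, recursing into subterms. Strong
    reduction is never performed explicitly, only iterated."""
    t, u = whnf(t), whnf(u)
    if t == u:                            # cheap syntactic shortcut
        return True
    if t[0] == 'lam' and u[0] == 'lam':
        # Rename u's binder to t's (naively), then compare the bodies,
        # which triggers weak-head reduction again, and so on.
        return conv(t[2], subst(u[2], u[1], ('var', t[1])))
    if t[0] == 'app' and u[0] == 'app':
        return conv(t[1], u[1]) and conv(t[2], u[2])
    return False

t = ('lam', 'x', ('app', ('lam', 'y', ('var', 'y')), ('var', 'x')))
u = ('lam', 'z', ('var', 'z'))
print(conv(t, u))  # True
```

No call here ever reduces under a binder, yet iterating `whnf` through the recursive structure of `conv` ends up doing the work of strong reduction implicitly.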

James Wood (Aug 17 2022 at 10:14):

@Meven Lennon-Bertrand It sounds to me like you're describing doing strong reduction lazily rather than any form of weak reduction. There are various points where conversion checkers don't fully reduce the two terms to values before comparing them, but this is largely orthogonal to strong vs weak (i.e, open vs closed) reduction.

Meven Lennon-Bertrand (Aug 17 2022 at 10:20):

I guess it depends where you set the line between both. If your criterion is open vs closed context, then indeed, this is closer to strong reduction than weak reduction, because recursive calls happen under binders. But on the other hand there are multiple heuristics to avoid doing (strong) reduction all the way: not unfolding definitions when it is not needed, comparing terms for syntactic equality before reducing them, strict propositions which are not reduced… Which means that an algorithm like this is not merely a different way to organize strong reduction: the whole point is to avoid it as much as possible.

Meven Lennon-Bertrand (Aug 17 2022 at 10:23):

Anyway, my point was merely to emphasize that even in a very "type theoretic" setting, weak(-head) reduction is often an important tool.


Last updated: Jan 29 2023 at 06:02 UTC