I have just had a look at https://hal.inria.fr/inria-00434283/document (Efficient Normalization by Evaluation by Mathieu Boespflug). It looks like it might be a proposal for a significantly less finicky implementation of the VM, but I can't quite tell. Has what this paper describes already been integrated into Coq? What is the current state of things?
Native compute is related to NbE; I don't know about this paper specifically. cc @Maxime Dénès
Both the Coq VM and native compilation are a form of NbE, and I don't see in this paper anything outstandingly different from what we do.
Given the author, I wonder whether it isn't in fact at least indirectly linked to the implementation of the VM.
I implemented native_compute after an internship with Mathieu :) so it is more than just related to this paper
The paper says:
The machine of Grégoire and Leroy (2002) which COQ sometimes uses for the conversion test should also be mentioned here. Theirs is a modified and formalized version of a bytecode interpreter for OCaml (the ZAM), to do normalization via reduction to weak head normal form along with a readback phase to restart weak reduction under binders. Whilst offering striking similarities to NbE, including in its reuse of existing evaluators, one important difference lies in the fact that the implementation of the underlying evaluator needs to be modified, whereas the objective of NbE, here and elsewhere, is to get away without looking under the hood. As a side effect, NbE affords more freedom of choice regarding which evaluator to choose, allowing for instance to trade off minimizing the trusted base for better performance.
The principal extension made to the ZAM to normalize COQ terms is the introduction of accumulators, which represent applications of free variables to a number of terms. Embedding this construct within the virtual machine avoids having to do case analysis at every application to discriminate between function applications and applications of neutral terms. We show that with the simple optimization of Section 3.1, the overhead of this case analysis is very small in practice.
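For concreteness, here is roughly what that per-application case analysis looks like in a textbook NbE implementation, with accumulators as an ordinary constructor of the value type. This is just a minimal OCaml sketch I put together to follow the quote; the `term`/`value` types and all the names are mine, not Coq's actual code:

```ocaml
(* Minimal untyped NbE sketch (illustration only, not Coq's code).
   Values are either host-language closures or "accumulators":
   a free variable (as a de Bruijn level) applied to arguments. *)
type term =
  | Var of int                  (* de Bruijn index *)
  | Lam of term
  | App of term * term

type value =
  | VLam of (value -> value)    (* reuse the host evaluator's closures *)
  | VAccu of int * value list   (* level applied to args, newest first *)

(* The case analysis performed at every application. *)
let apply (f : value) (v : value) : value =
  match f with
  | VLam g -> g v
  | VAccu (x, args) -> VAccu (x, v :: args)

(* Weak evaluation into the semantic domain. *)
let rec eval (env : value list) (t : term) : value =
  match t with
  | Var i -> List.nth env i
  | Lam b -> VLam (fun v -> eval (v :: env) b)
  | App (f, a) -> apply (eval env f) (eval env a)

(* Readback: go under binders by applying closures to fresh
   accumulators; [d] is the current binding depth. *)
let rec readback (d : int) (v : value) : term =
  match v with
  | VLam f -> Lam (readback (d + 1) (f (VAccu (d, []))))
  | VAccu (x, args) ->
      List.fold_left
        (fun h a -> App (h, readback d a))
        (Var (d - x - 1))
        (List.rev args)

let normalize (t : term) : term = readback 0 (eval [] t)
```

Here `VAccu` plays the role of the paper's accumulator (a free variable applied to a list of arguments), `apply` is the case analysis the paper talks about, and `readback` is the readback phase that restarts reduction under binders.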
So I'm wondering whether the recent troubles with needing to change the accumulator tags, etc. (I've seen a bunch of issues/PRs about this recently, IIRC), could be avoided with the approach from this paper?
I don't recall what the status of the benchmarks was when this paper was written, but when we worked more on this, we found that having a native representation for function application did matter for performance (not only do you avoid the case analysis, you also give the compiler more opportunities to apply optimizations, etc.).
So in theory, the tag handling done by the VM is not necessary (you can implement the same calculus with an ADT of values and a case analysis at each application), but if you care about performance, the discrimination via tags is better.
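To make the design space concrete: one known way to avoid the match at application sites in plain OCaml is to decide once, at construction time, how a value behaves, e.g. by packaging each value with its own `app` and `quote` functions. A sketch only, reusing the `term` type from the sketch above (the names `vlam`/`vneu`/`vaccu` are purely illustrative; this is not what the VM does, which instead represents accumulators so that they can be applied exactly like closures, hence the special tags):

```ocaml
(* Sketch only: values carry their own behaviour, so application is a
   field access plus a direct call, with no match on the value's shape.
   Reuses the [term] type from the sketch above. *)
type value = { app : value -> value; quote : int -> term }

(* A function: [app] is the closure itself; [quote] rebuilds a Lam by
   applying the closure to a fresh neutral at the current depth. *)
let rec vlam (f : value -> value) : value =
  { app = f;
    quote = (fun d -> Lam ((f (vaccu d)).quote (d + 1))) }

(* A fresh variable at de Bruijn level [x]. *)
and vaccu (x : int) : value =
  vneu (fun d -> Var (d - x - 1))

(* A neutral value: applying it just records the argument. *)
and vneu (q : int -> term) : value =
  { app = (fun v -> vneu (fun d -> App (q d, v.quote d)));
    quote = q }

let rec eval (env : value list) (t : term) : value =
  match t with
  | Var i -> List.nth env i
  | Lam b -> vlam (fun v -> eval (v :: env) b)
  | App (f, a) -> (eval env f).app (eval env a)

let normalize (t : term) : term = (eval [] t).quote 0
```

Application is then a field access plus a direct call, with no case analysis; the tag-based representation in the VM achieves a similar effect at a lower level.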
That being said, I'm sure there are solutions to these issues that strike a good compromise. I haven't had time to review the PRs yet, but I will do so soon with Benjamin Grégoire.
Jon and I wrote a short note about NbE and gluing:
https://arxiv.org/abs/1809.08646
Gluing has recently become a popular tool in type theory, e.g.:
https://drops.dagstuhl.de/opus/volltexte/2019/10532/pdf/LIPIcs-FSCD-2019-25.pdf
https://www.jonmsterling.com/pdfs/sa20.pdf