I am once again hitting the limits of what seems possible with reflective tactics. One reason is that I cannot get Coq to perform the necessary computation at Qed time in an efficient way. I know which reduction is necessary and I can perform, for example, the required `lazy` invocation at tactic runtime to compute my terms. However, even using these terms in casts in the proof term does not help sufficiently with Qed performance. I am already forcing every term that can be forced. Unfortunately, the tactic deals with telescopes, and a whole lot of telescope terms cannot be forced because they are basically unapplied functions. Additionally, the terms my tactic generates contain terms provided by users, so a vm cast is out of the question. If not for that, vm casts would probably be quite helpful to at least reduce the constant factors.
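For reference, vm casts are written with `<:` at the term level; here is a minimal, self-contained example of the mechanism (illustrative only, not taken from the tactic in question):

```coq
(* A vm cast asks the kernel to check the conversion with vm_compute at
   Qed time.  That is only fast when all involved terms compute well,
   which arbitrary user-provided terms do not guarantee. *)
Lemma four : 2 + 2 = 4.
Proof. exact (@eq_refl nat 4 <: 2 + 2 = 4). Qed.
```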
It seems to me that there are several features that could potentially help in this situation and I wanted to ask if any of them were realistically implementable:

1. casts that the kernel checks with `lazy` (similar to vm casts), also with flags
2. […]
3. a "special constructor" that blocks reduction

Would you be happy with weak-head reductions for `cbv` or `lazy`? This would allow implementing the "special constructor" just as a thunk.
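A minimal sketch of that thunk idea; the weak-head strategy itself does not exist yet, so the `whd_lazy` in the comment is hypothetical:

```coq
(* Wrapping a term under a dummy binder puts it in weak-head normal form,
   so a weak-head strategy would leave the body alone. *)
Definition blocked (x : nat) : unit -> nat := fun _ => x * x.

(* Hypothetical: Eval whd_lazy in blocked 7 would stop at fun _ => 7 * 7,
   whereas today's full strategies reduce under the binder: *)
Eval lazy in blocked 7.   (* = fun _ : unit => 49 *)
```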
Can't 3. already be implemented with modules and module types?
I don't see how to do that in a modular way (pun intended). You'd have to define an opaque constant for every term you want to block.
I agree, but if there are only a few, that may remain tractable.
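A minimal sketch of the module-based blocking, assuming one is willing to write this once per constant; the names `BLOCKED`/`Blocked`/`big` are illustrative:

```coq
Module Type BLOCKED.
  Parameter big : nat.
  Axiom big_eq : big = 100 * 100.
End BLOCKED.

(* Sealing the module behind its signature makes big opaque: the kernel
   cannot unfold it, so no reduction strategy will force it. *)
Module Blocked : BLOCKED.
  Definition big := 100 * 100.
  Lemma big_eq : big = 100 * 100.
  Proof. reflexivity. Qed.
End Blocked.

Eval lazy in Blocked.big.   (* = Blocked.big, i.e. it stays folded *)
```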
3 is similar to what is done in MetaOCaml, which would probably be a fairly invasive change. It could probably be approximated (with some drawbacks and limitations) by a function `VMstop : forall t, t -> t` such that when `vm_compute` sees `VMstop _ t` it does not reduce `t`.
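For illustration, this is what the interface would look like; the support itself is hypothetical, since today the definition is just the identity and `vm_compute` happily unfolds it:

```coq
(* Hypothetical marker: vm_compute would need kernel support to
   special-case this constant and leave (VMstop _ t) untouched. *)
Definition VMstop (t : Type) (x : t) : t := x.

(* Without that support, the marker is transparent: *)
Eval vm_compute in VMstop nat (2 + 2).   (* = 4, i.e. the blocking fails *)
```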
The kernel already performs only weak-head reduction for both the lazy and cbv conversion checks, so I think it would be very easy to expose these reduction functions to the end user. You could then lock computation by thunking either under a function or under a coinductive block (for sharing).
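A sketch of the two locking idioms mentioned, assuming such weak-head reduction functions were exposed; the `thunk` type and its names are illustrative:

```coq
(* Locking under a function: the lambda is already in weak-head normal
   form, but every consumer re-reduces the body. *)
Definition locked_fun : unit -> nat := fun _ => 100 * 100.

(* Locking under a coinductive block: the constructor is likewise in
   weak-head normal form, and the lazy machine can share the forced body. *)
CoInductive thunk (A : Type) : Type := Thunk : A -> thunk A.
Arguments Thunk {A} _.

Definition force {A : Type} (t : thunk A) : A :=
  match t with Thunk x => x end.

Definition locked_co : thunk nat := Thunk (100 * 100).
```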
I am not sure if weak-head reduction would be enough, and I am saying that as somebody who uses weak-head reduction for almost everything. The problem is that it is only efficient if the terms can be forced. The reason we use `lazy` right now is that it can reach terms that we cannot force with weak-head reduction. I fear anything that is restricted to weak-head reduction is going to suffer from the same problems we already see with Coq's typechecker at Qed time.
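A small example of the kind of term where this matters; `tele_body` is a made-up stand-in for an unapplied telescope function:

```coq
(* The term is an unapplied function, hence already a weak-head value: any
   weak-head strategy returns it unchanged and the redex in the body
   survives all the way to the kernel.  Full lazy normalizes it. *)
Definition tele_body := fun b : bool => 2 + 2.

Eval lazy in tele_body.   (* = fun b : bool => 4 *)
```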
A term blacklist (no matter how it is implemented) is not a solution for a tactic that ought to work with arbitrary user terms. Even if users could extend the list somehow, it would still be too much to ask to register every single constant. We use whitelists extensively in reduction, but those don't even have a corresponding kernel-level feature in the way blacklists have module types, so they don't translate to proof terms at all, not even in a painful way.
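For contrast, the whitelists referred to here are tactic-level delta flags like the following, which indeed have no counterpart in the proof term:

```coq
Goal 2 + 2 = 4.
Proof.
  (* lazy with a whitelist: only Nat.add may be unfolded. *)
  lazy [Nat.add].
  reflexivity.
Qed.
```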
@Pierre-Marie Pédrot independently of their applicability to the current discussion, I support any plan to implement weak-head variants of reduction strategies :)