I've written a recursive function, and I want to just have a look at what output it produces for an example input (closed term). The output has quite a complex inductive type, containing record types, function types, &c. I find that
Eval vm_compute, &c take more time than I can wait for, and
Eval simpl, &c don't normalise as deeply as I would like. Is there some sort of middle ground, where I can reduce the whole term to WHNF and then reduce each of the arguments of the resulting head constructor to WHNF, or something like that? Being able to do so with some amount of interactivity would be nice.
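(One possible middle ground, for reference: Coq's `Eval hnf` reduces only to weak head normal form, and can be reapplied to arguments of the result to peel off one layer at a time. A minimal self-contained sketch:)

```coq
Require Import Coq.Lists.List.

Definition pairs := map (fun n => (n, n * n)) (seq 0 3).

(* hnf stops at the head constructor: *)
Eval hnf in pairs.
(* The result's head is [cons]; its arguments stay unreduced, and each
   argument can be reduced one more level with another [Eval hnf]. *)
```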
This is odd. If you apply a function to a closed term, Eval vm_compute and even Eval cbv should be reasonably fast.
I see two possibilities:
You can measure the run time for inputs of increasing complexity with
Time Eval cbv ... and analyze the asymptotic behavior of your code.
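(A sketch of such a benchmark; `slow` is an illustrative, deliberately exponential Fibonacci, standing in for your own function.)

```coq
(* Hypothetical benchmark: time cbv on inputs of growing size and
   compare the timings to see how the code scales. *)
Fixpoint slow (n : nat) : nat :=
  match n with
  | O => 1
  | S O => 1
  | S (S m as p) => slow p + slow m
  end.

Time Eval cbv in slow 16.
Time Eval cbv in slow 20.
Time Eval cbv in slow 24.
```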
In case I need to do deep reductions on terms with variables, I usually use an explicit delta symbol list with cbv, say
cbv delta [sym1 sym2 sym3 ...] beta iota zeta. The symbol list can be lengthy; I am working on Ltac2 functions to manage symbol lists, say all symbols from module X, but this is not yet ready.
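(A small self-contained illustration of such a restricted cbv; `sym1`..`sym3` and `hidden` are hypothetical stand-ins for your own symbols.)

```coq
Definition sym1 := 1.
Definition sym2 := 2.
Definition hidden := 100.
Definition sym3 := sym1 + sym2 + hidden.

(* Only the listed constants are delta-expanded: *)
Eval cbv delta [sym1 sym2 sym3] beta iota zeta in sym3.
(* [hidden] (and [Nat.add]) stay folded, so the result is
   [1 + 2 + hidden] rather than a numeral. *)
```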
I think what slows it down is that, deep down, there are two bijections (represented as two functions and two proofs) in the output which I don't really care about seeing, but are very big (even just the functions are big
match expressions, which I think were ultimately generated by Equations). So maybe this suggests that limiting delta-reductions will give me a good enough result.
OK, to get fast evaluation you should separate the computational content from proofs.
cbv honors the opaqueness of proofs, but afaik vm_compute and native_compute don't, so this can get quite ugly. In general you should write a function without any proof content and then prove things about this function separately.
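(A minimal sketch of that separation: the function carries no proof terms, and the property is proved afterwards and sealed with Qed, so evaluation never drags the proof along.)

```coq
Require Import Lia.

(* Proof-free computational content: *)
Fixpoint double (n : nat) : nat :=
  match n with
  | O => O
  | S m => S (S (double m))
  end.

(* The specification lives in a separate, Qed-opaque lemma: *)
Lemma double_spec : forall n, double n = n + n.
Proof. induction n; simpl; lia. Qed.

Eval cbv in double 5.   (* reduces without touching the proof *)
```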
The functions in the bijections are computational, it just happens that I don't want to see them in their entirety.
Please note that Equations also comes with dedicated simplification tactics for the functions it creates (which are opaque by default, afair, so cbv won't expand them). Maybe this opaqueness of Equations-defined functions is your problem. You can fix it either by using the Equations simplifier or by making the functions transparent; otherwise cbv won't expand them.
Such unexpanded functions can lead to arbitrary term blow up.
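(A hedged sketch of the two fixes just mentioned; it requires the Equations plugin, and the default opacity behavior varies across Equations versions, so check your setup.)

```coq
From Equations Require Import Equations.

Equations dbl (n : nat) : nat :=
  dbl O := O ;
  dbl (S m) := S (S (dbl m)).

(* If [dbl] is opaque to cbv, make it transparent: *)
Transparent dbl.
Eval cbv in dbl 3.

(* Or stay within Equations' own simplifier, which rewrites with
   the generated defining equations: *)
Goal dbl 2 = 4.
Proof. now simp dbl. Qed.
```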
cbv honors the opaqueness of proofs, but afaik vm_compute and native_compute don't, so this can get quite ugly.
You're right about Opaque, but they all honor Qed, which is the right way to make proofs opaque.
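(A small illustration of the difference, using only the standard machinery: a Qed-closed constant has no accessible body for any reduction machine, while a Defined one unfolds normally.)

```coq
Lemma two_q : nat. Proof. exact 2. Qed.
Definition two_d : nat. Proof. exact 2. Defined.

Eval vm_compute in two_d.  (* 2 *)
Eval vm_compute in two_q.  (* stays [two_q]: Qed-opacity is honored *)
```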
@James Wood it might be appropriate to "seal" those functions, using coq-elpi's mlock commands (or the ssreflect pattern by hand), or to project out the relevant parts from the output.
The problem with sealing, of course, is that it'd make those functions non-computational, which might not be appropriate.
To address the easy parts: I'm not projecting out the relevant parts because they're underneath inductive constructors. I know there are tricks I can do to get around that, but it takes effort. I'm starting to think, though, that this is the lowest effort I can achieve.
@Paolo Giarrusso: thanks, interesting, I didn't know that. I assumed Qed just set the unfolding strategy. But what does Qed do then, technically? Do definitions have something like an "original opaqueness"?
Yes, there is an original opacity: Qed produces something closer to an axiom, because the name is not definitionally equal to the body.
Moreover, Opaque cannot replicate this behavior, because existing terms could become ill-typed.
Ssreflect's sealing pattern uses the only mechanism that can make things opaque "a bit later": module sealing, i.e. hiding the body behind a module signature.
https://github.com/LPCIC/coq-elpi/blob/master/apps/locker/README.md#example-of-mlock is a good summary of the pattern, if that helps
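(For reference, the sealing pattern from that README looks roughly like this; a hypothetical sketch, requiring the coq-elpi locker app, and the exact unlocking idiom is documented in the README.)

```coq
From elpi.apps Require Import locker.

mlock Definition secret : nat := 42.

(* [secret] is now hidden behind a module interface: cbv, simpl and
   vm_compute all leave it folded.  The command also generates an
   unlocking equation for use in proofs (see the README). *)
```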
Last updated: Sep 23 2023 at 14:01 UTC