Yesterday I had a longer discussion with an Isabelle user about computation and simplification, so I read the corresponding section in the Isabelle manual (Section 9.3.2, "Declaring rules"). I generally very much appreciate Coq's inherent notion of computation, but on open terms simpl, cbn, cbv & Co. more frequently go wild than do something useful, unless one exercises very detailed fine-grained control, either via Strategy or explicit delta expansion. The structured rewrite mechanism of Isabelle - apparently based on bottom-up recursion over the term plus head-symbol matching, rather than pattern matching over complete terms - seems to be efficient, offers the control I would like as a user, can simplify both 0+x and x+0, and doesn't loop on commutativity rules. That the main Coq simplifiers (simpl and cbn) perform only convertible simplifications (e.g. not x+0=x) honestly seems a bit dogmatic.
Does something like this already exist? If not, shouldn't we have both in Coq: efficient computation for closed (or otherwise reasonably computing) terms, and an Isabelle-style rewrite for open terms? It doesn't seem that complicated to implement, e.g. in Ltac2 (maybe with some OCaml optimizations like id hashing, but it should work reasonably well with Ltac2 as is).
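To make the idea concrete, here is a toy model, in Python rather than Ltac2, of the Isabelle-style simplifier described above: bottom-up recursion over the term, with rewrite rules indexed by head symbol so that both 0+x and x+0 simplify. The term encoding (nested tuples) and the '?'-prefixed pattern variables are illustration choices of mine, not anything from Coq or Isabelle.

```python
# Toy Isabelle-style simplifier: bottom-up traversal with rewrite rules
# indexed by head symbol.  Terms are nested tuples like ('+', 'x', 0);
# variables in rule patterns start with '?'.

def match(pat, term, subst):
    """Syntactic matching of a pattern against a term (one-way)."""
    if isinstance(pat, str) and pat.startswith('?'):
        if pat in subst:
            return subst if subst[pat] == term else None
        s = dict(subst)
        s[pat] = term
        return s
    if isinstance(pat, tuple) and isinstance(term, tuple) and len(pat) == len(term):
        for p, t in zip(pat, term):
            subst = match(p, t, subst)
            if subst is None:
                return None
        return subst
    return subst if pat == term else None

def instantiate(pat, subst):
    if isinstance(pat, str) and pat.startswith('?'):
        return subst[pat]
    if isinstance(pat, tuple):
        return tuple(instantiate(p, subst) for p in pat)
    return pat

def head(term):
    return term[0] if isinstance(term, tuple) else term

def simp(term, rules):
    """Simplify subterms first, then try only the rules stored under the
    head symbol of the (already simplified) term, re-simplifying the
    result until no rule fires."""
    if isinstance(term, tuple):
        term = (term[0],) + tuple(simp(t, rules) for t in term[1:])
    for lhs, rhs in rules.get(head(term), []):   # head-symbol discrimination
        s = match(lhs, term, {})
        if s is not None:
            return simp(instantiate(rhs, s), rules)
    return term

# Rules indexed by head symbol; both 0 + x and x + 0 simplify:
rules = {'+': [(('+', 0, '?x'), '?x'), (('+', '?x', 0), '?x')]}
print(simp(('+', 0, ('+', 'x', 0)), rules))   # -> 'x'
```

A real Ltac2 version would additionally have to produce an equality proof for each step; this sketch only computes the simplified term.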
Btw.: is rewrite_strat bottomup (hints <somedb>) efficient in the sense that it uses a head-symbol hash table for rewrite hint databases? And does it in the end boil down to pattern matching over complete terms, or does bottomup lead to an efficient piecewise rewrite, say by bottom-up construction of an equality proof for the complete term which is used in the end? Besides such efficiency considerations, the main thing I would be missing from rewrite_strat is the concept of an order on terms, which Isabelle uses to avoid looping, e.g. on commutativity rewrites, so that one doesn't need a separate tactic like AAC. One could have an ordered strategy in rewrite_strat to allow only rewrites which are decreasing wrt. the term order. And I guess one would need some way to name a rewrite strategy for reuse and for the construction of more complex ones.
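The "piecewise rewrite" alternative asked about above can be sketched in a few lines: rewrite subterms first and assemble a single equality proof for the complete term out of congruence and rule steps, instead of re-matching the whole goal after every rewrite. This is a toy Python model of the idea, not a description of what rewrite_strat actually does internally; the proof constructors ('refl', 'rule', 'cong', 'trans') and the restriction to ground rules are simplifications of mine.

```python
# Toy piecewise bottom-up rewrite: returns the simplified term together
# with a symbolic equality proof for the complete term, built from
# ('refl',), ('rule', name), ('cong', subproofs) and ('trans', p1, p2).
# Rules here are ground equations (name, lhs, rhs) for simplicity.

def rewrite_bottomup(term, rules):
    """Return (simplified_term, proof_of_equality)."""
    if isinstance(term, tuple):
        results = [rewrite_bottomup(t, rules) for t in term[1:]]
        sub_terms = tuple(r[0] for r in results)
        sub_proofs = tuple(r[1] for r in results)
        new = (term[0],) + sub_terms
        cong = ('refl',) if all(p == ('refl',) for p in sub_proofs) \
               else ('cong', sub_proofs)
    else:
        new, cong = term, ('refl',)
    for name, lhs, rhs in rules:
        if new == lhs:                        # ground matching only (toy)
            step = ('rule', name)
            return rhs, step if cong == ('refl',) else ('trans', cong, step)
    return new, cong

rules = [('add_0_r', ('+', 'x', 0), 'x')]
term = ('*', ('+', 'x', 0), ('+', 'x', 0))
print(rewrite_bottomup(term, rules))
```

Note that each subterm is visited once and the final proof mirrors the structure of the term, which is the efficiency property the question is after.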
Maybe @Matthieu Sozeau and @Fabian Kunze as main authors of existing rewrite technology can comment?
I don't see myself as an expert/main author in this area (I only maintained AAC for a while).
@Jason Gross knows a lot about rewrite_strat et al.
rewrite_strat is a step in the right direction, but there are two issues that prevent it from being efficient:

1. It instantiates rewrite lemmas with evars to check whether they match, and evars have bad asymptotics in the size of the context.

2. Like repeat setoid_rewrite, it still produces proof terms that are quadratically sized in cases where there is only linear work to be done. Doing better than this is actually quite involved, because it requires careful sharing of terms in let binders (and moreover requires a careful bookkeeping analysis to show that the resulting term is in fact linear in the size of the goal plus the amount of rewriting to be done, rather than linear in their product). Moreover, I believe it requires a change to the way the Coq pretyper handles let binders and a change to the way the kernel typechecks let binders, to ensure that we can generate and check a term in time that is linear in the number of let binders rather than quadratic. (Also, we can't use evars when generating this term, again because evars have bad asymptotics in the size of the context.) I have some plans to at least write up this algorithm in the hopefully-not-too-distant future. It's also possible @Pierre-Marie Pédrot has already implemented the kernel change or the pretyper change or both; I can't quite recall.

Finally, rewrite_strat still has a number of bugs; for example, it generates incorrect proof terms if you do reduction and then rewriting, rather than doing some rewriting first. And it only provides topdown and bottomup strategies, neither of which is actually the strategy you want to be using when binders are involved. (AFAICT, the correct strategy here is the one of normalization by evaluation, which is what the VM, the native compiler, cbv and, I think, lazy all use: rewriting is bottom-up except when you hit lambdas, whose bottom-up rewriting gets thunked/delayed until you discover whether rewrite rules higher up are going to result in the lambda being applied to something. Not using this strategy means that rewrite_strat will go under binders to find head-symbol matches without actually finding rewriting opportunities far more often than is actually necessary.)
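The quadratic-size point in (2) can be illustrated with a toy size count, independent of Coq: if each of n rewrite steps in a goal of size n spells out the full intermediate goal (as a naive transitivity chain does), the total proof size is O(n^2), whereas recording the unchanged context once and only O(1) per step (the role the let binders would play) keeps it O(n). The encoding below is a hypothetical model of mine, not Coq's actual proof-term representation.

```python
# Toy illustration of quadratic vs. linear proof-term size.
# A "goal" is a list of n cells; each rewrite step changes one cell.

def naive_chain(goal):
    """Transitivity chain: every step records the full intermediate goal."""
    steps, cur = [], list(goal)
    for i in range(len(cur)):
        cur[i] = 'b'
        steps.append(list(cur))          # whole intermediate goal copied
    return steps

def shared_chain(goal):
    """Each step records only the changed position; the unchanged context
    is stored once (standing in for let-binder sharing)."""
    return {'context': list(goal),
            'steps': [(i, 'b') for i in range(len(goal))]}

def size(x):
    """Count the atoms in a nested structure."""
    if isinstance(x, (list, tuple)):
        return sum(size(y) for y in x)
    if isinstance(x, dict):
        return sum(size(v) for v in x.values())
    return 1

n = 100
goal = ['a'] * n
print(size(naive_chain(goal)), size(shared_chain(goal)))
```

For n = 100 the naive chain has 10000 atoms against 300 for the shared one, i.e. linear in goal size plus number of rewrites rather than their product.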
Regarding ordered: that is not too hard to implement; just add a side-condition to your lemma with a dummy type like Definition in_order {A B} (a : A) (b : B) := True. and then tell the hint database that all side-conditions of your lemma need to be solved with the Ltac2 tactic prove_in_order, which checks the ordering and only solves the trivial goal if the terms are in order.
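The effect of such an in_order side condition can be modeled outside Coq in a few lines: a commutativity rule x + y -> y + x is applied only when it strictly decreases the term in a fixed order, so rewriting terminates instead of looping and both orientations normalize to the same form. The string-based order below is an arbitrary stand-in for whatever order prove_in_order would check.

```python
# Toy ordered rewriting: apply +-commutativity bottom-up, but only in
# order-decreasing instances, so it terminates and acts as a normalizer.

def term_key(t):
    """A fixed (arbitrary) total order on terms, via their printed form."""
    return repr(t)

def comm_ordered(term):
    if isinstance(term, tuple) and term[0] == '+':
        a, b = comm_ordered(term[1]), comm_ordered(term[2])
        # the side condition: only swap if the result is strictly smaller
        if term_key(b) < term_key(a):
            return ('+', b, a)
        return ('+', a, b)
    return term

print(comm_ordered(('+', 'b', 'a')))   # -> ('+', 'a', 'b')
print(comm_ordered(('+', 'a', 'b')))   # -> ('+', 'a', 'b'), already in order
```

Unrestricted commutativity would flip ('+', 'a', 'b') back and forth forever; with the order check the rule is a no-op on terms that are already in order.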
I guess I should have started with: AFAIK rewrite_strat is efficient in both of the senses you mention, using head symbols to discriminate (though maybe only if you Set Keyed Unification?) and building a piecewise term. (It's just that the other efficiency concerns above still give it the wrong asymptotics.)
@Jason Gross : thank you very much for all the valuable details! Do you think it makes sense to profile rewrite_strat against Isabelle for some synthetic use cases and see where we stand?
A few questions:
Regarding 1.): if I understand you right, rewrite_strat instantiates the premises of rewrite lemmas with evars before it checks if the lemma actually matches. From your further description I take it that it does this only if the head symbol matches but further unification fails. Is this correct?
Regarding (1): the instantiation with evars is how it tells whether or not the lemma matches. This is in part because matching is not syntactic except on the head symbol, and so something like evars is needed to perform unification.
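This division of labor, with a cheap head-symbol filter followed by hole instantiation to decide whether the lemma really applies, can be sketched as a toy matcher (the '?'-prefixed names play the role of the evars; the encoding is an illustration, not Coq's):

```python
# Toy model of lemma matching: the head symbol is a cheap syntactic
# filter, but deciding applicability needs holes ("evars", names
# starting with '?') to be instantiated against the goal term.

def try_lemma(pattern, term):
    """Return the hole instantiation if the lemma's LHS matches, else None."""
    if not (isinstance(pattern, tuple) and isinstance(term, tuple)
            and pattern[0] == term[0]):          # head-symbol filter
        return None
    holes = {}
    def go(p, t):
        if isinstance(p, str) and p.startswith('?'):
            if p in holes and holes[p] != t:
                return False                     # same evar, two values
            holes[p] = t
            return True
        if isinstance(p, tuple) and isinstance(t, tuple) and len(p) == len(t):
            return all(go(pi, ti) for pi, ti in zip(p, t))
        return p == t
    return holes if go(pattern, term) else None

# head symbol '+' matches and the evars pick up instantiations:
print(try_lemma(('+', '?x', ('+', '?y', '?x')), ('+', 'a', ('+', 'b', 'a'))))
# head symbol matches but full matching fails (?x can't be both a and b):
print(try_lemma(('+', '?x', '?x'), ('+', 'a', 'b')))
```

The second call shows why the head symbol alone cannot decide applicability: the filter passes, but instantiating the holes reveals the mismatch.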
Profiling rewrite_strat against Isabelle might be nice, but I'm not sure it makes sense to investigate while there are some well-understood severe inefficiencies in the current implementation.
Last updated: Oct 13 2024 at 01:02 UTC