I originally came to Coq after using Idris, and never picked up tactic usage. By now I think I'm quite adept at navigating dependent pattern matching, fixpoints, and cofixpoints, but of course terms written directly this way get very tedious as they grow larger. Are there any good introductions to tactics for someone who already knows how to work this way?
In particular, should I think of tactics as metaprogramming over the constructs I already know or as a standalone language for "proofs" that happens to be compiled to the constructs I know? One thing I like about defining by writing out the Gallina directly is that I have a pretty small set of composable primitives that make up everything, and once I understand those I can at least understand any term I find; if I think of tactics as metaprogramming I think I can recover that sense (though the set of primitives gets larger) but if I'm thinking of it as a proof language I'm not sure how to find the "core", if there even is one.
Given your background, yes, you should think of tactics as metaprogramming.
And you can use refine as the basic tactic: it just allows you to transform a "hole" into something more precise (which may itself still contain holes).
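A minimal sketch of that hole-filling discipline (plain Coq, no extra libraries; the lemma and its name are made up for illustration). Each `_` in a refine call is a hole that becomes a new goal:

```coq
(* Proving A /\ B -> B /\ A purely by refining holes. *)
Lemma and_comm' (A B : Prop) : A /\ B -> B /\ A.
Proof.
  refine (fun H => _).                      (* introduce the hypothesis *)
  refine (match H with conj a b => _ end).  (* case analysis on the pair *)
  refine (conj _ _).                        (* build the pair, leaving two holes *)
  - exact b.
  - exact a.
Qed.
```

Each step corresponds exactly to writing the Gallina term by hand, one node at a time.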
But to avoid confusion, you should also understand why some tactics often attempt some "magic".
This is because some other users want a tactic-based proof to resemble a proof on paper.
This "magic" takes you further away from the metaprogramming intuition.
Thanks! So my current rough intuition for the basic concepts is that I have some term with holes, and I'm writing a program that can inspect the term itself, understand the term's context (e.g. what's in scope), build up terms (including ones with new holes), etc. Plus I get a backtracking monad, and some databases (keyed on what?) that give me hints I can try. Is that basically right?
Hint databases are supposed to be keyed on the head constant of their conclusion, but I'm not sure whether that's absolutely true in practice.
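For instance (a hedged sketch; the database name mydb and the goal are made up, and recent Coq versions want the `#[global]` locality attribute), `auto with mydb` will consider `le_n` and `le_S` because `le` is the head constant of their conclusions:

```coq
Require Import Arith.

Create HintDb mydb.
#[global] Hint Resolve le_n le_S : mydb.

(* The goal's head constant is le, so both hints are candidates. *)
Goal 2 <= 4.
Proof. auto with mydb. Qed.
```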
I'm also using Coq after Idris and I personally find ssreflect's tactics more intuitive, as they do form a small set of fairly composable primitives. I like to think about them as a mixed concatenative-logic (think Forth+Prolog) metalanguage: there's a stack in your goal(s) and you exploit the unification engine to move forward.
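A small sketch of that style (needs only the ssreflect plugin that ships with Coq; the lemma is my own example): `:` pushes `n` from the context onto the goal "stack", `elim` consumes it, and `=>` pops the results back off:

```coq
From Coq Require Import ssreflect.

Lemma addn0 (n : nat) : n + 0 = n.
Proof. by elim: n => [|n IH] //=; rewrite IH. Qed.
```

`//=` simplifies and discharges the trivial base case; `rewrite IH` then closes the inductive step.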
recent automation like sauto makes it possible to use the "Isabelle style", where most statements inside a proof script are intermediate facts (and not much other detail is given):
have fact1: [...] by sauto. have fact2: [...] by lia.
This style is likely to be the one that is easiest to transfer across systems: https://arxiv.org/abs/1804.07860
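A hedged sketch of that style (the lemma, statement, and fact names are made up; sauto comes from the CoqHammer plugin, so plain lia stands in for it here):

```coq
From Coq Require Import ssreflect Lia.

Lemma sketch (n : nat) : n + n <= 2 * n + 1.
Proof.
  (* Each step states a self-contained fact and discharges it at once. *)
  have fact1 : n + n = 2 * n by lia.
  have fact2 : 2 * n <= 2 * n + 1 by lia.
  by rewrite fact1.
Qed.
```

The script records what holds at each step rather than how the goal was manipulated, which is what makes it portable.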
personally I think the default Ltac style is a bad compromise between maintainability and productivity. SSReflect makes scripts extremely robust (they fail early) and detailed, but they are not very "permissive" w.r.t. changes in definitions. The Isabelle style is much more permissive in this sense and demands much less detail from the user. Default Ltac falls in between.
if I were going full-on dependent types I would probably live by Equations and refine, even for proofs. But even then, I think it's difficult to be productive in this approach.
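A sketch of what that looks like, assuming the coq-equations plugin is installed (the function and lemma are made-up examples). Equations compiles clausal definitions to Gallina and generates defining equations that `simp` can rewrite with:

```coq
From Equations Require Import Equations.

(* A clausal definition, compiled down to a Gallina fixpoint. *)
Equations double (n : nat) : nat :=
  double 0 := 0;
  double (S n) := S (S (double n)).

(* simp rewrites with the generated defining equations. *)
Lemma double_S (n : nat) : double (S n) = S (S (double n)).
Proof. now simp double. Qed.
```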
Q: how does have survive changes in definitions?
my experience (probably with bad proofs) was with assertions whose statements depended on internals.
have statements will likely depend on internals, but to a lesser degree than a lot of low-level tactics (which may even be tied to the declaration order of inductive type constructors)
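An illustrative sketch (my example, not from the thread) of that fragility: the intro pattern below is tied to the order and arity of option's constructors, so reordering or adding a constructor breaks the script, whereas a have statement depends only on the fact it states:

```coq
From Coq Require Import ssreflect.

Lemma option_dec (A : Type) (o : option A) :
  o = None \/ exists a, o = Some a.
Proof.
  case: o => [a|].  (* branch order mirrors the constructor order: Some, then None *)
  - by right; exists a.
  - by left.
Qed.
```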
the paper gives examples of how statements can survive being translated between proof assistants
Also, that experience was with common unfold-and-mash proofs, not with math-comp ones.
Last updated: Jan 27 2023 at 00:03 UTC