I was surprised to hear:
Emilio Jesús Gallego Arias said:
math-comp is an amazing tool, and so far it has not been surpassed IMO, however it is optimized for one particular thing: ultra expert users writing a particular set of proofs in a particular style [for example, no automation]
Why would it be required not to use automation? How often is this requirement put forward? What is the story?
Ignat Insarov said:
I was surprised to hear:
Emilio Jesús Gallego Arias said:
math-comp is an amazing tool, and so far it has not been surpassed IMO, however it is optimized for one particular thing: ultra expert users writing a particular set of proofs in a particular style [for example, no automation]
Why would it be required not to use automation? How often is this requirement put forward? What is the story?
As another mathcomp novice, I'm also really curious about the answer here — I have some cognitive dissonance, since I recently read CPDT with its praise for heavy automation (and, on a more personal level, I miss letting `auto` do the easy stuff for me).
My best answer to that question is section 4.3 of https://zenodo.org/record/4457887#.YNGPozqxW8Q
The short answer is that mathcomp has no CPDT-style automation because of maintenance cost. I invite you to look at the CPDT mercurial repository. MC works unchanged with many versions of Coq, and its maintenance cost has been close to zero for as long as I can remember.
That said, I'd love to have reliable automation. Some of it is (slowly) reaching mathcomp, e.g. https://github.com/math-comp/algebra-tactics and https://github.com/math-comp/mczify
Enrico Tassi said:
My best answer to that question is section 4.3
It's hard to compete in an ad-hoc manner with an answer that consists of a book page. :smile:
Now that I look at it again (and fix a few typos), section 4.3.1 is about readability of proofs, and takes `addnCA` as an example of one of these lemmas which conveys very little to the reader; it's a boring detail to shuffle parentheses around, so it is better named in a compact way and left to fly under the radar. Hopefully to be replaced by `ring` soon ;-)
Wow, I have the draft version of the book (Sat, 11 Aug 2018 18:11:31 +0200, v1-183-gb37ad74) and it has a completely different table of contents… Imma go download the new version!
I must add that it is a common misconception that there is "no automation" in mathcomp. There is heavy automation for structure inference, small scale computations, simultaneous case analysis or application of various views... We usually refer to it as "small scale automation", because the scope of this automation is well delimited and "short range" (for some definition of short).
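A tiny example of what that looks like in practice (assuming a recent mathcomp is installed; the lemma name is made up): the inequality below is a boolean computation, so the trivial ssreflect tactic closes it outright, with no proof search involved.

```coq
(* Small scale automation: the statement is a decidable boolean fact,
   so it is discharged by computation rather than by proof search. *)
From mathcomp Require Import ssreflect ssrbool ssrnat.

Lemma small_scale_example : 3 < 7.
Proof. by []. Qed.
```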
It is true that, at the moment, mathcomp does not use automation tactics such as `ring`, `field`, `lra`, for purely technical incompatibilities, but it is going to get better.
The only automation mathcomp tries to avoid at all costs is unreliable/unpredictable forms of automation, e.g. one should never rely on the result of `ring_simplify`, or on tactics whose heuristics are too fuzzy.
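To make the distinction concrete, here is a minimal sketch over `Z` (plain Coq, nothing mathcomp-specific; the goals are made up for illustration):

```coq
Require Import ZArith.
Open Scope Z_scope.

(* Reliable: ask the decision procedure to close a goal in its domain.
   It either succeeds or fails loudly; no later line of the script depends
   on the exact shape of an intermediate normal form. *)
Goal forall a b : Z, (a + b) * (a + b) = a * a + 2 * a * b + b * b.
Proof. intros a b. ring. Qed.

(* Fragile: calling [ring_simplify] and then writing the rest of the script
   against whatever normal form it happens to produce.  That normal form is
   not part of the tactic's contract and may change across versions, which is
   exactly the kind of unpredictability to avoid relying on. *)
```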
Enrico, I looked at that repository — I suppose you refer to commits like these? _(Newest entries at the top.)_
2019-01-21 Adam Chlipala Back to working in Coq 8.4
2019-01-21 Adam Chlipala Port to Coq 8.9.0
2017-07-12 Adam Chlipala And working with 8.6 again
2017-07-12 Adam Chlipala Working again with Coq 8.4
2017-07-12 Adam Chlipala Fixes for Coq 8.6
So, it turns out there are some hidden pitfalls one should be careful about when wielding automation in Coq, and there is no agreement on whether automation should be embraced or avoided.
But what exactly are the requirements that the common automation tactics like `ring` and `auto` do not fulfill at this time but may fulfill in the future? Are these shortcomings going to be fixed in Coq itself, or is it going to be limited to the versions of said tactics that would be provided by mathcomp? What kind of _«technical incompatibilities»_ are there?
Ignat Insarov said:
But what exactly are the requirements that the common automation tactics like `ring` and `auto` do not fulfill at this time but may fulfill in the future? Are these shortcomings going to be fixed in Coq itself, or is it going to be limited to the versions of said tactics that would be provided by mathcomp? What kind of _«technical incompatibilities»_ are there?
First, one has to distinguish between tactics that can be assumed to reliably close goals from a well-specified domain (e.g. `ring`) and tactics whose behavior is highly customizable and unpredictable (e.g. `auto`). The former is okay in principle, the latter less so. One shortcoming of tactics like `ring` is that they tend to be very "syntactic" in the way they regard the goal, whereas the type inference for statements in mathcomp often leads to terms which have distinct (but convertible) subterms. In order to work in this setting, the tactic has to identify these convertible terms. However, conversion can be expensive, so this is not entirely trivial to achieve.
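A tiny illustration of that "syntactic" behaviour, with a made-up wrapper definition instead of anything mathcomp-specific: `double x` is convertible to `x + x`, but `ring` reifies it as an opaque atom, so the goal has to be massaged first.

```coq
Require Import ZArith.
Open Scope Z_scope.

(* [double x] is convertible to [x + x], but [ring] does not unfold it:
   the unknown constant is reified as an atom, so the two sides are
   syntactically different polynomials. *)
Definition double (x : Z) := x + x.

Goal forall x : Z, double x = x + x.
Proof.
  intros x.
  Fail ring.              (* the convertible subterm is not identified *)
  unfold double. ring.    (* once the definition is exposed, [ring] succeeds *)
Qed.
```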
I see. But I thought `auto` could _(and should)_ be made reliable by giving it custom hints?
The point is that adding additional hints can cause `auto` to solve goals it didn't before, or cause it to no longer succeed (in reasonable time). Also, by default `auto` never actually fails, so the result of `tac; auto` can change in unpredictable ways. Something like `tac; first by auto` is a lot more acceptable, similarly for `by tac; auto`. Just never run `auto` without ensuring that it actually closes the goal you give it.
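A minimal sketch of that discipline (the lemma is just an illustration): wrapping `auto` in `by` makes the script fail immediately if `auto` ever stops closing the goal, instead of silently leaving subgoals behind.

```coq
From Coq Require Import ssreflect.

Lemma modus_ponens (P Q : Prop) : P -> (P -> Q) -> Q.
Proof.
  move=> p pq.
  by auto.   (* fails loudly unless [auto] fully closes the goal *)
Qed.
```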
Yes, so my idea was that you get a hint database that is cast in stone and presto, your reliable automation. Any addition to the hints would be a breaking change, but any addition of names to the interface of a module is a breaking change also — it is nothing new. Or one might even build a bespoke hint database for a given theorem, if the proof is long enough for it to be worthwhile. Then one can say `Proof with auto.` and write `tactic...` throughout.
Why not?
Has it been tried and found impractical?
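For concreteness, a minimal sketch of what I mean (the database name and the lemma are made up):

```coq
Require Import PeanoNat.

(* A bespoke hint database, "cast in stone" for a given development,
   combined with [Proof with] so that "..." discharges the routine steps. *)
Create HintDb le_facts.
Hint Resolve Nat.le_trans : le_facts.

Lemma le_chain n m p q : n <= m -> m <= p -> p <= q -> n <= q.
Proof with (eauto with le_facts).
  intros...
Qed.
```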
you may be overestimating how big of a deal `auto` is. It just saves a bunch of `apply`, which with ssreflect is one line anyway.
Why? `auto` can wield any Ltac code, once it is hinted at with `Hint Extern`.
ah, that's true. Nevermind.
Example:
Require Import Utf8. (* needed for the ∀ and → notations below *)
Inductive ℕ' := Z' : ℕ' | S' : ℕ' → ℕ'.
Hint Extern 4 => match goal with [H : ?t |- _] => injection H end : core.
Theorem successor_is_injective : ∀ n m : ℕ', S' n = S' m → n = m.
Proof with auto. idtac... Qed.
I mean how cool is that? Has anyone tried to solve through, say, _Logical Foundations_ with `auto` and `eauto` alone?
`auto` is barely more clever than enumerating all proof trees and checking that you hit the desired proof at some point.
With `Hint Extern`, you can add more logic, but then your problem is reduced to the design of an ATP that can handle _Logical Foundations_. I doubt any ATP can handle it as of today. You might come up with a set of `Hint Extern` hints that solves this particular problem, but I suspect this set of hints will be irrelevant for other problems (IMO, you'll have some overfitting here).
I solved through _Logical Foundations_ some time ago, and I can tell you that it would flatter my intelligence to think that automation which solves, say, 90% of _Logical Foundations_ would be hard to design. And solving the easiest 90% of problems automatically would make a big difference.
You cannot ignore the search time here.
You also need to take into account the fact that there are very different kinds of problems one can state in Coq, with wildly varying effectiveness of auto-like tactics.
On one side of the spectrum, you have proofs where the structure is already explicit in a "dependent" way. For instance, proving stuff by induction on a list followed by trivial reasoning.
The only intelligence needed is to know what to induct on (and even that is fairly obvious sometimes).
This is stuff common in CS-y areas, and a reason why Adam Chlipala uses auto with success.
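A minimal sketch of that shape (the lemma is just an illustration): the only decision is to induct on the list, and `auto` handles the trivial leaves.

```coq
Require Import List.

(* Induct on the list, then only routine reasoning remains. *)
Lemma app_nil_right (A : Type) (l : list A) : l ++ nil = l.
Proof.
  induction l; simpl; auto.
  (* only the cons case is left: rewrite with the IH, the rest is trivial *)
  rewrite IHl; auto.
Qed.
```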
Pierre-Marie Pédrot said:
This is stuff common in CS-y areas, and a reason why Adam Chlipala uses auto with success.
But even for symbol pushing, `auto` might diverge quite rapidly.
Said otherwise, the logical difficulty is in the design of the structures.
@Pierre-Yves Strub yes, indeed, but let's say it's still reasonable to use it.
But on the other side of the spectrum, which is common in mathematics, you have proofs where auto-like tactics have literally zero chance of success.
It will diverge well before finding something that could work, because of the combinatorics of the proof search.
As a matter of fact, no sane expert believes we can do real maths with ATPs.
(Obviously, the trick is in the definition of "real math", similarly to the recent progress in AI.)
This is literally the advantage of ITPs over ATPs. If ATPs could do that, nobody would use ITPs.
Many Ssreflect tactics heavily rely on HO unification which can be costly. Using them in a brute force manner would be a bad idea in general.
SAT is already NP-complete, and intuitionistic provability is PSPACE-complete. At some point it becomes obvious that one shouldn't repeatedly search for proofs that were already done.
I see.
But I am not really saying that a maximally automated kind of theorem prover should be wielded exclusively. My message is that the hard stuff may be encoded into Ltac, rather than being written out every time. Hopefully a carefully designed set of hints can solve a whole class of goals. In other words, I ask whether it may be worthwhile to transfer a significant deal of the interactive part into the design of tactics.
Is there a classical example of a problem that I could solve but `auto` could not, for any set of hints?
Or, you know, something like an enumeration of problems, each of which requires a different bespoke set of hints to be automatically solved?
In yet other words, it seems to me that the real choice is either to have:
Re https://coq.zulipchat.com/#narrow/stream/237977-Coq-users/topic/Automation.20and.20.60mathcomp.60.2E/near/244015157, I empathize with “it’s still hard to do proofs”. It takes a while to learn. I’m not sure if good automation for the easiest 90% would make it easier to solve the rest — it’d become quicker, but it’d still be hard. I’ve used Isabelle a bit ages ago, and Isabelle automation (not me) did prove some theorems, but when it was stuck I had no idea why.
Here are a few things that are harder to do with `auto`:
- `auto` solves a goal, then you refactor your definition and the proof breaks. You need to go back to a working version, use debugging to see what `auto` did, inline that into a proof script, then see where it goes wrong.
- `auto` does not see through definitions, for performance reasons; so your proof might become `unfold foo, bar, baz. auto.`, where the uninteresting thing is explicit and the actual proof is implicit. `apply lem1, lem2, lem3.` mostly works regardless of unfolding (see the sketch below).

Finally, you say "hopefully a carefully designed set of hints can solve a whole class of goals". I know some examples, but even when I used CPDT, it was never clear to me _how_ to design such tactics; the book only shows some examples, but I haven't seen any design principle.
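Here is a minimal sketch of the second point above, with a made-up wrapper definition: `auto` does not look through `is_zero`, so either the `unfold` becomes part of the script, or the unfolding has to be registered as a hint.

```coq
(* [is_zero] is just a wrapper around an equation, but [auto] keys its hints
   on the head constant of the goal and does not unfold it. *)
Definition is_zero (n : nat) : Prop := n = 0.

Lemma zero_is_zero : is_zero 0.
Proof.
  Fail solve [auto].      (* [auto] does not see through the definition *)
  unfold is_zero. auto.   (* the uninteresting step is now explicit *)
Qed.

(* Alternatively, move the unfolding into the hint database. *)
Hint Unfold is_zero : core.

Lemma zero_is_zero' : is_zero 0.
Proof. auto. Qed.
```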