Suppose that I want to write some Ltac/Ltac2 to automatically solve inequalities `n < m`, and I want one of the strategies to be the following:

- First, prove `?k < m`, where the evar `?k` is to be instantiated during the proof search along with the proof.
- Then, apply the lemma `n <= k -> k < m -> n < m`, which reduces the problem to proving `n <= k`.

How can I express this in Ltac or Ltac2?

First, prove `?k < m`, where the evar `?k` is to be instantiated during the proof search along with the proof.

A *proof* is a closed term, so the only way I can make sense of that is some *backtracking* over possible instantiations of `?k`, say `k`, s.t. `k < m`.

OTOH, that doesn't go with "delaying goals", which hints at keeping evars around, so I am not really sure what you are asking...

Backtracking since, given any such `k`, we are not done:

Then, apply the lemma `n <= k -> k < m -> n < m`, which reduces the problem to proving `n <= k`.

How is proving `n <= k` any simpler than finding `k` s.t. `k < m`? In other words, overall, aren't you just squaring the search effort by doubling steps?

(Maybe the example is *not* to be taken as strictly representative? But then I cannot pinpoint what you are after: just "delaying goals" is too generic.)

Are you perchance looking for `shelve`? This tactic does not solve a goal or admit it, but simply puts it to one side to be solved later. Shelved goals are either solved by unification, or brought back using the `Unshelve` command (which restores all shelved goals), or using the `unshelve` tactical: `unshelve tac` brings back all goals shelved by `tac`.
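For instance, on a tiny instance of the original goal, one could shelve the `n <= ?k` subgoal, solve `?k < m` first, and come back (a minimal sketch, assuming `Nat.le_lt_trans` and `Nat.lt_succ_diag_r` from the `Arith` library):

```
Require Import Arith.

Goal 1 < 2.
Proof.
  (* two goals: 1 <= ?k and ?k < 2; shelve the first, solve the second *)
  eapply Nat.le_lt_trans; [ shelve | apply Nat.lt_succ_diag_r ].
  (* solving ?k < 2 instantiated ?k := 1; bring back the shelved goal *)
  Unshelve.
  apply le_n.  (* the shelved goal came back as 1 <= 1 *)
Qed.
```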

Would something like `let k := fresh "k" in let km := fresh "km" in assert (exists k, k < m) as [k km]; [eexists; your_search|]; refine (Nat.le_lt_trans _ _ _ _ km).` help? There might be smarter ways to introduce the evar; this is the simplest I could think of.

I went for a slightly more robust definition:

```
Ltac tac proofsearch :=
  lazymatch goal with
  | |- ?n < ?m =>
      eapply (Nat.le_lt_trans n ?[?k] m);
      [ | proofsearch; fail ]
  | _ => fail
  end.
```
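As a quick sanity check of this definition, here is a hedged usage sketch where `eassumption` plays the role of `proofsearch` and the witness comes from a hypothesis (the names `k0`, `Hle`, `Hlt` are made up for the example):

```
Require Import Arith.

Goal forall n m k0 : nat, n <= k0 -> k0 < m -> n < m.
Proof.
  intros n m k0 Hle Hlt.
  tac ltac:(eassumption).  (* solves ?k < m from Hlt, setting ?k := k0 *)
  exact Hle.               (* remaining goal: n <= k0 *)
Qed.
```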

Thank you all very much for your answers! I will try these and maybe come back with more questions if I can't figure out how to solve my problem.

@Julio Di Egidio

To answer your question, yes I meant backtracking.

If one uses `eapply` with the transitivity lemma, it will generate two goals `n <= ?k` and `?k < m`; a full proof would be an instantiation of `?k` together with a proof of both goals. I want to change the order of these goals so that the goal `?k < m` comes first, and the first goal is not pursued until `?k` is instantiated.

The motivation behind the example was that in my experience, it is nontrivial to find `k` less than `m` - of course this cannot be done for arbitrary `m` because `m` may be zero, it can only be done using hypotheses in the context. But it is trivial to find `k` with `n<=m`, one can always instantiate `k=n` and use the reflexivity constructor for `<=`.

If the first goal that comes under focus is `n <= ?k`, it will be all too easy to automatically discover an instantiation of `k` together with a proof that the goal is solvable; therefore one has to carefully design the automation to choose an instantiation for `k` other than `n` itself, which is difficult because you cannot apply general proof techniques like "try to unify the goal with a constructor". On the other hand, successfully identifying a number `k` such that `k < m` is more likely to be helpful, as such a number must necessarily be identified with the help of relevant hypotheses in the context.

With the premise that I still have very little familiarity with writing tactics overall, so I hope this just helps narrow things down:

I want to change the order of these goals

There exists a `swap` tactic.
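If I understand the docs correctly, it would be used roughly like this (an untested sketch, assuming `Arith`; `all:` applies the reordering to all goals under focus):

```
Require Import Arith.

Goal 1 < 2.
Proof.
  eapply Nat.le_lt_trans.
  (* goals: 1 <= ?k, then ?k < 2 *)
  all: swap 1 2.
  (* goals: ?k < 2, then 1 <= ?k *)
  - apply Nat.lt_succ_diag_r.  (* instantiates ?k := 1 *)
  - apply le_n.                (* now reads 1 <= 1 *)
Qed.
```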

The motivation behind the example was that in my experience, it is nontrivial to find `k` less than `m` - of course this cannot be done for arbitrary `m` because `m` may be zero, it can only be done using hypotheses in the context. But it is trivial to find `k` with `n<=m`, one can always instantiate `k=n` and use the reflexivity constructor for `<=`.

That just doesn't look right to me, I don't see any significant difference between "finding" `n` s.t. `n <= m` and "finding" `n` s.t. `n < m`: the difficulty as well as the logic is exactly the same, e.g. systematically "try" values of `n` from `0` up to `m`, with `m` excluded iff the inequality is strict.

But the real problem for me is that "find `n` s.t. `n <= m`" is not sufficiently specific to even get to thinking actual code: I'd need a concrete theorem statement to think about/play with... (maybe others can do better, i.e. there is a general way to think about it: I just don't see it).

Julio,

I had a small typo in my comment which I have now corrected. I meant "But it is trivial to find k with n<=k". This probably does not answer your concerns but it is worth correcting.

More concretely, suppose the goal is

```
n m a b : nat
H0 : n + (S a) = b
H1 : (b - 1 + 1) = m
------
n < m
```

Here, `n` and `m` are Coq variables which are not ground terms expressed in the form `SSSS...0`, so it is not a feasible solution to systematically enumerate values between `n` and `m` in a loop. Sorry if this was unclear. Instead there are relevant hypotheses in the context from which it could be deduced that `?k < m`, for some `?k`, and the problem is to search for a reasonable choice of expression for `?k` (not necessarily a ground term).

In this situation it is reasonable to instantiate `?k := b - 1` because it is easy to prove that `b - 1 < m`, and then it must be proven that `n <= b - 1`. It is also possible to instantiate `?k = n` and immediately prove `n <= n` by reflexivity of `<=`, but this hint could be applied repeatedly and cause an infinite loop, making the control flow more pedantic and finicky.
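Spelled out, the intended instantiation might look like this (a sketch, assuming `Nat.le_lt_trans` from `Arith` and `lia` from the `Lia` library to discharge the two arithmetic subgoals):

```
Require Import Arith Lia.

Goal forall n m a b : nat, n + S a = b -> b - 1 + 1 = m -> n < m.
Proof.
  intros n m a b H0 H1.
  (* instantiate the middle term with b - 1, positionally *)
  apply (Nat.le_lt_trans n (b - 1) m).
  - lia.  (* n <= b - 1: from H0, b - 1 = n + a *)
  - lia.  (* b - 1 < m: from H1, m = b - 1 + 1 *)
Qed.
```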

Instead there are relevant hypotheses in the context from which it could be deduced that `?k < m`, for some `?k`

That is not something that can be "deduced": there is actually no `?k` in that context and you'd have to "introduce" one, e.g. with an `eapply` tactic. But more than that, it is maybe worth pointing out that (AIUI) the core language (Gallina, a dependently-typed total functional language) does *not* have any notion of evar ("hole"), nor of unification or backtracking; these only exist at the tactics-language level (if they exist at all, AFAIK there is nothing mandatory about it), and I think it is important to keep these two levels clearly distinct at all stages. [In fact unification also happens in type inference, so I am sure I am missing pieces, but I think the distinction between the core language and "the rest" is per se fundamental.]

That said, I think those hypotheses are not really germane to the problem: substitute terms (as with the `subst` tactic) and your goal boils down to *an inequation between two expressions* in nat:

```
n, a : nat
______________________________________(1/1)
n < n + S a - 1 + 1
```
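(That reduction, and the resulting goal, can be reproduced as follows; a sketch assuming the `Lia` library, which dispatches the residual inequation:)

```
Require Import Arith Lia.

Goal forall n m a b : nat, n + S a = b -> b - 1 + 1 = m -> n < m.
Proof.
  intros n m a b H0 H1.
  subst.  (* goal becomes: n < n + S a - 1 + 1 *)
  lia.
Qed.
```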

and the problem is to search for a reasonable choice of expression for `?k` (not necessarily a ground term)

But equational reasoning is not about "search", i.e. it is not the same as the satisfiability problem. Namely, the goal above is equivalent to the *universal* statement `forall n a, n < etc.`, while a satisfiability problem would be the *existential* statement `exists n a, n < etc.`...

There are evars and holes in Gallina; "tactic language" usually refers to how tactics get chained.

The thing is that "kernel terms" must not have evars, but the terms manipulated by tactics have the same representation, and I don't think it makes sense to call them different languages.

There are evars and holes in Gallina

Are there? Existential variables only appear in "Language extensions", and so do implicit arguments and so on; they are not part of the Core language.

But my point would be that, even if Coq didn't make that distinction neatly (I am in fact still not sure how type inference is "positioned" re all that), the distinction per se is *essential*, conceptually even before technically.

And for quite simple reasons, as mixing notions and things is never good: with an analogy, you could be using both theorems of analysis and of algebra in some problem solving, still trying to just write a theory of algebro-analysis would just be not a good idea; or, not layering and componentizing software; or, not applying the design principle of modularity; and so on... no?

I think my choice of the word "deduction" was misleading. I agree that you should not conflate "exists k, k<n" with "?k<n" using evars. I was thinking of deduction in the sense of logic programming: the elaboration engine and unification, typeclasses and such can be viewed as implementing a logic program to find instances for the metavariables. But this is a different kind of logical deduction than the one embodied by a Gallina term.

Julio Di Egidio said:

There are evars and holes in Gallina

Are there? Existential variables only appear in Language extensions, and so do implicit arguments and so on, they are not part of the Core language.

"Core language" is ambiguous here. The documentation splits Gallina into "Gallina" (which we can call Core gallina) and "Gallina extensions". But the "Core language" of Coq usually designates "kernel terms", which have no concrete syntax.

I agree that you should not conflate "exists k, k<n" with "?k<n" using evars.

But that is not what I have said: those two AIUI *are* essentially the same problem, as opposed to the universal statement per your example theorem.

That said, I do see how that eventually is "logic programming", but I am reluctant to use that label: firstly, we are rather talking meta-programming there; secondly, while "holes" and "unification" may be essential to type and argument inference (I still cannot pin that down), that is not per se logic programming proper, just bits of it, and there is no mandate to go fully logical either: IOW, and this is really all I am saying, that is just not the same problem nor the same layer.

@Pierre Courtieu

"Core language" is ambiguous here. The documentation splits Gallina

I'd have said the opposite: there is a chapter titled "Core language" I have been referring to, not so much "Gallina" which I had a hunch I should not mention at all... :) And I'm still of the advice that making that and such distinctions is crucial, whether Coq (or the refman) has succeeded at that or not...

The theorem I want to apply is

```
forall n m k, n <= k -> k < m -> n < k
```

If I use the `eapply` tactic, it will give two goals

```
n <= ?k
?k < m
```

Which is an existential problem, i.e. it is equivalent to

```
exists k, n <= k /\ k < m
```
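One direction of that equivalence can indeed be checked directly (a quick sketch, assuming `Arith`):

```
Require Import Arith.

Goal forall n m : nat, (exists k, n <= k /\ k < m) -> n < m.
Proof.
  intros n m [k [Hle Hlt]].
  exact (Nat.le_lt_trans n k m Hle Hlt).
Qed.
```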

(I should have said "as per your example *goal*".) Patrick, I think you are being too imprecise: what are you `eapply`ing that theorem **to**?

Say the goal is the last one I have proposed: `n, a : nat |- n < n + S a - 1 + 1`, which is equivalent to `forall n a, n < etc.`.

If you `eapply` the theorem to the conclusion of that goal (modulo the typo in the theorem statement), what you get rather is equivalent to `forall n a, exists k, n <= k /\ k < etc.`, so you have "introduced" an existential premise.

Not per chance those are called *existential* variables...

Julio Di Egidio said:

Pierre Courtieu

"Core language" is ambiguous here. The documentation splits Gallina

I'd have said the opposite: there is a chapter titled "Core language" I have been referring to, not so much "Gallina" which I had a hunch I should not mention at all... :) And I'm still of the advice that making that and such distinctions is crucial, whether Coq (or the refman) has succeeded at that or not...

Sorry, @Pierre Courtieu, I have misread that "splits" as "spits", so I have missed your point. Thanks for explaining that Gallina = Core + Extensions.

But the "Core language" of Coq usually designates "kernel terms", which have no concrete syntax.

That (what a "kernel term" vs just a "(Gallina?) term" might be, indeed what "the kernel" exactly even is), as well as many other things (e.g. where type inference fits), I still do not know: but I wouldn't want to hijack this thread (too much)...

I think it is quite right to consider that kernel terms do not have a concrete input syntax in Coq. For good reason. Coq's kernel is built around a "safe environment", i.e. a map of type roughly `name -> (term * type)`. All functions modifying this map are carefully designed to be safe: terms are added only after typechecking validation.

The user only writes "pre-terms" (generally with notations, implicits, missing type annotations, etc, but you can disable all these), and the whole interpretation machinery of Coq builds kernel terms (or fails to do so).

Pre-terms are unsafe. It may happen that the term built from a pre-term is not the one you thought, but never mind: if it is accepted then it **has a** type and can only be used with this type.

The interpretation machinery is rather complex and I don't know it very well. One of the interpretation phases is called pre-typing, and I think the filling of implicits and type inference occur during this phase. I don't know the exact interleaving of all the phases.

@Pierre Courtieu

The user only writes "pre-terms" (generally with notations, implicits, missing type annotations, etc, but you can disable all these), and the whole interpretation machinery of Coq builds kernel terms (or fails to do so).

Thanks. I think I am with you that far: I am now thinking about it as the syntax of the language (Gallina) vs its operational semantics... Is it at least sensible/correct to think that the Core language is (exactly?) coextensive with kernel terms/operations?

I don't really know what the word "coextensive" means :-). If you mean "can any well-typed kernel term be built from Gallina?": I think that is clearly the purpose of the Core language, but I don't know if it is true or whether some strange terms are not definable that way (excepting limited-memory problems).

Operational semantics is technical terminology, at least as far as software is concerned... :-P

The answer to my question is yes, except for the "exactly" part which I'd still have to verify:

<< those features are translated into the core language (the Calculus of Inductive Constructions) that the kernel understands >> << the kernel is a type checker >>

<< This separation between the kernel on one hand and the elaboration engine and tactics on the other follows what is known as the de Bruijn criterion >>

https://coq.inria.fr/doc/V8.19.0/refman/language/core/index.html#core-language

Last updated: Jun 23 2024 at 03:02 UTC