I always find myself in a position where whatever property I need cannot be proven directly; instead I always need an additional layer of generalization, for instance (ignoring the details of what is being proved):

```
Lemma scoped_make_pim m n (p: ScopedContext n) (T: Term):
Scoped T (m + n) ->
Scoped (@make_pi (scoped_context_to_pi_order p) T) (m + n - scoped_context_size p).
Proof.
(* stuff *)
Qed.
Lemma scoped_make_pi n (p: ScopedContext n) (T: Term):
Scoped T n ->
Scoped (@make_pi (scoped_context_to_pi_order p) T) (n - scoped_context_size p).
Proof.
have := (@scoped_make_pim 0 n p T).
by rewrite add0n.
Qed.
```

An observation I made was that in this particular case, ScopedContext is inductively defined as `nat -> Type`, and at some point this `n` changes to `n.+1`, so I need another variable to absorb the `.+1` part. But this pattern is more common in my code than I would like it to be. Is there a fundamental reason why I end up having to prove these general lemmas which I won't use, and are there general guidelines for restructuring my definitions to avoid that?

I don't find that situation shocking. Induction sometimes requires more information than the theorem you ultimately care about.

One common case is where your function wraps an auxiliary recursive function (as happens when using accumulators in the recursion); then your induction will have to be about the recursive function, not the wrapper.
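A classic, self-contained instance of this (not from the code above): a `rev` defined via an accumulator. The lemma about the auxiliary function must be generalized over the accumulator before induction works:

```
Require Import List.

Fixpoint rev_acc {A} (l acc : list A) : list A :=
  match l with
  | nil => acc
  | x :: l' => rev_acc l' (x :: acc)
  end.

(* The wrapper we actually care about. *)
Definition rev' {A} (l : list A) : list A := rev_acc l nil.

(* Induction must go through the auxiliary function, with the
   accumulator generalized; a direct proof about rev' gets stuck. *)
Lemma rev_acc_app {A} (l acc : list A) :
  rev_acc l acc = rev_acc l nil ++ acc.
Proof.
  revert acc; induction l as [|x l IH]; intro acc; simpl.
  - reflexivity.
  - rewrite IH, (IH (x :: nil)), <- app_assoc. reflexivity.
Qed.
```

The generalized `acc` plays the same role as the extra `m` above: it absorbs the way the argument changes in the recursive call.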

At least that's necessary in "follow-your-nose" induction: if a function is recursive on argument X, simple proofs about its properties will proceed by induction on X. I'm not sure I've seen this described, and it doesn't always work, but it's some sort of "baseline" strategy.
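A minimal illustration of that baseline strategy (standalone, not from the thread): `++` recurses on its first argument, so the "follow-your-nose" proof inducts on that argument and goes through directly:

```
Lemma app_length' {A} (l1 l2 : list A) :
  length (l1 ++ l2) = length l1 + length l2.
Proof.
  (* ++ is recursive on l1, so induct on l1. *)
  induction l1 as [|x l1 IH]; simpl.
  - reflexivity.
  - rewrite IH. reflexivity.
Qed.
```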

OTOH, "strengthening the induction hypothesis" *is* a well-known strategy, and you can google that. For instance https://mathoverflow.net/q/31699

https://www.ma.imperial.ac.uk/~buzzard/xena/alectryon/document.minimal.html is interesting, because it shows an alternative: fixpoint proofs. (Which seems to relate to one of the answers). I think such proofs can be somewhat fragile, but I wonder if they might be easier to discover and refactor

I think this (i.e. following the function's structure) was more or less folklore already when we wrote this paper about it at TPHOLs 2002: http://cedric.cnam.fr/~courtiep/papers/barthe-courtieu-tphol2002.pdf. This is the idea behind `functional induction` and Equations' `funelim`. But generalization has always been considered "human intervention", though people are probably now looking for ways to have it suggested by deep learning techniques.

I am aware of the follow-the-induction / follow-the-recursion approach, but this case is neither. Sometimes I get the feeling that if everything needs generalization, then maybe I am doing things the wrong way.

The generalization stuff is really standard, but maybe if you show `ScopedContext` and `Scoped` we can see something? For instance, maybe `Scoped` itself should be about `<=`?

Well, ignoring the previous lemmas because I did some tweaking to the data structures... `ScopedContext` is defined as follows:

```
Inductive ScopedContext: nat -> Type :=
| Cempty (n: nat): ScopedContext n
| Ccons (n: nat) (t: Term) (Γ: ScopedContext (S n)): Scoped t n -> ScopedContext n.
```

and `Scoped` itself I defined so that it is easy to use on terms, as follows:

```
Definition Scoped t (n: nat) : Prop := forall σ, t.[upn n σ] = t.
```

That is to say, if a term is scoped then shifted substitutions do not affect it.

The way I defined `Scoped` is exactly how I wished to use it when it is in assumptions: I just want to use it to rewrite stuff.
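A minimal sketch of that rewriting use, assuming the autosubst-style `_.[_]` and `upn` notations from the snippet above (the goal here is hypothetical, just for illustration):

```
Goal forall (t : Term) (n : nat) σ,
  Scoped t n -> t.[upn n σ].[upn n σ] = t.
Proof.
  move=> t n σ HS.
  rewrite /Scoped in HS.
  (* HS : forall σ, t.[upn n σ] = t — usable directly as a rewrite rule *)
  by rewrite !HS.
Qed.
```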

Does `Scoped t n` imply `Scoped t m` for `m >= n`, or something like that?

Yes, that is actually a lemma I had:

```
Lemma scoped_lt {T: debruijn} (t: T) a b:
a <= b -> Scoped t a -> Scoped t b.
```

It is handy, but I cannot see why that is related in this case.

I am not sure if there is a nicer way to define `Scoped` that gives me the same rewriting capability but easier proofs; I had to come up with this definition myself.

why not define it as

```
Inductive ScopedContext (n : nat) : Type :=
| Cempty : ScopedContext n
| Ccons (t: Term) (Γ: ScopedContext (S n)): Scoped t n -> ScopedContext n.
```

? Not sure it would help with generalization but it should make destruct / match nicer to use on it
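For instance, with `n` as a parameter rather than an index, a plain `match` compiles without dependent elimination. A hypothetical size function, just for illustration:

```
Fixpoint context_size {n : nat} (Γ : ScopedContext n) : nat :=
  match Γ with
  | Cempty => 0
  | Ccons _ Γ' _ => S (context_size Γ')
  end.
```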

You are absolutely right; it was one of the earlier iterations where `n` had to be an index. I guess it needs another round of simplification.

and fixing the weird order of `Scoped t n` and `Γ`

Did you actually manage to prove `scoped_make_pim`? Now that I think about it, the `n - scoped_context_size p` doesn't look right. Or maybe I'm mis-guessing what `make_pi` does.

Now this property is actually wrong. As previously mentioned, I had to rewrite `ScopedContext` differently to fix other issues, so the new props look like:

```
Lemma scoped_make_pim m n (p: ScopedContext n) (T: Term):
Scoped T (m + n + scoped_context_size p) ->
Scoped (@make_pi (scoped_context_to_pi_order p) T) (m + n).
Proof.
(* stuff *)
Admitted.
Lemma scoped_make_pi n (p: ScopedContext n) (T: Term):
Scoped T (n + scoped_context_size p) ->
Scoped (@make_pi (scoped_context_to_pi_order p) T) n.
Proof.
have := (@scoped_make_pim 0 n p T).
by rewrite add0n.
Qed.
```

I am sorry for any confusion I have created; I was posting here while still working on the same codebase.

Now that it's fixed, you actually don't need the generalization. `scoped_make_pi` is provable directly by induction on `p`.

Oh, you are totally right; it was one of the older versions of ScopedContext that had this problem. I had to go through many rewrites!

I will have to keep looking for another case where generalization is necessary and come back with a follow-up here. I don't think I will change ScopedContext, aside from Gaetan's suggestion, so it becomes:

```
Inductive ScopedContext (n: nat): Type :=
| Cempty : ScopedContext n
| Ccons (t: Term): Scoped t n -> ScopedContext (S n) -> ScopedContext n.
```

You can avoid some dependencies by also indexing the `ScopedContext` with the final number of free variables (instead of only the starting number). That way you can get rid of arithmetic operations.

That was the old implementation, but then terms were placed in reverse order, and that made some induction proofs harder. Unless I misunderstand what you are proposing

```
Inductive ScopedContext (n : nat) : nat -> Type :=
| Cempty : ScopedContext n n
| Ccons {m} (t : Term) : Scoped t n -> ScopedContext (S n) m -> ScopedContext n m.
```

Oh yes, that is almost what I had in the beginning, except that it was reversed: one parameter signifying the start and another signifying the end. I cannot easily describe why the proofs didn't go through, but they just didn't, and I had to eliminate one of the two ends and then put things in the correct order.

Is there a good reference for binding in dependent types? Something like POPLmark?

I think there is probably no canonical up-to-date source. The Autosubst 2 paper related work has a brief overview as of 2018: https://www.ps.uni-saarland.de/Publications/documents/StarkEtAl_2018_Autosubst-2_.pdf

it seemingly boils down to: use (P)HOAS or locally nameless or whatever a code generator gives you

I was just using a locally implemented Autosubst. It was nice and everything, but it did not provide automation for the kind of de Bruijn equalities you get in indexed families (maps and folds, etc.).

Autosubst 2 also supports indexed families

it doesn't give you general maps and folds, but you could add them on top, and these days it seems you're mostly using them to implement renamings/substitutions anyway?

Personally, I prefer autosubst 2 (for de Bruijn with parallel substitutions) over locally nameless and over PHOAS. The last one requires axioms and the tooling for it doesn't seem maintained, and for what I (used to) do (logical relations) you need parallel substitutions anyway.

Autosubst's decision procedure for binding lemmas is also a great upside, and the structure you get is more robust (since it coincides with the category of contexts).

OTOH, locally nameless does have more tooling in Coq I think.

the only code generator for locally nameless I am aware of is Ott + https://github.com/plclub/lngen

so in other words, the binding tool support situation is depressingly bad, we at least have Autosubst 1 on the way to the Platform: https://github.com/coq/platform/issues/351

I have a non-empty list `(k::t)` and two hypotheses: `H1: k <= list_max (k::t)` and `H2: list_max (k::t) <= k`.

From H1 and H2, is it possible to derive `list_max (k::t) = k`?
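Yes; that is exactly antisymmetry of `<=`. A minimal sketch using the standard library's `Nat.le_antisymm` (lemma name here chosen for illustration):

```
Require Import Arith List.

Lemma list_max_head_eq (k : nat) (t : list nat) :
  k <= list_max (k :: t) ->
  list_max (k :: t) <= k ->
  list_max (k :: t) = k.
Proof.
  intros H1 H2. apply Nat.le_antisymm; assumption.
Qed.
```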

Last updated: Jun 23 2024 at 04:03 UTC