I have yet to encounter circumstances where small-scale reflection would be helpful, other than being able to use `rewrite` with lemmas.

I suspect it is me not knowing how to use them properly.

Could you explain how to use small-scale reflection effectively? Explanation with a concrete example would be great!

Maybe it is linked to the fact that you are using a lot of real numbers / abstract rings / other algebraic structures not offering effective computation. Small-scale reflection is probably more useful when manipulating things like natural numbers / integers / booleans / other things equipped with computation.

To take an example from a nearby stream, `(2 + n%:R != 0)%R` is non-trivial to prove, whereas `(2 + n != 0)%N` simply reduces to `true = true` by computation.
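To make that concrete, here is a minimal sketch (assuming mathcomp is available): the natural-number version is closed by `by []` because `2 + n` unfolds to `n.+2`, whose comparison with `0` is decided by computation.

```
From mathcomp Require Import all_ssreflect.

(* 2 + n unfolds to n.+2, and n.+2 == 0 computes to false,
   so the disequality holds by conversion alone. *)
Lemma two_plus_n (n : nat) : (2 + n != 0)%N.
Proof. by []. Qed.
```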

I see. I am curious, does the computational benefit also translate to structures like finite groups? I saw lots of work done towards that in the math-comp libs.

If you manipulate a finite group of a given order, maybe, but if it is more abstract (a finite group of some order `n : nat`), probably not that much. However, you may still manipulate natural numbers and booleans in the process and enjoy their computational behavior.

I see, thank you!

abab9579 has marked this topic as resolved.

abab9579 said:

I have yet to encounter circumstances where small-scale reflection would be helpful, other than being able to use `rewrite` with lemmas.

I suspect it is me not knowing how to use them properly.

Could you explain how to use small-scale reflection effectively? Explanation with a concrete example would be great!

I've been thinking about this too, although I'm still a neophyte at SSReflect. Here is what I have thought so far.

When the challenge is to prove a theorem about a concrete object, such as the list `[:: 3; 4; 6; 8]`, it is easiest to prove the theorem when the definition of the property is given by a computable function, e.g., a function `monotonic L` which goes through the list two elements at a time, compares adjacent elements, and returns `true` if all elements of the list are in increasing order. To prove that this list is monotonically increasing, the proof is one single command: you apply the function and observe that it returns `true`. It would be much more difficult if `monotonic` were defined as an inductive type, and so were `leq` and so on, all the way down; one would have to manually prove that `3 <= 6` by applying the inductive constructor for `leq` many times. This could be scripted in Ltac or otherwise automated with `auto`, but it is still unpleasant.
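A minimal sketch of this, assuming mathcomp; the name `monotonic` is hypothetical, chosen to match the prose:

```
From mathcomp Require Import all_ssreflect.

(* Boolean test: every adjacent pair of the list is increasing. *)
Fixpoint monotonic (l : seq nat) : bool :=
  match l with
  | x :: ((y :: _) as tl) => (x <= y) && monotonic tl
  | _ => true
  end.

(* On a concrete list the whole proof is computation. *)
Lemma mono_example : monotonic [:: 3; 4; 6; 8].
Proof. by []. Qed.
```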

One does not have to have a *fully* concrete representation of the thing in order for small-scale reflection to be useful. It is already useful when there is enough concrete information for the computation to proceed at least one step. If we define

```
Fixpoint leq (n m : nat) : bool :=
  match n with
  | O => true
  | S n' => match m with
            | O => false
            | S m' => leq n' m'
            end
  end.
```

then theorems like `3 + n <= 3 + m <-> n <= m` are validated immediately by simply letting the computation of `3 + n <= 3 + m` run as far as it can before it fails to progress further (note the concrete part must sit where the recursion consumes it: `3 + n` unfolds to `n.+3`, whereas `n + 3` is stuck). Similarly, if it is known that `P` is true, we can simplify `P && Q` to `Q` just by letting the computation of `&&` run as far as it can with the given information.
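Both points can be sketched in two one-line lemmas (assuming mathcomp; lemma names are made up for the example):

```
From mathcomp Require Import all_ssreflect.

(* Computation peels the three successors off both sides,
   leaving exactly the assumption n <= m. *)
Lemma add3_le (n m : nat) : n <= m -> 3 + n <= 3 + m.
Proof. by []. Qed.

(* Once P is known true, rewriting it lets && compute away. *)
Lemma andP_simpl (P Q : bool) : P -> P && Q = Q.
Proof. by move=> ->. Qed.
```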

However, proving things about monotonic functions is then difficult, because one must reason by appeal to the structure of the algorithm. For example, proving that the sum of monotonic lists is monotonic, or that the composition of monotonically increasing functions is again monotonic, etc.; all of these would have to be proved by induction on the list, because the function `monotonic` is defined by recursion on the list. So we want a more elegant logical formulation, `forall x y, x <= y -> L[x] <= L[y]`, about which it is much easier to prove abstract results.

Another benefit of reflection is that dependent types can get really complicated and messy in practice. When I first started out in Coq I constantly got dependent-type errors when I tried to rewrite things, because I had not sufficiently abstracted or generalized the arguments to my functions or predicates. Using Booleans seems to cut down on the complexity of dependent or inductive types.

For example, say I want to prove `n <= m -> P n m` and I want to reason by cases on whether `n = m` or `n < m`. I might have proven a theorem `(n <= m) -> (n = m) + (n < m)`, where all of these are defined inductively. But if I am building a proof of `P n m` that depends on the proof of `n <= m`, I cannot just freely swap it out for a term of type `(n = m) + (n < m)`; these are not the same type and I will run into errors. Here, it seems we are less likely to encounter complex dependent-type errors if we reason by cases on the Boolean value of `leq (S n) m`, because this doesn't involve trying to change the type of the assumption. Similarly to the example above, if `P` is known to be true, then simplifying `P && Q` to `Q` doesn't change the type of this assertion if it's Boolean, but we may not easily simplify `P /\ Q` to `Q` in a dependent-type situation if the type of our goal or other assumptions depends on the product type. I used to frequently run into problems with the tactics `pattern`, `generalize`, and `destruct`, because the type of my proof term depended in a nontrivial way on the term I was trying to generalize. Channeling things through Booleans decouples that dependency.
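As a sketch of the Boolean version of this case split (assuming mathcomp, whose `leq_eqVlt` states `(n <= m) = (n == m) || (n < m)` as an equality of booleans): since both sides are booleans, rewriting with it is an ordinary rewrite, with no dependent-type gymnastics.

```
From mathcomp Require Import all_ssreflect.

(* A boolean case split: n <= m is literally equal to the boolean
   disjunction, so a single rewrite turns one into the other. *)
Lemma le_cases (n m : nat) : n <= m -> (n == m) || (n < m).
Proof. by rewrite leq_eqVlt. Qed.
```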

Closely related to this is that, because of the simplicity of Booleans, subtypes defined using Boolean-valued predicates `{x : A | P x = true}` automatically satisfy good properties. For example, UIP holds for `bool`, so two elements of `{x : A | P x = true}` are equal iff they are equal as elements of `A`, and you can prove this in general with one single lemma about Boolean subtypes, regardless of `A` and `P`. (This is `val_inj` in the library.)
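A small sketch with mathcomp's ordinals, which are exactly such a Boolean subtype (naturals `i` packaged with a proof of the boolean `i < n`):

```
From mathcomp Require Import all_ssreflect.

(* Two ordinals are equal as soon as their underlying naturals are:
   the boolean bound proofs never get in the way, thanks to val_inj. *)
Lemma ord_val_eq (n : nat) (i j : 'I_n) : val i = val j -> i = j.
Proof. exact: val_inj. Qed.
```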

I think there's a lot of literature on rewriting systems as well, so if you're scripting or automating proofs you can take advantage of this substantial theory more easily if everything you have is rewritable. In some sense you are reducing the very complex logic of Coq down to mere equational logic between Booleans (while adding many new function symbols to that logic...). I would imagine that if you have a lot of theorems that are all of the form `forall x1 x2 ... xn, t1 = t2`, a lot of equational logic becomes applicable.

Let me also give one use of Boolean reflection in the library I find really elegant.

Say I have a goal of the form:

```
if (x < y) then ... (x >= y) && P x y -> ... (x < y) ... min x y ... else if (x < y) ... F (max x y) z
```

Here the point is to illustrate that the goal depends in multiple places on the Boolean truth values (x < y), (x >= y) and the numerical values min(x,y) and max(x,y).

There is a theorem in the library, `leqP`, such that if you use the tactic `case: (leqP x y)` in this situation, it will immediately generate two subgoals depending on whether `x <= y` or `y < x`. In each case it will add the corresponding assumption to the context, and all occurrences of `x <= y` in the goal will be replaced with the constant `true` and all occurrences of `y < x` with `false` (respectively, `false` and `true`). Furthermore, `minn x y` will be replaced with `x` and `maxn x y` with `y` (respectively, `minn x y` with `y` and `maxn x y` with `x`). All this in a single command! Then both goals will immediately simplify automatically by reducing the pattern matching on `if true` and `if false`. Something similar is perhaps doable without the use of small-scale reflection, but the translation of the idiom seems like it would be difficult.
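A tiny sketch of this in action (assuming a recent mathcomp, where the `leqP` specification also covers `minn` and `maxn`): one `case` rewrites `minn n m` to `n` in the first branch and to `m` in the second, so each branch closes by reflexivity.

```
From mathcomp Require Import all_ssreflect.

(* case: (leqP n m) rewrites minn n m to n or to m in the two
   branches, leaving only trivial equalities to discharge. *)
Lemma minn_cases (n m : nat) : minn n m = n \/ minn n m = m.
Proof. by case: (leqP n m) => _; [left | right]. Qed.
```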

Last updated: Jul 25 2024 at 16:02 UTC