Hi, I wonder what literature exists about the underlying metatheory for the automatic generation of Boolean equality for decidable inductive types. In particular, I realized recently that the syntactic translations developed in Nantes (parametricity, forcing, etc.) can be modified relatively easily to produce a Boolean equality working for the whole Calculus of Inductive Constructions and I wondered what is already known about the subject?
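To fix ideas, here is a minimal illustration (not from the thread) of what Coq's built-in command already derives for a simple first-order inductive type; the generated names follow the standard `_beq`/`_eq_dec` convention:

```coq
Inductive color := Red | Green | Blue.

Scheme Equality for color.
(* generates, among internal lemmas:
     color_beq    : color -> color -> bool
     color_eq_dec : forall x y : color, {x = y} + {x <> y} *)
```

The question above is about the metatheory behind extending this kind of generation to the whole Calculus of Inductive Constructions.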

there's https://hal.inria.fr/hal-01897468v2

Gaëtan Gilbert said:

This looks very impressive! So many lemmas automatically generated; that was one of my dreams! What is the exact status? What is the exact extent? From a naive perspective, it seems worth comparing/merging/unifying it with what `Scheme` is (modestly) doing and what `Derive` does in Equations. I'm sure everyone would benefit _a lot_ from a unified, centrally documented system of automatic generation of schemes.

(On my side, I studied Boolean equality and I will make a draft PR making their generation by `Scheme Equality` a bit more robust and general, but this might well be subsumed by what Elpi already does.)

Together with others in Sophia we are improving it, in particular to generate smaller proof terms. Hopefully this will also impact inversion.

We also hope to cover sigma types with an irrelevant component.

Would it make sense to turn all scheme commands into plugins?

The only issue I can foresee is calling the induction schemes after a definition, etc.

But we should be able to make a general hook for plugins to run vernac after these commands?

That way, plugins like Equations can automatically call Derive by using the hook

and our scheme plugin could be something more modular that can easily be replaced by better versions in the future

+1 on moving this to elpi, also to make them more maintainable

you have three options then, in my view:

- migrate Coq-Elpi to Coq's repo alongside Ltac2, or
- put all Scheme stuff into its own repo that depends on both Coq-Elpi and Coq (akin to Bignums, I guess), or
- put all Scheme stuff in Coq-Elpi's repo

All options likely lead to dependency management work...

the following are Elpi's current dependencies, which I think are not trivial...

```
"ocaml" {>= "4.04.0"}
"camlp5" {< "7.99"}
"ppxlib" {>= "0.12.0"}
"re" {>= "1.7.2"}
"ppx_deriving" {>= "4.2"}
"ANSITerminal" {with-test}
"cmdliner" {with-test}
"dune" {>= "2.2.0"}
"conf-time" {with-test}
```

I personally advocate for merging the parametricity plugin into Coq and using that everywhere

IMO it's better to do stuff in OCaml for this kind of low-level manipulation (this is not a heuristic tactic, this is a syntactic translation)

It seems to me that unless you're a Coq dev, there's a bigger barrier to entry to OCaml than to higher-level languages. I don't think anything in elpi restricts it necessarily to heuristic tactics.

At the moment, I don't see much advantage in merging coq-elpi, the deps are huge and it is still "incomplete" (does not cover full CIC)

Derivations in Elpi are, so far, "Ltac-free". In some cases we are experimenting with calling tactics, but that is mostly to speed up experimentation.

even if elpi is not done yet, I stand by the long-term vision (I “grew up” with Guy Steele’s “Growing a Language”)

@Enrico Tassi my point was not about Ltac, rather that the parametricity interpretation is a fully deterministic, total process

About hooks, I tend to agree with that. In Elpi I have quite some code so that I can "parse" stuff like `derive Inductive F params := K1 .. `

and completely take over the declaration, e.g. open a module F and place all the derived stuff in there. If that were a sort of protocol and one could just set a hook to be called once the inductive is declared, and before the module is closed...

I know, and I'm very happy it is ;-)

but then tying it to the kernel OCaml AST is a strong static guarantee

whenever somebody adds a constructor or plays dirty tricks with universe instances you'll see it

Oh, you mean, is it fully deterministic by comparison with... what? Scheme Eq?

Hum, but that is what CI is for, no? It has a test suite IIRC

by comparison with any horrible Proper typeclass search, say

I’m also confused, is the point that logic programming is overkill for a syntactic translation?

yes

ha ha, define logic programming then ;-)

I don't want to rant against LP here anyways

I agree backtracking serves no purpose

I'd just like to advocate that serious syntactic interpretations should have the lowest impedance

serious as in, we the Coq devs are committed to maintaining it

impedance for who?

and low impedance means the least amount of abstractions w.r.t. the kernel AST

the coq devs

Ok, if it is about speed and maintenance, I agree with that. But for example, there is not just "one parametricity translation"

Counterpoint: there are few Coq devs, so they should focus on making themselves unnecessary.

it's not so much speed, rather I am concerned about abstraction layers getting in the way of nasty low-level details

typically, universes

I consider that a feature ;-)

Nobody wants to see how universes are implemented in Coq

indeed

so it's the burden of the kernel maintainers to design a translation that behaves correctly wrt universes

And you know it better than me

otherwise you'll see spurious universe constraints pop up

and in the long run, the MetaCoq-vs-Iris catastrophe is in the pipeline

for incomplete / heuristical program interpretations I am less concerned

I see what you mean now about universes, and I have little experience with explicit universe binders (since I don't have them yet, but maybe I will soon, since Enzo is helping).

I got the metacoq reference but that’s another thread :-)

as a user, it's a disgrace that `derive equality` is a hardcoded, inflexible builtin and nobody has yet patched it to produce Boolean equality. Using a good higher-level language would make it easier for people to maintain/extend/customize it.

that’s what I like about the ideas in Lean 4, and about elpi

I do feel pain for our poor users

Indeed, this is why I'm saying that there is not just one parametricity translation...

but that one is not total, because users can tweak stuff, so yeah I agree with you

I’m not married with logic programming, but it’s clearly a sensible DSL for an elaborator that does backtracking, and I’d rather have 1 DSL (which can do backtracking and LP) than 2.

we've been stuck for 20 years with broken induction principles for nested inductive types...

and even if you fixed those, that wouldn’t help users that need their own variations anyway
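A minimal sketch of the nested-inductive problem mentioned above, for rose trees (names such as `rose_deep_ind` are hypothetical, not from the thread): the automatically generated principle gives no hypothesis about the subtrees hidden inside the list, so a useful "deep" principle has to thread the predicate through `Forall` by hand, with a nested fixpoint.

```coq
Require Import List.

Inductive rose := Node : list rose -> rose.

(* The automatic rose_ind only offers:
     forall l : list rose, P (Node l)
   which says nothing about the elements of l.
   A deep induction principle instead requires P on every subtree: *)
Lemma rose_deep_ind (P : rose -> Prop)
  (H : forall l, Forall P l -> P (Node l)) :
  forall r, P r.
Proof.
  fix IH 1.
  intros [l]; apply H.
  (* inner fixpoint over the list of subtrees *)
  revert l; fix IHl 1; intros [|r l]; constructor.
  - apply IH.
  - apply IHl.
Qed.
```

This nested-fix pattern is exactly what the generic schemes have historically failed to produce, which is what the "deep induction" work discussed later in the thread addresses.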

@Paolo Giarrusso I don't disagree with you on a high-level point

my claim was about *serious* syntactic interpretations

anyways

please continue developing it in elpi, that's solving a lot of pain points indeed

for everybody confused about metacoq-vs-iris see https://coq.zulipchat.com/#narrow/stream/237656-Coq-devs.20.26.20plugin.20devs/topic/Universe.20incompatibility.20from.20MetaCoq.20and.20Iris

The discussion goes in several directions and we'll certainly need to have a technical discussion at some point (for instance, on the exact choice of parametricity translation used and on the exact coverage of each algorithm). But when I realized the amount of features already implemented in Elpi (and secondarily in Equations), my point became more about how to propose a unified front-end over which we can continue to let things grow, in coverage of the CIC and in diversity of translations, integrating contributions from various origins.

I will hereafter rely on the `Scheme` syntax, but it could be any other syntax. The question is: can we imagine a system of registration of translations by plugins which provides flags of the form `Set Foo Scheme` to automatically generate a scheme for each new definition, as well as commands of the form `Scheme id := Foo for bar`, where each new kind of scheme `Foo` possibly comes with flavors (e.g., for the Boolean equality I studied, it can be of the `forall A B eq_A eq_B, ...` or `forall A eq_A B eq_B` forms, or it can assume proof irrelevance and drop arguments in Prop, or it can support the full cumulative hierarchy of universes or not, etc.)?
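To make the two flavors concrete, here is a hand-written sketch for pairs (the names `prod_beq_grouped`/`prod_beq_interleaved` are hypothetical illustrations, not generated names): the two forms compute the same thing and differ only in whether the type parameters are grouped before their equality testers or interleaved with them.

```coq
(* Flavor 1: forall A B eq_A eq_B, ... *)
Definition prod_beq_grouped (A B : Type)
  (eq_A : A -> A -> bool) (eq_B : B -> B -> bool)
  (p q : A * B) : bool :=
  andb (eq_A (fst p) (fst q)) (eq_B (snd p) (snd q)).

(* Flavor 2: forall A eq_A B eq_B, ... *)
Definition prod_beq_interleaved (A : Type) (eq_A : A -> A -> bool)
  (B : Type) (eq_B : B -> B -> bool)
  (p q : A * B) : bool :=
  andb (eq_A (fst p) (fst q)) (eq_B (snd p) (snd q)).
```

The interleaved form composes more easily when a later parameter's tester depends on an earlier parameter, which is one reason such flavors need to be named and registered distinctly.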

As discussed above, delegating the actual implementation to plugins would allow more flexibility in the implementation language they use, and at the same time provide a more distributed responsibility in maintaining the code, while still retaining a unified view and the possibility to eventually propose new algorithms or better implementations of existing ones.

one of the problems I see with extending the current approach is that we can hit diamond dependencies with scheme registration quite easily

so you'd need to ensure that the plugins used to generate one part must coincide globally or something

You mean the dependency, e.g., of `eq_dec` on `beq`? I was thinking of having several variants of the same scheme. For instance, there are already several possible variants of `_beq` schemes. So, yes, this comes with the problem of naming a scheme unambiguously in all its small details.

To come back to the variant of parametricity that produces Boolean equality: is it documented somewhere? Is it supposed to be folklore? @Enrico Tassi, your paper and the Elpi implementation seem to build Boolean equality independently of the generic parametricity translation, but at the same time there seems to be an implicit statement in the air, from the paper and also from @Pierre-Marie Pédrot, that even Boolean equality is a particular case of parametricity (which is the conclusion I also arrived at). If this is not already documented somewhere, it is certainly worth making it better known.

I use unary parametricity in order to get deep induction principles which in turn are necessary to prove the correctness of equality tests compositionally and keep their proofs opaque.

Somewhat related question from reading the paper: are "no confusion" principles the same as inversion principles / type-specific path-induction principles? (It seems like decidable equality is the fixpoint of applying the inversion principle recursively?)

~~(At least in Agda papers,) "no confusion" is more specific: it asserts that different constructors have disjoint ranges. "Inversion" should also include that each constructor is an injective function.~~

Sorry, I misremembered; you're right: https://dl.acm.org/doi/pdf/10.1145/2628136.2628139 has this, which covers both disjointness and injectivity

[image attachment: image.png]
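For readers without the paper at hand, here is a hand-rolled sketch of a "no confusion" principle for `nat` (the names `NoConfusion_nat`/`noConfusion_nat` are hypothetical, in the style of Equations): a single statement that packages both disjointness of constructors (`O` vs `S`) and injectivity of `S`.

```coq
Definition NoConfusion_nat (x y : nat) : Prop :=
  match x, y with
  | O, O => True        (* same constructor, no arguments *)
  | S n, S m => n = m   (* injectivity of S *)
  | _, _ => False       (* distinct constructors are disjoint *)
  end.

Lemma noConfusion_nat (x y : nat) : x = y -> NoConfusion_nat x y.
Proof.
  intros ->. destruct y as [|n]; simpl.
  - exact I.
  - reflexivity.
Qed.
```

Iterating this principle structurally over a type's constructors is, roughly, what makes the connection to decidable equality plausible.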

Last updated: Oct 13 2024 at 01:02 UTC