Regarding your first question, the global feeling is that there is a strong tension between expressivity and efficiency.
Either you quantify everywhere and you get very generic code, at the cost of exponential blowups, or you unify everywhere and you're basically back to monomorphic code.
The current default tries to strike a balance between both, but I am personally convinced that if we want to scale, we need to be essentially monomorphic-like everywhere, and polymorphism must be explicitly asked by the user.
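Roughly, the two extremes look like this (toy definitions, names made up, not from any actual development):

```coq
(* Monomorphic: one global universe is allocated for A, shared by every
   use of mono_id; uses only add constraints on that single universe. *)
Definition mono_id (A : Type) (a : A) : A := a.

(* Polymorphic: every use of poly_id gets a fresh universe instance,
   which is maximally generic but multiplies universes and constraints. *)
Polymorphic Definition poly_id (A : Type) (a : A) : A := a.

Definition use1 := poly_id nat 0.      (* instance at a small level *)
Definition use2 := poly_id Type nat.   (* a fresh instance at a larger level *)
Definition use3 := mono_id Type nat.   (* constrains the one shared global universe *)
```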
There are other questions involved, like the choice of representation for universes.
For instance, template polymorphism is horrible in terms of typing rules (and probably unsound in the implementation), but it's efficient because universe instances are inferable
As for your second question, normally lists are template polymorphic so you'll get refreshed instances every time you use them
To generate fixed universe constraints with lists you need to perform a partial application
Those uses of lists will indeed add monomorphic constraints in the global table
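For instance, something like this (hypothetical names, exact output depends on the Coq version):

```coq
(* Fully applied, list's template universe is instantiated per use: *)
Check list nat.            (* list nat : Set *)
Check list Type.           (* list Type : Type, at a fresh level *)

(* Partially applied (list mentioned without its argument), the template
   universe gets frozen into a global universe attached to frozen_list: *)
Definition frozen_list := list.

(* This use now records a monomorphic constraint on that universe
   in the global table: *)
Definition uses_frozen := frozen_list Type.

Print Universes.           (* shows the accumulated global constraints *)
```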
how many large Coq projects actually use universe polymorphism at all? I'm sure there are some that absolutely can't live without it, but my feeling is that mostly it's unused or a distraction. Is there an easy way to tell if a project uses non-trivial universe polymorphism or template polymorphism @Pierre-Marie Pédrot?
That would need a definition of non-trivial
Gaëtan Gilbert said:
That would need a definition of non-trivial
yes, I'm basically asking: is there a reasonable definition of / criterion for non-trivial use of universe polymorphism?
@Karl Palmskog in the CI only HoTT, UniMath and Mtac2 use universe polymorphism
apart from spinoffs of those devs I don't think universe polymorphism is used much in the wild
in this case, maybe it would be warranted in the future to have the option to select "monomorphic stdlib(2)" (for performance + simplicity)
@Matthieu Sozeau is working on a more aggressive universe minimization algorithm. That should help efficiency.
I was trying to write a category theory library using universe polymorphism and was running into various issues because Coq was creating universe levels where I didn't want them.
even if it's eventually efficient to check polymorphic universes, wouldn't "certifiably monomorphic" give you more? Like an iron-clad guarantee that checking a Qed proof in isolation, without checking universe consistency, never has a false positive?
It's actually the opposite, polymorphic code is better behaved.
The monomorphic set of constraints is a global shared state and it's problematic for a lot of reasons.
There is this funny counterexample where the value of a boolean in parallel mode depends on the order of scheduling.
Also, you never know whether it's legal to link (= Require) two libraries defining monomorphic constraints
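Something like this hypothetical setup shows the problem (file and universe names made up):

```coq
(* U.v: declares two global universes *)
Universe u v.

(* A.v: Require U.  and then *)
Constraint U.u < U.v.

(* B.v: Require U.  and then *)
Constraint U.v < U.u.

(* A and B each check fine in isolation, but a client doing
     Require A.  Require B.
   hits a universe inconsistency: both monomorphic constraint sets land
   in the same global table and cannot be merged. *)
```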
Draft on minimization:
https://github.com/coq/coq/pull/11512
Pierre-Marie Pédrot said:
Karl Palmskog in the CI only HoTT, UniMath and Mtac2 use universe polymorphism
Doesn't extlib also provide univ poly definitions?
There's also Local Set Universe Polymorphism.
in two stdpp files, and those are used in Iris and lambda-rust
Also tagging @Janno
@Janno what else (in the wild/in CI) uses uni-poly? People already mentioned Mtac2, stdpp, ext-lib, HoTT and UniMath
I don't actually know. I was actually under the impression that UniMath didn't use universe polymorphism.
https://github.com/UniMath/UniMath/blob/master/Makefile#L96
@Janno that's also my understanding
We'd like to try relying on universe polymorphism in FCSL at some point in the near future to simplify some definitions
@Alexander Gryzlov what kind of simplification do you hope to achieve? (My code is rarely made simpler by universe polymorphism. I only enable it when I cannot get the program to work in another way..)
Janno said:
Alexander Gryzlov what kind of simplification do you hope to achieve? (My code is rarely made simpler by universe polymorphism. I only enable it when I cannot get the program to work in another way..)
Basically there's some code duplication in https://github.com/imdea-software/fcsl-pcm/blob/master/pcm/unionmap.v that hopefully would go away when the underlying structure is refactored to make use of UP
@Alexander Gryzlov Do you mean to bypass the duplication explained in that comment?
I decided on the more complicated design to avoid the universe jump. Heap values are dynamic, and live in Type(1), while most of the times, ordinary union_maps store values from Type(0), and can be nested. If the two structures (heaps and ordinary union maps) share the same code, they will both lift to the common universe of Type(1), preventing me from using maps at Type(0), and storing them in the heap.
If that's the case you should be able to limit the use of Polymorphic objects to that file and handle the necessary universes explicitly, no?
Kenji Maillard said:
Alexander Gryzlov Do you mean to bypass the duplication explained in that comment?
I decided on the more complicated design to avoid the universe jump. Heap values are dynamic, and live in Type(1), while most of the times, ordinary union_maps store values from Type(0), and can be nested. If the two structures (heaps and ordinary union maps) share the same code, they will both lift to the common universe of Type(1), preventing me from using maps at Type(0), and storing them in the heap.
If that's the case you should be able to limit the use of Polymorphic objects to that file and handle the necessary universes explicitly, no?
I haven't actually started doing this, but when I do, I'll certainly give it a try, thanks!
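The universe jump in question looks roughly like this (simplified, hypothetical sketch, not the actual FCSL definitions):

```coq
(* Dynamic heap values pair a type with an inhabitant, which pushes
   them one universe level up. *)
Record dyn : Type := Dyn { dyn_ty : Type; dyn_val : dyn_ty }.

(* dyn lives at Type(1) when dyn_ty ranges over Type(0).  If heaps and
   ordinary union maps share one monomorphic universe for their values,
   the ordinary maps get lifted to Type(1) too, so a map over Type(0)
   values can no longer itself be stored in a heap as a dyn value. *)
```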
I kind of agree with Pierre-Marie: while lists could be made polymorphic and cumulative, we certainly want to keep lemmas/user developments using global universes for now, until we have a more reasonable inference story. There are not many large developments using full polymorphism, and they're mainly category theory/HoTT-related at this point, I think.
Principal universe inference is currently prohibitively expensive, in terms of generated universes and constraints, because of the mix of cumulativity and polymorphism. One direction to avoid the blow-up is to force users to be explicit about cumulativity; that's the minimization enhancement I was proposing.
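Concretely, being explicit about cumulativity would mean annotating polymorphic inductives, e.g. (sketch only, exact flags and syntax may vary by Coq version):

```coq
Set Universe Polymorphism.

(* Explicitly cumulative: instances at different levels are related by
   subtyping rather than forcing fresh equality constraints. *)
Cumulative Inductive plist@{i} (A : Type@{i}) : Type@{i} :=
| pnil : plist A
| pcons : A -> plist A -> plist A.

(* Explicitly non-cumulative: instances convert only at equal universe
   instances. *)
NonCumulative Inductive plist'@{i} (A : Type@{i}) : Type@{i} :=
| pnil' : plist' A
| pcons' : A -> plist' A -> plist' A.
```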