I'm trying to write universe-polymorphic Coq definitions using tactics.
I've observed that opaqueness makes a big difference when I specify the universe levels that a definition is allowed to use.
(* Trying to define a natural number
explicitly indicating that it should not depend on any universe level
*)
Polymorphic Definition t@{} : nat.
Proof.
refine (let x := Type (* generates some universe level *) in 5).
(* complains that some (unessential) universe level is unbound *)
Fail Defined.
(* Optimizes away the unessential universe level? *)
Qed.
In general, I run into issues where I want to limit the number of bound universe levels in a transparent definition (for simplicity/readability), but some tactics that I use in the script (apply: for instance) artificially allocate "unessential" universe levels.
Are there known workarounds to limit the universe levels allocated by elaboration/typechecking? Or ways to optimize the universe levels of a proof term at Defined time?
they're not really unessential, you might recover them by unfolding the definition
here it does not matter if you normalize, but normalizing is not realistic
But it's Qed so you can't unfold it
nope, Qed proofs can't be unfolded, which is the reason the universes are abstracted away
(they do not appear in the type and are quantified, so we can safely ignore them)
Of course the universe levels will become visible when unfolding a Transparent definition, but by "unessential" I mean that I can find some instance for these unbound universe levels that does not modify the constraints on the other universe levels
One way would be to let t have general universes by using t@{+}
and then minimizing it by hand. i.e. t'@{+}
and then t@{} := t' ...
you can also do some normalization here
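A minimal sketch of this two-step approach, applied to the opening example (here the leftover level is named u explicitly rather than left to @{+}, and Set is an arbitrary choice of instance; untested):

Polymorphic Definition t'@{u} : nat.
Proof.
  (* The same script as before, but the allocated level is now bound as u. *)
  refine (let x := Type@{u} in 5).
Defined.

(* Instantiate the leftover level by hand; t itself binds no universes. *)
Polymorphic Definition t@{} : nat := Eval unfold t' in t'@{Set}.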
Ali Caglayan said:
One way would be to let t have general universes by using t@{+}
and then minimizing it by hand. i.e. t'@{+}
and then t@{} := t' ...
I think that's precisely my problem: with tactics I cannot optimize the term by hand unless I generate the term and then use refine,
which is terrible for maintainability
yeah, if they disappear after reduction you might try your luck with a clever combination of {+} and Eval compute
if they still stick around, I think you're out of luck though.
In general I don't think I want to normalize the term, only to instantiate the universe levels inside the term
Hmm, I guess we could have a flag to generate monomorphic constraints for unbound universes...
another example:
Polymorphic Definition foo@{u} : nat. Proof. exact 5. Qed.
Polymorphic Definition bar@{} : nat. Proof. exact foo. Fail Defined. Qed.
the typical trick is
Definition bla0 := ... (* or some tactics *)
Monomorphic Universe Udontcare.
Definition bla@{} := Eval unfold bla0 in bla0@{Udontcare}.
(* never mention bla0 again *)
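Spelled out on the foo/bar example above, the trick might look like this (bar0 and the exact phrasing are an assumption of how the sketch is meant, not a tested recipe):

Polymorphic Definition foo@{u} : nat. Proof. exact 5. Qed.

(* bar0 picks up a fresh level for foo's universe instance during elaboration. *)
Polymorphic Definition bar0 := foo.

(* A global level we never care about again. *)
Monomorphic Universe Udontcare.

(* Instantiate the leftover level with Udontcare; bar itself binds no universes. *)
Polymorphic Definition bar@{} : nat := Eval unfold bar0 in bar0@{Udontcare}.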
Ok... but that does not seem to scale much, does it?
it scales better than letting the universe count explode
what @Kenji Maillard means is that there should be support for that
(I guess)
it's super annoying to have to explicitly feed some random global variables
For context, I spent 2 hours on Friday debugging Coq scripts to track down such unbound universes in definitions. In that setting I often had no clue at the start of the process which tactics and definitions introduced these unbound levels.
it is annoying, I just don't know the algorithm to implement so that Coq would do it right
For instance, I had to change calls to typeclasses eauto
to explicit calls to each instance, which kind of defeats the purpose of typeclasses...
@Gaëtan Gilbert Isn't Coq already doing some kind of optimization of the universe levels in the case of opaque definitions?
yes
not really optimization, more classification into a set that needs to be exposed and a set that doesn't
couldn't we have access to this (as a tactic, for instance) also for transparent definitions?
it doesn't make sense for transparent definitions
Set Universe Polymorphism.
Lemma foo : Type.
Proof.
exact Type.
Qed.
Set Printing Universes.
Print foo.
(* foo@{foo.1} =
Type@{foo.2}
: Type@{foo.1}
(* Public universes:
foo.1 |= Set < foo.1
Private universes:
{foo.2} |= foo.2 < foo.1 *)
*)
the universes are still there
Ok, that's interesting, so it's not that the universe levels are instantiated (because it would be consistent to do so) but only that they are hidden away...
Btw, I'm a bit surprised that the following variation fails (but it's probably unrelated)
Set Universe Polymorphism.
Lemma foo@{u} : Type@{u+1}.
Proof.
Fail exact Type.
(* The command has indeed failed with message: *)
(* Algebraic universe on the right *)
exact Type@{u}.
using algebraics is evil
Gaëtan Gilbert said:
it doesn't make sense for transparent definitions
@Gaëtan Gilbert wouldn't it make sense to run the same algorithm on a term and ask for a substitution of private levels by public ones?
I guess this would mostly be interesting if the restrictions on algebraic universes were lifted (sure, algebraic universe expressions are algorithmically challenging, but the whole universe system is currently inhumanly challenging).
Last updated: Oct 03 2023 at 20:01 UTC