Hi, I feel a bit dumb but I must confess that I have difficulty remembering, from one time to the next, the names used in the universe API: e.g. remembering that LSet/LMap are about levels, that Constraint is not about one constraint but about a set of constraints, that push_context/push_context_set are not about typing contexts but about universe contexts, etc. Am I the only one? Would it be well received if I proposed renaming those into, say, LevelSet/LevelMap/Constraints/push_universe_context/push_universe_context_set?
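For concreteness, the proposed renaming could be phrased as module aliases over the existing names in Univ (a hypothetical sketch only, the left-hand names being the proposal, the right-hand sides the current API):

```ocaml
(* Hypothetical aliases sketching the proposal; not actual Coq code. *)
module LevelSet = Univ.LSet           (* sets of universe levels *)
module LevelMap = Univ.LMap           (* maps keyed by universe levels *)
module Constraints = Univ.Constraint  (* a *set* of universe constraints *)
```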
I wanted to put a "thumbs up" but I failed miserably from my phone
OK for me but instead of LevelSet it should be Level.Set
In the same way, I'm regularly uncertain about the difference between UContext and ContextSet. The first one is about a concrete universe subgraph instantiating an object polymorphic over an abstract universe graph (where the universes are ordered), and the second is about a concrete universe subgraph enforced by the typing of a monomorphic object (where the universes need not be ordered); is that correct?
If yes, what about calling them UnivInstance and UnivContext, or maybe UnivPolymorphicInstance and UnivMonomorphicContext?
univ is already in the name since it's from univ.ml
ucontext could be renamed to ConstrainedInstance
aucontext to AbstractContext (it has abstract constraints + names for printing)
I'm not sure if contextset is always monomorphic
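As a rough picture of the two types being discussed (approximate shapes for intuition only, not the exact definitions in univ.ml):

```ocaml
(* Approximate shapes, for intuition only. *)
module UContext = struct
  (* an *ordered* instance of levels (roughly a Level.t array)
     together with constraints on them *)
  type t = Instance.t * Constraints.t
end

module ContextSet = struct
  (* an *unordered* set of levels together with constraints *)
  type t = Level.Set.t * Constraints.t
end
```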
Indeed, good point @Hugo Herbelin: when I submitted https://github.com/coq/coq/issues/6414 I also made a little list of these kinds of issues, which I found confusing. I thought I had submitted it as an issue but it seems I did not.
Oh, so my issue was more like, for the following objects:
- Ustate.t
+ UnivNames.universe_binders
- UState.universe_decl
- Entries.universes_entry
- Univ.ContextSet.t
- Univ.UContext.t
- Name.t array * Univ.UContext.t
What "variable naming" scheme should we follow in OCaml, that is to say, when to use udecl, uctx, etc.?
we never have a ucontext without the name array so it would make sense to merge the names into ucontext btw
Yup; also keep in mind that the above list is super outdated [from the time we were on the 6xxx issue numbers]
@Gaëtan Gilbert: Combining your two comments, are you saying that we should replace Instance with
module Instance : sig
  type t = (Name.t * Level.t) array
  ...
and UContext with:
module ConstrainedInstance =
struct
  type t = Instance.t constrained
  ...
where the new Instance has the names?
@Gaëtan Gilbert: for instance, in Section.data : (Instance.t * AUContext.t) entry_map, wouldn't it be simply a ConstrainedInstance.t in the above sense (that is, morally (names * levels) array * constraints)?
no
instance doesn't change
ucontext becomes names * instance * constraints (calling it ConstrainedInstance then becomes less nice)
section abstracts universes early which is why it stores both instance and aucontext, but it could be delayed instead to store names * ucontext (just ucontext in proposed change)
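So, if I read the proposal correctly, the hypothetical shapes would be (a sketch of the suggested change, not current code):

```ocaml
(* Hypothetical shapes under the proposed change. *)
type instance = Level.t array   (* unchanged: this is what goes into term nodes *)

(* names merged into ucontext, alongside the instance and constraints *)
type ucontext = Name.t array * instance * Constraints.t

(* Section data would then store a delayed ucontext instead of
   Instance.t * AUContext.t, abstracting universes late. *)
```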
OK
Did I understand correctly that AUContext.union is tricky? It is the concatenation of declared universes + the union of constraints, but it seems to rely on the invariant that these constraints are defined _in advance_ on the concatenation of declared universes.
yeah that looks pretty evil
ucontext becomes names * instance * constraints (calling it ConstrainedInstance then becomes less nice)
Don't you prefer to have a (Name.t * Level.t) array over a Name.t array * Level.t array, so that we don't need to check they have the same size?
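The trade-off can be illustrated with a small standalone sketch (the types here are hypothetical stand-ins, not the Coq ones): the paired representation makes the size invariant hold by construction, while the split one needs a runtime check:

```ocaml
type name = string option   (* stand-in for Name.t *)
type level = int            (* stand-in for Level.t *)

(* paired representation: lengths agree by construction *)
type paired = (name * level) array

(* split representation: the size invariant must be checked *)
type split = name array * level array

let of_split ((ns, ls) : split) : paired =
  if Array.length ns <> Array.length ls then
    invalid_arg "of_split: arrays of different lengths";
  Array.map2 (fun n l -> (n, l)) ns ls
```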
it may make sense to move discharge_abstract_universe_context into univ and stop exposing union
On the other hand, the abstraction can maintain the invariant that they have the same size...
Hugo Herbelin said:
ucontext becomes names * instance * constraints (calling it ConstrainedInstance then becomes less nice)
Don't you prefer to have a (Name.t * Level.t) array over a Name.t array * Level.t array, so that we don't need to check they have the same size?
instance is also the thing that goes into term nodes, this should not have names
instance is also the thing that goes into term nodes, this should not have names
OK
I see that UState.universe_context builds a UContext but without necessarily naming the universes that the users did not name themselves. But then compute_instance_binders tries to complete the list of names.
There seems to be a use of UContext and ContextSet which opposes an array representation and a set representation. So maybe the terminology should be OrderedContext vs UnorderedContext?? Or ContextArray vs ContextSet (keeping the name ContextSet)??
perhaps
Somehow UState.universe_context and Univ.ContextSet.context, as well as check_univ_decl and univ_entry + check_mono_univ_decl, seem to do pretty similar things. Do you think that this could be simplified?
I only realize now that the main difference is that monomorphic objects basically work with a graph over an unordered set of universes, while polymorphic objects need to order the universes and thus work over an ordered set of universes.
Is this really important?
Couldn't we work systematically with a UContext and stop making a difference everywhere between Monomorphic_entry and Polymorphic_entry, since the difference is not about coming with a graph of universes (which in one case is represented unordered and in the other ordered), but only about whether there is eventually a generalization over this graph or not?
I don't like having a meaningless order
It is not necessarily meaningless. For instance, if I do:
Definition T@{i j} := Type@{i} -> Type@{j}.
Set Printing Universes.
Print T.
I may be happy as a user to see the universes printed in the order I gave, even though it is monomorphic.
But anyway, even keeping unordered universes in the monomorphic case, I suspect that e.g. declare.ml would benefit from having all the treatments of universes follow a common path.
doubt
As another example, in Global, there is:
- push_section_context, which goes to Section.push_context and Environ.push_universe_context
- push_context_set, which goes to Section.push_constraints and Environ.push_context_set
Environ.push_universe_context and Environ.push_context_set are doing essentially the same thing (up to allowing merging twice the same constraint). One could imagine push_mono_universes and push_poly_universes, or a single push_universes ~poly.
When I say that I would have understood the code faster, I'm talking about months, or even years. Already last year I stumbled over the code about universes and I capitulated because it was too much spaghetti code, with everything called uctx (though with different types), hiding that the real difference was just (as I only understand it now) whether it was eventually intended to be global universes or universes generalized over the given object.
the section side is very different
Yes, the section side is very different.
This is actually my point: we maintain a difference all along the way, but the real differences arrive quite late, in the kernel.
And all along, a terminology is used which does not help (at least did not help me) to understand what is important to focus on.
Somehow, my perception is that the question of propagating universe names vs recovering them late (as done early in DeclareUctx.name_instance in the poly case vs as done late, for printing, in Level.pr in the mono case, even though both rely in the end on the same Level of UGlobal.t) is secondary, and not worth maintaining so many paths.
Anyway, that's an "external" viewpoint, from someone reading the code afterwards.
You may have a different view.
after abstraction the names are not in the Level.t so we have to get them early
The "local" names could be kept/computed early also in the mono case?
waste of computing time though
and they wouldn't be in the same place as the poly names so it doesn't unify code
My initial message was motivated by understanding how complicated it would be to have local objects (section variables/local definitions) follow the same path as global objects (parameters/definitions), differing only by a registration in the tables of local objects.
Trying to unify the code between declare_variable_core and define_constant is not easy.
I don't know if @Emilio Jesús Gallego Arias has ongoing PRs on this topic.
None relevant I think @Hugo Herbelin; indeed I struggled a bit when trying to make the declare path more uniform, and I'd say we can still clean code quite a lot in that area, but I ran out of cycles last time, so nothing going on these days.
I'd be very happy tho to try to continue making the API more resilient later on.
Stupid question in passing: is there an essential difference between a Monomorphic declaration and a Polymorphic declaration quantified over zero universes?
Hugo Herbelin said:
Stupid question in passing: is there an essential difference between a Monomorphic declaration and a Polymorphic declaration quantified over zero universes?
the polymorphic declaration may have constraints (over global universes) which are only added to the graph when it's used
eg this is possible:
Universes u v.
Polymorphic Definition u_lt_v@{|u < v} : Type@{v} := Type@{u}.
Polymorphic Definition v_lt_u@{|v < u} : Type@{u} := Type@{v}.
Fail Check u_lt_v -> v_lt_u -> True.
if there are no constraints then no fundamental difference
Interesting.
Is this "lazy" behavior on purpose, or incidental to the architecture of polymorphic declarations?
If on purpose, why not do the same for monomorphic declarations?
see https://github.com/coq/coq/issues/9359
the mechanism has only been added to coq with polymorphism so was probably never considered with mono
it could be interesting but probably mildly costly and comes with subtle incompatibilities
also this stuff works badly with stm style delayed proofs
Another question about universes: is it for historical reasons that PrivatePolymorphic takes an int * Univ.ContextSet.t rather than an AbstractContext.t (ex-AUContext.t)? Would it make sense to align it on Polymorphic so as to share more code?
no
For instance, I'm looking at Cooking, and the code for PrivatePolymorphic in cook_constr is in spirit the same as the one in discharge_abstract_universe_context, so the subtle technical work could be shared.
it is not
the private universes are not Vars
Ok. Morally, is it a polymorphic universe quantification of this form: \u1...un. \G : AcyclicConstraints(Var 1,...,Var n, globals). (pu1,...,pum, pG : AcyclicConstraints(Var 1,...,Var n, pu1,...,pum, globals) such that pG|(Var 1,...,Var n) subset G), with (n, pu1,...,pum, pG) the private context?
(Just to understand if discharge_abstract_universe_context could be used to simplify code paths in cook_constr.)
I can't read that
I mean: a private context differs from an abstract context in that it does not only contain a graph G over Var 1 ... Var n and the global mono universes, together with printing names for Var 1 ... Var n, but also another bunch of private named universes pu1 ... pum, with the graph being over Var 1 ... Var n, the global mono universes, and also pu1 ... pum. (Additionally, the printing names for Var 1 ... Var n, i.e. the Name.t array of AbstractContext, are lost, keeping only the size of the array.)
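In other words, my understanding of the two payloads would be (approximate shapes only; the abstract one follows the discussion above, the private one the int * Univ.ContextSet.t mentioned earlier):

```ocaml
(* Approximate shapes, for intuition only. *)

(* abstract (polymorphic) context: printing names for Var 1..n,
   plus constraints over those Vars and the global universes *)
type abstract_context = Name.t array * Constraints.t

(* private polymorphic payload: only the size n of the bound instance,
   plus extra fresh levels pu1..pum with their constraints pG *)
type private_poly = int * Univ.ContextSet.t
```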
the names are not lost, they're in the abstractcontext attached to the constant
Ah, ok, PrivatePolymorphic actually comes in addition to Polymorphic, I see.
So, assuming that the private names are really fresh (as ensured by the global counter), discharge_abstract_universe_context could be used in cook_constr!
Another question in passing: how easy/useful would it be to give declarations the ability to be partially monomorphic and partially polymorphic, i.e. to merge Monomorphic and Polymorphic into one, with a Name.t array of size n for the quantified part, a Level.Set.t l for the global monomorphic universes introduced by the declaration, and a graph over Var 1, ..., Var n and l?
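A merged declaration along these lines might look like the following (a hypothetical sketch of the suggestion, not existing code):

```ocaml
(* Hypothetical merged universe data for a declaration. *)
type mixed_universes = {
  bound_names   : Name.t array;   (* n quantified universes, Var 1 .. Var n *)
  global_levels : Level.Set.t;    (* the monomorphic levels l introduced *)
  constraints   : Constraints.t;  (* graph over Var 1 .. Var n and l *)
}
```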
For instance, would it help to reduce template polymorphism to the regular universe polymorphism?
This would also probably allow simplifying some code paths.
https://github.com/coq/coq/pull/9380
I temporarily have to go, so I'm currently missing the time to understand the issues with 9380. Were there technical problems? Was it too ambitious at that time? Was it stopped by lack of consensus?