There was a discussion in #13477 about polymorphic generalization over universe variables in sections, which is currently forced to range over the whole polymorphic part of the section's universe graph. Any objections to trying to implement a polymorphic generalization that, by default, ranges only over the universes which actually occur?
A tentative way to proceed is the following:
- Add an `*_entry_secunivs` field in the different `Entries.*_entry`, telling which subgraph of the universe graph of the section has to be kept; `declare.ml` fills this field by calling `Vars.universes_of_constr` (to collect the used universes of a constant) and `UState.restrict` (to extract the subgraph of the section universes, as given, as far as I understand, in `Section.all_poly_univs`).
- Add an `*_secunivs` field in the different `Declarations.*_body` and in `Cooking.result` to propagate this information in the environment.
- In `section_segment_of_entry`, use the latter field to intersect the universes of the last section with the subgraph relative to all sections for this constant (at the same time as we do it for section variables).

Do I miss something subtle at the current stage? What do you think?
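For concreteness, a rough sketch of the first two steps (the `definition_entry_secunivs` field and the `secunivs_of_body` helper are hypothetical; `Vars.universes_of_constr` and `UState.restrict` are the existing functions mentioned above, though their exact signatures may differ):
```ocaml
(* Hypothetical extra field on an entry; None would mean "keep the whole
   polymorphic part of the section graph", i.e. the current behaviour. *)
type definition_entry = {
  definition_entry_body : Constr.t;
  (* ... other existing fields elided ... *)
  definition_entry_secunivs : Univ.Level.Set.t option;
}

(* In declare.ml: collect the levels occurring in the body and restrict
   the section's universe state to them. *)
let secunivs_of_body (ustate : UState.t) (body : Constr.t) : UState.t =
  let used = Vars.universes_of_constr body in
  UState.restrict ustate used
```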
Incident questions:
- An analogue of `Proof using` for universes? Could for instance the syntax `@{i j}` in a section be reused for telling which existing universes to use?
- Couldn't the computation of the used hypotheses be moved outside the kernel (cf. the `*_entry_secctx = None` case and the code of `check_section_variables`)?

This looks like looking for trouble kernel-wise with little benefit. I'd like to see an actual use case before deploying such heavy machinery.
It's not even clear it will be efficient since you'll have to compute transitive closures if I'm not mistaken.
Couldn't the computation of the used hypotheses be moved outside the kernel?
not for Qed-terminated proofs.
you will have to ensure that the added constant is checked in an env with only the allowed univs/constraints, or do some analysis to ensure that checking in the full env is equivalent
There seems to be also a question of syntax. What would be an analogue of `Proof using` for universes? Could for instance the syntax `@{i j}` in a section be reused for telling which existing universes to use?
we need not stick with that syntax everywhere, we could do `Proof using var1 var2 (universes a b c | a < c)` for instance (not sure if the constraints go into `Proof using`)
not for Qed-terminated proofs.
Couldn't the "future" prepared in (misunderstanding)declare.ml
also return these used hypotheses?
no, it needs access to the body
Qed-terminated proofs should behave as axioms mostly
you have to assume that you may never know the body
we don't delay univ polymorphic proofs so no problem there
hm, we could if they were fully specified
I'd advise against making the two code paths stray further apart
Computation of variables: I agree, `global_vars_set` needs to be called and checked against `declared` in the kernel, but the error message could be treated outside the kernel (with the kernel raising an anomaly instead if the computation is wrong). Also, there could be only one path in the kernel, i.e., the computation done in the `None` case could be treated outside of the kernel. (More generally, the redundancies in the kernel do not help to make it simple.)
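For reference, a simplified sketch of the check being discussed (hypothetical signature, loosely modeled on `Term_typing.check_section_variables`; `Environ.global_vars_set` is the existing function):
```ocaml
(* Compute the hypotheses actually used by the body and compare them with
   the declared ones; the debate is whether a mismatch should be a user
   error raised here or an anomaly, with the message produced upstream. *)
let check_section_variables env (declared : Names.Id.Set.t) (body : Constr.t) =
  let inferred = Environ.global_vars_set env body in
  if not (Names.Id.Set.subset inferred declared) then
    CErrors.user_err
      Pp.(str "The proof uses section variables not declared with Proof using.")
```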
I think this changed recently though, at some point I remember raising anomalies and I believe @Enrico Tassi reverted that.
there is little point in raising anomalies instead of errors in the kernel imo
No, in the end I did not finish https://github.com/coq/coq/pull/14297 because on one hand I could easily work around it, and on the other hand it was not a small change, IIRC
The code is still there if someone wants to pick it up, but I closed the PR since it was stale.
My feeling was that the error message in `Term_typing.check_section_variables` was more about the surface language (`Proof using`), and thus more relevant for `declare.ml`, `proof_using`, or pretyping, than about checking.
Incidentally, the two terminologies "delayed" and "opaque" seem to be competing. That's also a point making the code difficult to follow. Is there a preference for one of them?
Use opaque. At some point I'd like to remove delayed from the kernel, and have only the document manager know the difference between an axiom and a delayed proof.
they are not synonyms
About always providing a type, why not do it now? That would simplify things kernel-wise, no?
At worst, keep a boolean somewhere for printing purposes, so that we remember whether the user gave a type explicitly or not.
This looks like looking for trouble kernel-wise with little benefit. I'd like to see an actual use case before deploying such heavy machinery.
It is not clear to me that it is heavier machinery than what already exists for hypotheses.
checking for the presence of a hypothesis in the term is easy, checking a transitively implied dependency is harder
it might essentially require retypechecking in the worst case
You mean that taking the subgraph of a graph only by looking at some nodes of interest requires transitivity?
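Yes, roughly: to illustrate the concern with a sketch (the `transitive_closure` helper is hypothetical; module names follow recent Coq sources):
```ocaml
(* With constraints a <= b and b <= c, restricting to the levels {a; c}
   must keep the implied a <= c, which is not literally among the
   constraints: hence the graph has to be closed before filtering. *)
let restrict_constraints (kept : Univ.Level.Set.t) (csts : Univ.Constraints.t) =
  let closed = transitive_closure csts in  (* hypothetical helper *)
  Univ.Constraints.filter
    (fun (l, _k, r) -> Univ.Level.Set.mem l kept && Univ.Level.Set.mem r kept)
    closed
```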
Apropos, what is the purpose of replicating the structure of the `revstruct` in `Section.entries` (compared to just the number of entries to drop)? Is it for "double-checking", or to be more explicit about what is done?
I'm asking the question wrt the approach where section variables are normal entries put in the revstruct.
By doing so, information which has to be kept in `Section.t` could be found directly in the revstruct.
Maybe it is important to keep sections apart from safe_typing???
But at the same time, it seems that it would make things simpler to let safe_typing explicitly know that some fields of the revstruct are destined to be eventually discharged.
Beware with the revstruct, we're doing weird stuff with universes.
Sections need to track the exact set of universes added, which AFAIK is not easily obtainable from the revstruct.
I don't know about this plan of calling section variables constants, I thought we would just have a new Var node
it might essentially require retypechecking in the worst case
By having section variables be constants, we could also type-check a constant at once in its restricted context. I mean, a segment would typically have this form: `SectionMarker, Universe i, Constraint c, [ctx ⊢ Variable x], [ctx ⊢ Constant d], ...` where each `ctx` is a subset of the section context (similar to `const_hyps`, and even a list of section contexts, one per section level). At the time of type-checking `d`, we first check that its `ctx` is a subcontext (including universes) of the section context, then type-check it in this context rather than in the full section context. Cooking is then as before (successively abstracting over each level of `ctx`).
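A hypothetical datatype for such segments, to fix ideas (all names invented for the sketch):
```ocaml
(* Each entry records the sub-context it was checked in, so that a
   constant d can be type-checked directly in [ctx ⊢ Constant d]. *)
type segment_entry =
  | SectionMarker
  | Universe of Univ.Level.t
  | Constraint of Univ.univ_constraint
  | Variable of Constr.named_context * Names.Id.t
  | Constant of Constr.named_context * Names.Constant.t

type segment = segment_entry list
```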
Do I miss details at this point?
this looks sound to me, but it's introducing more implicit behaviours and you can't easily synthesize the minimal set of constraints if I give you just the body
The minimal set of constraints would have to be computed outside the kernel.
Isn't it what `UState.restrict` does?
The kernel would only check inclusion.
probably not, because it's very hard to get a minimal set if I just give you the body and the original set, you need to reinfer everything
"minimal" is not always meaningful for universe constraints (consider `a <= max(b,c)`)
however it is meaningful for universe levels (exactly the ones appearing)
right
then for constraints we just restrict, that's not a problem
IIUC, you then mean that it would work for levels but not for algebraics? What does `restrict` do with algebraics?
a given term can be well-typed iff `a <= max(b,c)`; then there is no unique minimal set of constraints between levels to check it (either `a <= b` or `a <= c` would suffice, and neither is smaller than the other)
algebraics are handled at the time of adding constraints, they are not visible to restrict
this thread is kinda confusing, talking about 2 or 3 different things with no transitions
Then, I'm going back to a previous point (representation of section variables):
I thought we would just have a new Var node
Then a split of named_context into goal_context and section_context, with safe_push_named going to the section_context, the named_context moved outside the kernel, the VarRef referring to the new node, ... ?
something like that, the hard part being that we must track the association var -> varref
(outside of the kernel obviously)
yeah something like that
with
```ocaml
type variable = { sec_var : bool; id : Id.t }
type kind = ... | Var of variable | ...
type globref = ... | VarRef of variable
```
maybe
@Enrico Tassi had a proposal in CEP 51 which I tried to implement: that in a goal context, there is an extra table telling which goal variables are replications of a section variable. Basically, this is no different from saying that some variables of the context are replications of the form `x := Var {sec_var=true; id=x}`, where we hide that `x` has a body.
I was talking about a `Const` node because `Var {sec_var=true; id}` would be cheaper to implement as a `Const` node than by modifying the syntax.
you're the implementor
you have the power
I'm ready to implement it here, but it would be good if what I implement is consistent with what others have in mind.
@Gaëtan Gilbert : would it make sense to slightly modify the level of abstraction on which to focus when dealing with polymorphic constraints, so that the type `UContext.t` absorbs in its definition the `Name.t array` which most of the time comes with it, i.e. in the style:
```ocaml
module UContext =
struct
  type t = Name.t array * Instance.t constrained
  let make x = x
  let names (names, _) = names
  let instance (_, (univs, _cst)) = univs
  let constraints (_, (_univs, cst)) = cst
  ...
```
so that the analogy with `AUContext` is clearer, i.e.:
```
global_level_1 as name_1, ..., global_level_n as name_n |- graph(global_level_1, ..., global_level_n)
```
vs
```
name_1, ..., name_n |- graph(rel 1, ..., rel n)
```
where, in both cases, the "names" are only for printing (in the style of the names in `constr`)
and in particular so that it is clearer that `Polymorphic_entry` and `Polymorphic` are essentially the same, the former representing binders with global levels and the latter with de Bruijn indices, and so that `Monomorphic_entry` and `Polymorphic_entry` uniformly take a single argument.
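For concreteness, the entry type would then plausibly look like this (a sketch, not the actual current definition):
```ocaml
(* Both constructors take a single argument; the binder names are
   absorbed into UContext.t as in the module sketched above. *)
type universes_entry =
  | Monomorphic_entry of Univ.ContextSet.t
  | Polymorphic_entry of UContext.t
```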
Maybe more anecdotally, to accentuate the parallel with `constr` and make comparisons easier, maybe `Level.Var` could be renamed `Level.Rel`?
Actually, more radically, about `UContext` vs `AUContext`: what does it bring to have a de Bruijn abstract representation for universe graphs? Since all internal names are always fresh and (compared to lambda-calculus) there is no higher-order reduction and no need for alpha-conversion, is there a technical reason for using indices rather than global names, or is it mostly for the sake of "elegance" and canonicity of the representation?
@Gaëtan Gilbert : would it make sense to slightly modify the level of abstraction on which to focus when dealing with polymorphic constraints and that the type UContext.t absorbs in its definition the Name.t array which most of the time comes with it
yes
Maybe more anecdotally, to accentuate the parallel with `constr` and make comparisons easier, maybe `Level.Var` could be renamed `Level.Rel`?
beware, `constr` rel is 1-based but univ var is 0-based
Actually, more radically, about UContext vs AUContext, what does it bring to have a de Bruijn abstract representation for universe graphs? Since all internal names are always fresh
name freshness for this purpose is hard to guarantee at kernel level AFAIK
also substituting with lookup in an array is probably faster than with lookup in a Level.Map.t
also it makes it harder to confuse them with global universes, eg when debugging if you see a Var you know some substitution was skipped
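A small sketch of the array-based substitution being described (assuming `Univ.Level.var_index` with its usual `Level.t -> int option` signature; note the 0-based indexing mentioned above):
```ocaml
(* A bound universe variable Var i is replaced by an O(1) array access;
   a Level.Map.t keyed on named levels would need a logarithmic lookup,
   and a leftover Var in the output makes a skipped substitution visible. *)
let subst_instance_level (inst : Univ.Level.t array) (l : Univ.Level.t) =
  match Univ.Level.var_index l with
  | Some i -> inst.(i)  (* de Bruijn-style bound variable *)
  | None -> l           (* global level: left unchanged *)
```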
constr rel is 1-based but univ var is 0-based
and rels are indices whereas universe variables are levels
Gaëtan Gilbert said:
yes
Then, do you prefer `Name.t array * Instance.t constrained`, or to include the names in some new kind of instance with names?
Gaëtan Gilbert said:
also substituting with lookup in an array is probably faster than with lookup in a Level.Map.t
also it makes it harder to confuse them with global universes, eg when debugging if you see a Var you know some substitution was skipped
OK to me.
About freshness, isn't an absolute file name and a fresh `int`, as it is now, enough?
Pierre-Marie Pédrot said:
constr rel is 1-based but univ var is 0-based
and rels are indices whereas universe variables are levels
Ah, de Bruijn levels, indeed. Would it simplify `discharge_abstract_universe_context` to use indices?
Then, do you prefer `Name.t array * Instance.t constrained`, or to include the names in some new kind of instance with names?
the first one
OK