Hi, while trying to address ceps #51, I spent some time on cooking.ml. Is anyone already working on it or having plans for it? In any case, I would like to share a rather big factorization I could do of the code, concentrating all the technical work (adding term and universe parameters to generalized globals, substituting vars by rels in both terms and universes, abstracting both terms and universes over the section context) in one function working on a refined formulation of what were called abstr_info and work_list. I'd be curious about opinions on it before proposing it as a PR. It is mostly one commit, with some documentation, but it is for instance difficult to judge whether the documentation is clear enough.
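To make this a bit more concrete, here is a sketch of the kind of record I mean; the field names below are approximate and not the actual code:

```ocaml
(* Sketch only, with approximate field names: the refined abstr_info
   gathers in one place everything needed to generalize a global over
   the section context, both for terms and for universes. *)
type abstr_info = {
  abstr_ctx : Constr.named_context;
  (* section variables over which the body and type get abstracted *)
  abstr_uctx : Univ.AbstractContext.t;
  (* universe context the global is additionally quantified on *)
  abstr_usubst : Univ.Instance.t;
  (* substitution from the section's universe variables to bound ones *)
}
```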
If time allows, I'd be tempted to continue with some small cleaning of safe_typing and term_typing. Has anyone planned related projects, or would anyone like to work on this kind of cleaning/documentation project, so that others can eventually enter and understand the structure of the code more easily?
This kind of cleaning/documentation would presumably also help the work on MetaCoq's "safechecker".
Regarding safe_typing, I have some stuff pending for a long time about monomorphic universes and kernel side-effects, which are blocked for technical reasons on issues related to the higher layers (namely the STM). I don't know if this would interact much with your cleanup though.
Do you have the branch online?
By the way, was there a problem with #9380? I feel that we may gain a lot by factorizing the code around universes: it looks like we are always doing similar treatments which could be handled more uniformly.
About term_typing, I don't know why there is a detour via Cooking.result. It looks like a useless and easy cut to eliminate.
I don't know why the main typing functions are called translate rather than infer (or check). This looks to me like it obfuscates their meaning.
@Hugo Herbelin IIRC the problem with that PR was that it was duplicating monomorphic constraints in several places. We should strive to remove them from all kernel datatypes except the environment, because that's the only place where they make sense (barring private monomorphic constants)
duplicating pieces of global state is not only error-prone, it's also inefficient
currently there is still a bit of nonsense in the kernel that we use to print "monomorphic constraints introduced by a constant"
but this is not sound, constraints really are global
thankfully it's only used by printing
(I think you mentioned that you wanted to remove this, which I support 100%)
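A rough picture of the duplication being described, with type and module names from memory (so take them as approximate):

```ocaml
(* Approximate picture: today a monomorphic constant stores its own
   levels and constraints... *)
type universes =
  | Monomorphic of Univ.ContextSet.t        (* kept in the constant_body *)
  | Polymorphic of Univ.AbstractContext.t

(* ...even though the same levels and constraints are also merged once and
   for all into the environment's global graph (Environ.universes env)
   when the constant is added, hence the duplication. *)
```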
OK, I see, I thought the problem was about unifying Polymorphic and Monomorphic. But the attachment of global universes to constants is indeed a bit strange.
Even in printing. You see (* {foo.u foo.v |= *) for constants with monomorphic universes, while the only link they have with the constant is that they were introduced by this constant, i.e. it's only about the name.
yes, but even this is handled outside of the kernel
(i.e. this is essentially debugging stuff)
So, if I understand correctly, you don't want to merge Monomorphic and Polymorphic. What you want is to move the universe levels out of the "monomorphic" constant and to keep only the constraints in Monomorphic. In which case Monomorphic mechanically becomes a subcase of Polymorphic.
No no, even the constraints should get out.
the two things don't behave the same
a monomorphic definition with only constraints is not the same as a polymorphic one with only constraints
OK, and would the constraints between non-polymorphic universes in Polymorphic also go out of Polymorphic?
I don't care, but I think that there is no reason either
you might want to actually use the "weird" behaviour mentioned in an issue before
it's useful to quantify over ghost constraints
You mean because of the laziness of the inclusion of constraints between non-variable universes in Polymorphic?
it's not laziness, it's quantification
a monomorphic constant is just a polymorphic one with a set of constraints generated by default
(more generally, I would really like to separate universe constraint extension from constant definition per se)
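To spell that reading out (made-up type and function names, only a sketch):

```ocaml
(* Made-up names: a monomorphic declaration can be seen as a polymorphic
   one binding no universe variable, whose constraints are the "default"
   ones to be re-added at every use of the constant. *)
type degenerate_poly = {
  bound_univs : Univ.Level.t array;   (* [||] in the monomorphic case *)
  default_csts : Univ.Constraints.t;  (* generated by default at each use *)
}

let of_monomorphic (ctx : Univ.ContextSet.t) : degenerate_poly =
  { bound_univs = [||];
    default_csts = Univ.ContextSet.constraints ctx }
```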
Separate universe constraint extension from constant definition: sounds good a priori.
So, what do you intend to do with constraints between non-variables in poly constants?
Leave them around? You have to consider them anyways by transitivity.
As soon as you allow constraints between globals and variables, you must allow constraints between globals
and the former appear as soon as you mix mono with poly
Re-asking just to be sure: do you intend to have the generators of constraints between non-variables once and for all in the global env, or to re-add them at each use in another constant (and have them only increase the local "global graph" of a constant)?
the second case.
i.e. don't change anything from what we have now
the first situation is not closed under transitivity
if you allow two constants with var < global then by uniformity you must allow constraints between globals
But then (wrt "the second case"), what to do with what is called monomorphic now? Will it also become a "global graph" local to a constant?
no
why not?
because you don't want to make everybody polymorphic
sometimes you might have constraints local to a definition
but this is hard to compute
but in general you must have the global graph somewhere
But then the argument may apply in the other direction. You might want to have polymorphic definitions with global constraints not local to a definition?
yes, but this you can do
it's already possible today
But then, you still don't need to keep a notion of monomorphic constant.
I agree that in theory you could make everything polymorphic
but there is a reason you don't want to do that: efficiency
This I want to understand.
universe polymorphism is extremely costly, and for various reasons
if you have to recompute constraints every time you use a constant, even if they are the same, this has a cost
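Schematically, with approximate function names, just to illustrate the cost:

```ocaml
(* If the constraints stay attached to the constant, every occurrence has
   to merge them back into the graph, instead of paying that merge once
   at declaration time. *)
let pay_per_use env delayed_csts =
  UGraph.merge_constraints delayed_csts (Environ.universes env)
```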
You mean that even the degenerate form of universe polymorphism is costly?
I am not even talking about substitutions
not sure what you're talking about
I mean: is it re-adding the delayed constraints at every use which is costly?
anyways, I think we should have some live discussion about the topic with other devs working on the same kind of stuff
yes
but not only that, substitution is also costly
plus you have to store all these instances everywhere
"the best proof is the proof you don't write"
there is a reason why proof by reflection is efficient
So, a polymorphic constant which has no delayed constraints (i.e. which is the same as a monomorphic one whose associated graph actually lives in its own declaration) is not costly?
this is the same as a monomorphic constant, barring potential weird code paths
but these things don't exist in practice
The weird code paths are my point. They are uselessly consuming our energy and make things more complex for users too.
but these things are not in the kernel
the problem lies higher up
the kernel is mostly ok, if you forget about the horrible side-effect system
Yes, the management of "side effects" in the kernel is not easy to follow.
it's also wrong
About the kernel, it would still be better if it had more uniformity. It looks a bit like a draft paper, missing the last week of polishing before submission to reviewers.
Afaicu, there are still universe problems in the interaction between declare.ml and safe_typing.ml. Cleaning the code would help to find them.
Agreed.
but it's also due to intricate design issues
I have a PR that removes futures from the kernel, but it's a mess in the upper layers
It seems so.
It's not even clear what the invariants are
even in the current situation
making the code purely functional exposes these issues in plain sight
About making the code purely functional, I ran into some issues too. What are the plans?
no idea
but we should have some meeting after the summer anyways
OK
Back to the kernel, I'm also considering merging push_named_* and add_constant. No objection?
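Something in this spirit (a purely hypothetical interface sketch, none of these names exist in the code base):

```ocaml
(* Purely hypothetical sketch of such a merge, only to make the question
   concrete; the constructors and function below do not exist today. *)
type global_addition =
  | NamedAssum of Names.Id.t * Constr.types
  | NamedDef of Names.Id.t * Constr.constr * Constr.types
  | Constant of Names.Constant.t * Entries.constant_entry

val add_global : global_addition -> safe_environment -> safe_environment
```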
dunno, it depends on the final design
I'd like some presentation on the topic as well
Incidentally, about the double status of section variables as local and global objects, I made some progress, but this is where I got blocked on the dependency of proof states on the global environment.
My current conclusion is that we don't need to define x:=S.x in the goal context but rather S.x:=x outside the goal context!! (where S.x is the global copy of the section variable). But for that, we need to be able to modify the env_globals local to an evar... which is incompatible with the way it works now.
why are you copying section variables?
I used the word "copying" informally. The point is that if we want section variables in a goal to behave like any other goal variable, we need to be able to hide/clear them on demand. But even if we clear them, they remain in the global context, with global constants of the section potentially depending on them. So we may need two "copies" (one in the global env for reference from global constants, with say a qualified name, and one in the goal context, with an atomic name). Which one should be called a copy of the other is not clear. (I'm not sure I'm answering your question though.)
we want section variables in a goal to behave like any other goal variable
do we?
A typical situation where "we" (or at least some people, I don't know how to estimate how many) want to clear a section variable is destruct n where n has been "consumed" and there is no reason for its name to keep cluttering the name space.
Or when a tactic loops over and works on all assumptions, so that if you have excluded a section variable from the "using" clause, the Qed fails.
(I'm sure that point has been brought up before, together with the idea to hide those variables in the upper layers only?)
isn't it just that we want to stop treating section variables as assumptions?