To answer some of your points, @Emilio Jesús Gallego Arias :
Emilio Jesús Gallego Arias said:
for example you know that having parsing to depend on type checking is a long time wish
No. You want the selection of the right AST (among many possible ones, eg having different precedences for /\ and = in a lattice or in Prop) to be done by typing. Not have the parser use typing while it parses.
So you don't need to have the parsing environment up-to-date while parsing.
Maxime Dénès said:
- If we really wanted parsing to depend on typing, I guess we'd use something like an Earley parser, where ambiguity resolution is shifted to later phases (typing), without really breaking the parsing / execution stratification. What changes is the data returned by the parser, which become richer.
I believe this idea is from Georges Gonthier, btw.
About Earley parsing yes. The disambiguation based on typing is "Italian tech" ;-)
(which never worked well because camlp5...)
@Pierre-Marie Pédrot Do you know why syntax extensions are marked as substitutive? https://github.com/coq/coq/blob/master/vernac/metasyntax.ml#L846
They are not really, as witnessed by the identity subst function
doesn't it change the kername of the resulting object as well?
this we care about
(the fact we discard the substitution just means that we're broken for functors, but it still works for import)
Ok, I think it is quiz time (it's been a long time!)
Module Type T.
Notation F := False.
End T.
Module X : T.
Notation F := True.
End X.
Import X.
Print F.
Maxime Dénès said:
They are not really, as witnessed by the identity subst function
identity is a fine substitution
there doesn't seem to be much benefit to adding an IdentitySubst next to Subst / Keep
Sure, my question is more about semantics. But maybe that's fine too.
Wait, I got completely confused. My question was why grammar rule extensions are declared as substitutive.
(not about "syntactic definitions")
because it works and the other choices don't
what would they be otherwise?
Keep is broken, Dispose not expressive enough
Anticipate, better not talk about it.
It's very rare for an object to be Keep.
But Keep seems to have been intended precisely for this kind of object, no? https://github.com/coq/coq/blob/master/dev/doc/changes.md?plain=1#L1836
Keep objects are unconditionally cached at the end of a module / functor
so if they refer to stuff that is bound, boom
How would a grammar rule refer to a functor object?
Pierre-Marie Pédrot said:
It's very rare for an object to be Keep.
unless they get it from default_object by mistake ;)
Pierre-Marie Pédrot said:
Keep objects are unconditionally cached at the end of a module / functor
no, keep is dropped at the end of modtype/functor
when you parse stuff and return a global that is bound in the parameters?
but isn't this the interpretation of the notation?
?
I'm talking about inSyntaxExtension, not inNotation.
but this can still refer to a name ?
how?
hm, it's true that we don't store anything named there AFAICT
Keep makes sense when you can have at most one object per instance
You mean by functor instance?
yeah
Isn't it the case for these grammar rules?
keep does not survive functors
keep is the same as dispose in functors and module types
you get them when you instantiate, don't you?
no
(disclaimer: every time I tried to use Keep it was not doing the thing I was looking for)
and when is Keep not the same as Dispose?
@Gaëtan Gilbert what's the typical use case for Keep then?
at discharge?
well clearly at module closure (but without subst then)
discharge does not run classify_function
keep is not the same as dispose at the end of a module (non functor, non modtype)
Pierre-Marie Pédrot said:
Gaëtan Gilbert what's the typical use case for Keep then?
The doc says grammar rules. But they are Substitutive
:D
Pierre-Marie Pédrot said:
Gaëtan Gilbert what's the typical use case for Keep then?
evil
backward compat
waiting for someone to continue https://github.com/coq/coq/pull/9408
maybe I should add it to my libobject pr while I'm breaking stuff
I've got a comment that advocates for Keep to escape functors
Yeah, was reading that ^^
this is the only sane use case I've ever come upon, and Substitute doesn't work
to break everyone who uses Keep? just make a new classification
I guess we know the answer now, you can just put Dispose
what?
this doesn't work at all
Keep also didn't, IIUC
Dispose means local
surviving functors would be more like ultraglobal
who uses Keep in the current code base then ?
beware that it's ultraglobal but substitution-bound
Generalizable Variables
whoever uses default_object without specifying classify_function
yeah, but my question is who's that ?
Pierre-Marie Pédrot said:
beware that it's ultraglobal but substitution-bound
what is?
Gaëtan Gilbert said:
Dispose means local
Sure, but when I first read PMP's comment back when I was working on the PR, I thought Keep was already doing what he wanted. In fact, it doesn't.
also, are there people using Generalizable variables in modtypes / functors?
who knows
@Gaëtan Gilbert every instance of the functor should declare the object for the substitution
but when poking inside a functor I also want my libobjects to be locally added
Pierre-Marie Pédrot said:
also, are there people using Generalizable variables in modtypes / functors?
The answer to this kind of question is always yes.
and the answer to who is probably color
I know
it's the independence of premises of crappy coq features: if somebody uses it, then color uses it.
Pierre-Marie Pédrot said:
and the answer to who is probably color
nope https://github.com/fblanqui/color/search?q=Generalizable
incredible
by contrapositive, the answer is then no
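(For reference, a minimal sketch of the Keep behaviour being discussed, assuming I'm reading it right: a Generalizable Variables setting declared inside a plain, non-functor module survives the End of the module without any Import, which is what "kept at module closure" would mean. Not re-verified here.)
Module KeepDemo.
  Generalizable Variables A.
End KeepDemo.
(* no Import KeepDemo; if the object is Keep, the setting is replayed at module closure,
   so A is still generalizable here and the backtick binder works: *)
Definition idA `(x : A) := x.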
btw, the reason I'm asking all this is that this "fake substitutivity" makes it necessary to use the infamous ModSubstObjs for both parsing and execution, with the set_missing_handler
poetry
I'm happy to say that I don't know what that is
https://github.com/coq/coq/blob/master/vernac/declaremods.ml#L76
forbidden knowledge
I don't know how it works or what fundamental problem it solves, though.
@Pierre-Marie Pédrot do you know more?
I think I used to know, but I am also happy for my sanity to have forgotten
(I was precisely hoping that grammar rules were not stored there, but since they are substitutive...)
The comment does say: Such missing entries may come from inner parts of module types, which aren't registered by the standard libobject machinery.
but it kind of begs the question: why? :)
the commit message of 466c4cb gives a tiny bit more info
this is to save memory use
we probably can't store a delayed structure there because marshalling
Maxime Dénès said:
Module Type T. Notation F := False. End T. Module X : T. Notation F := True. End X. Import X. Print F.
Isn't the answer irrelevant for parsing? (the interpretation changes, but not the parsing rule)
Given that there are no parsing rules added in this example...
Yes, that's why I wrote I got confused. The question is valid, I guess, but for syndefs.
Well, right... syndefs add no parsing rule
But I take home that the parsing rules are the ones from the signature, not the body
in general, any data should come from the signature, anything else is buggy
obviously we can't enforce that and this is a serious problem for proper programming languages that are not gallina
i.e. LtacX
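(A hedged, minimal illustration of the "data should come from the signature" principle with plain constants, leaving notations aside: with an opaque ascription, only what the signature declares is visible outside.)
Module Type S. Parameter n : nat. End S.
Module Y : S. Definition n := 0. Definition hidden := 1. End Y.
Print Y.n.            (* comes from the signature S; the body is hidden *)
Fail Print Y.hidden.  (* not declared in S, hence not accessible *)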
In fact, I can tell you what contributed to my confusion. Look at the name of the classification function for syntactic extensions (i.e. grammar rules) : https://github.com/coq/coq/blob/master/vernac/metasyntax.ml#L845
there sure are psychopaths in the Coq dev team
eh eh
let pipe = "not a pipe"
this wins the yearly "let's mess with the brain of the readers" award
It worked on mine, at least. But I have been connected a bit too much with the real world, lately. Became less resistant.
let (+) = (-) in ...
I'll sneak that one in the middle of a file fiddling with De Bruijn indexes
Maybe a good old let rel0 = Rel 1, then?
a smarter constructor let mkRel x = Rel (x mod 7)
not let mkRel x = Rel (String.hash (string_of_int x))?
much safer, way less collisions!
we should definitely do | Rel of float
is lift 1.0 (lift 2.0 x) the same as lift 3.0 x?
no, but lift 1.5 (lift 1.5 x) surely is
This thread is really killing my productivity ;-)
let equal_up_to_rel x y = equal (lift infty x) (lift infty y)
for free!
but it's enhancing your morale, so in the end you're more productive globally
You're such a master of management tricks, PMP
Naive question: the code of Declaremods, e.g. the computation of functor substitutions, is not in the TCB?
also, are there people using Generalizable variables in modtypes / functors?
If just having the flag set counts, I've done it, since I load stdpp and use modules. Is that bad?
Or need one change the settings inside a functor? Not planning to.
a good chunk of the ci libobjects seem to use keep
not sure how much is intentional
It would be surprising that it is all intentional :)
FTR, separating parsing and execution gets tricky with module name substitutions, because of the infamous delta resolvers
@Pierre-Marie Pédrot I remember you had started to work on replacing these by an equiv relation in the logical environment IIRC
@Maxime Dénès you mentioned in the other thread that parsing does not depend on interpretation, but if Bind Scope works correctly I don't think this can possibly be the case. Consider
Definition foo n := eq_refl : n = 0.
Does the first argument to foo get stuffed in nat_scope? If it does, then I think this requires interpretation to decide (and if you think not, then just replace 0 with ltac:(exact 0) or something). Furthermore, you can bind some syntax to one custom entry in nat_scope and a different one in type_scope, I believe, and hence the parsing of applications of foo will depend on what type gets inferred for foo. (I haven't checked this, though, and I'm not at a computer right now; can someone test it for me?)
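(A hedged way to observe what Jason describes, after the fact rather than during parsing: About reports the argument scopes that were computed when the definition was interpreted; the exact wording varies across Coq versions.)
Definition foo n := eq_refl : n = 0.
About foo.  (* expected to mention an argument scope nat_scope for n *)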
Naive question: the code of Declaremods, e.g. the computation of functor substitutions, is not in the TCB?
It would make sense to me that this is not in the TCB, because AIUI there is no way to prove False by doing something evil in functor substitutions. Definitions and proofs in functors sort-of don't exist as real objects (conceptually / mathematically) until you instantiate the functor, right?
Jason Gross said:
Does the first argument to foo get stuffed in nat_scope? If it does, then I think this requires interpretation to decide
Sure, but Coq is a read-evaluate-process loop. It alternates parsing and interpretation. So, n will indeed be in nat_scope, but only for subsequent vernacular commands. For the currently parsed command, interpretation does not matter.
Oh, is this why implicit status on context variable types only works after the end of the Context command? And I take it scopes are the same?
Like Context (foo : forall n, n = 1%nat) (bar : foo (1 + 1) = foo 2) will not work if you close nat_scope, but Context (foo : forall n, n = 1%nat). Context (bar : foo (1 + 1) = foo 2) will? (I haven't tested this)
FTR, separating parsing and execution gets tricky with module name substitutions, because of the infamous delta resolvers
What does this mean, and what are delta resolvers?
Jason Gross said:
Oh, is this why implicit status on context variable types only works after the end of the Context command? And I take it scopes are the same?
Most certainly.
The parser does not know about types. Scopes pick notation interpretations, not grammar rules, and that happens after parsing.
Indeed you can't set the same infix at different levels, even if the scopes are different. You can do that in different grammars, but those are not selected by the type.
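(A quick illustration of that point, not re-tested here: the same notation string at the same level in two scopes adds a single grammar rule, and the scope key only selects the interpretation.)
Notation "x +++ y" := (Nat.add x y) (at level 50) : nat_scope.
Notation "x +++ y" := (andb x y) (at level 50) : bool_scope.
Check (1 +++ 2)%nat.          (* interpreted as Nat.add 1 2 *)
Check (true +++ false)%bool.  (* interpreted as andb true false *)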
Jason Gross said:
you can bind some syntax to one custom entry in nat_scope and a different one in type_scope, I believe
I hope not, because indeed it would probably make the separation impossible.
Or maybe they'd end up being actually two different entries at the grammar level?
Are you saying that the following (untested) doesn't work?
Notation "[[ x ]]" := x (x in custom foo) : nat_scope.
Notation "[[ x ]]" := x (x in custom bar) : type_scope.
(or whatever the syntax is)
I hope it does not, and if it does, we should ditch it.
Seriously, the parser does not know scopes. End of the story.
Declare Custom Entry foo.
Declare Custom Entry bar.
Notation "'a'" := 1 (in custom foo).
Notation "'b'" := 2 (in custom bar).
Notation "[[ x ]]" := x (x custom foo) : nat_scope.
Notation "[[ x ]]" := x (x custom bar) : type_scope.
Check [[ a ]]%nat.
Check [[ a ]]%type. (* prints 1 *)
Fail Check [[ b ]]%nat. (* error: [custom:foo] expected *)
Fail Check [[ b ]]%type. (* error: [custom:foo] expected *)
No error is given at notation declaration time, but it does not (and it can't possibly) work. CC @Hugo Herbelin
So the asteroid missed Earth by just one light year \o/
Anyway, the refman is correct about scopes, they affect the interpretation of notations, not their presence in the grammar.
About the foo example, the phase after parsing, interning (or globalization), will resolve "foo" as a GlobRef.Const, and to that name one can associate interpretation scopes for the arguments.
So, @Maxime Dénès, the example of @Jason Gross shows that running globalization (alone, without interpretation/typing) once we have the AST is not possible as is (since one needs to type check foo in order to infer scopes for the arguments, which in turn are needed to globalize terms using foo, since globalization also performs notation interpretation which depends on scopes).
I mean this foo: Definition foo n := eq_refl : n = 0.
Sure, but functors already made that "trivially impossible" :)
Dunno, not for the same reason I hope.
Computing names of constants requires typing.
Are we talking about C++ and templates, or Coq here?
Unfortunately, today module name substitutions and constant name substitutions are mixed, so even computing module names (which does have an impact on parsing because they can contain grammar rules) requires typing.
The culprit is the following line: https://github.com/coq/coq/blob/master/vernac/declaremods.ml#L486
Hum, I see, but it seems less fundamental (I mean, it is because of the implementation, not because of the spec)
Oh, delta resolvers... not worrying anymore, we should grep for omicron resolvers
Enrico Tassi said:
Hum, I see, but it seems less fundamental (I mean, it is because of the implementation, not because of the spec)
Yes, that's why I wrote "today" :)
Didn't someone try to purge delta resolvers?
That's what I wrote above, @Pierre-Marie Pédrot was working on replacing them with something in the logical env IIRC
I don't recall hearing news about this line of work, but I've missed a lot of things in the last months.
I hope we can also phase the computation of these substitutions:
So 1. would be done at parsing time, 2. would require typing
Regarding the cc above:
Seriously, the parser does not know scopes. End of the story.
Indeed, the parser does not know scopes, and the fact that the %key comes after the expression is a problem. At some time, I was imagining that we could require that notations respect the invariant that expressions are always well-parenthesized for some class of parentheses including ( / ), [[ / ]], etc. Then, we could apply a first phase of parsing which detects the ( ... )%foo or [[ ... ]]%foo blocks and have the scopes parsed in the appropriate entry.
Hardwiring a notion of matching parentheses in the parser is probably worthwhile, since, after all, it is also an important aspect of how human eyes parse expressions. And there are only a few standard notations around which do not respect parentheses (e.g. the Dirac notation [b〉, but even for this notation, we could find a way to accept it in a system that hardwires a notion of matching parentheses).
No error is given at notation declaration time, but it does not (and it can't possibly) work.
Apparently a regression. In 8.11, there was an error. Thanks for telling.
Incidentally, it would be useful to collect somewhere all the issues involved in separating parsing from typing!
Yes, absolutely, that's the point of the exploration I'm working on.
@Hugo Herbelin in general it is not possible, IMO, to use scopes in the parser. This is what the example of Jason shows us: it is not just that the scopes are postfix and that we parse using an LL engine.
If you parse foo x, the scope of x is (or may be) given by the metadata attached to foo. You would need the parser to do the globalization phase (eg carry a context of bound variables, access the nametab...) to figure out that foo is a global constant, then access the Arguments database for argument scopes attached to foo, and finally pick the right non-terminal for parsing x based on the scope active on the argument, if any.
This is really too much for a parser.
@Enrico Tassi : Good point. Having the parser dependent on implicit scopes would indeed require further ideas.
And even with those extensions, the parser wouldn't understand typing, so a polymorphic function is enough to defeat it...
internalization doesn't understand polymorphism either
@id T x won't use T for the scope of x
I know, that's the thing I was hinting at... it doesn't seem scopes can solve this, and it comes up in let x : list N := map (fun x => x + 1)%N xs in ...
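(A minimal sketch of that annoyance, assuming only the stdlib's List and NArith; without the %N the + would, I believe, be read as the nat one and the definition would not typecheck.)
Require Import List NArith.
Definition bump (xs : list N) : list N :=
  map (fun x => x + 1)%N xs.  (* the scope of map's functional argument is not N_scope *)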
Are delta resolvers about Parameter Inline?
nope
(well, they do contain the inlining information but it's somewhat irrelevant)
they're just a way to associate kernel names, so they're used every time you have functors and include
as the name implies, they sort-of make more δ-conversions between constants hold
and Module N := M., no?
also, yes
everything that introduces name aliasing
So this is how you can unify inductive types across includes? (Also Qed-opaque lemmas and axioms?)
yes
I was about to say that actually
And this requires typing somehow?
the litmus test to check for aliasing is inductive types, because there is really no other way to make two inductives equal
no, that's just a kind of association map
we can add mappings saying "the true value of this global entry is that other global"
needless to say, it's probably replete with soundness bugs
Jason Gross said:
So this is how you can unify inductive types across includes? (Also Qed-opaque lemmas and axioms?)
opaques and axioms get a definition
Module M.
Axiom x : nat.
End M.
Module N := M.
Print N.x. (* N.x = M.x *)
Compute N.x. (* = M.x *)
Oh, interesting, I was not aware that this is how they were implemented
I guess you can't do this with inductives and constructors because otherwise you couldn't use constructors for pattern matching
nb axioms are also equated by the delta resolver thing:
Goal N.x = M.x.
match goal with |- ?a = ?a => idtac end.
no, even inductive types cannot be aliased through definitions
there is a famous error in the kernel saying that this stuff is not implemented
so we need kernel support for inductive aliasing
@Gaëtan Gilbert that's another problem
this works because it was designed for this to work, and not through delta resolvers
Ltac doesn't see the delta resolvers, just constants
but because we compare canonical names, tadaa
(this is a HUGE PITA btw)
BTW is that definition a real one, or is it a special alias that Print is lying about?
Strangely enough, dnets seem to consider such definitions as automatically hint opaque, and IIRC some commands (like Locate?) know they're special cc @Janno
there's the user equality and the canonical equality, they're only canonical-equal
env lookup uses user equality
transparent state lookup uses canonical equality
@Paolo Giarrusso for inductive types, a bit of both
the environment contains two different definitions which are the same up to a name substitution
given that we've already forgotten to substitute some fields of the inductive blocks in the past and that we have fancy checker bugs, I wouldn't bet much on the fact we actually have any spec for this
Hm, my example involved a plain axiom, a module, and an alias from Include (yes, you warned me)
(I am thinking about https://github.com/coq/coq/issues/7609 FTR)
@Paolo Giarrusso "why do you hit yourself with a hammer? because it feels so good when I stop"
I mostly only include module types!
I'll happily stop if you give me Scala-like mixins, or just give me a good way to interleave axioms and definitions in an interface.
the worst part of this request is that it probably doesn't even need any kind of kernel support
(not really excited to copy-paste the definitions in the implementation, assuming that even works; I've seen different bugs there)
the other discussion about Load-ing sections shows that there is a serious need for "mass abstraction"
I don't think it's that hard to design, but there are several competing proposals for this kind of feature
contrarily to some other devs I am not really afraid about redundant Coq features, e.g. CS vs. TC
If you have an idea I'd be interested, but I wasn't very hopeful: my first blocker when I tried to think it through is that you risk needing termination checking across functors
if you have a precise design in mind, please open a CEP
yes, positivity / guardedness is not modular
On this point I don't, but I'll keep that in mind
Re redundancy, the problem is the maintenance cost and feature interactions
well, we've been maintaining horrible stuff in the Coq code so a bit more isn't going to kill us
You joke about hammers, but it's hard to tell apart the good features from the bad ones :upside_down:
at least we should have an explicit marking of features à la Rust nightly
list of stable features := [] ?
nah I guess ssreflect’s pretty good until mixed with TCs :-D
my point is that there are a lot of small things that are 1. easy to implement 2. out of the kernel , that would make the life of our users much easier
for instance, the non-modularity of positivity / guardedness only matters if you want the abstraction to be statically type-checked
if you were to check at runtime that all instances are fine, who cares ?
but IIRC that’s not really how functors work, right? Still, I can fix that problem with manual desugaring, so pretyper or elpi might be able to as well
Functors are (un)fortunately statically checked
so, your salvation won't lie in functors
but I really don’t want the abstractions to leave traces in terms :-|
not sure what you mean
you could encompass a different kind of functor system
and I _can_ mix axioms and definitions in a module type, as long as I close and open a new one, every time I switch across the two…
where you have no guarantee that applying a functor to an argument will succeed
hmmmmm, this kinda hurts my ML-module-system sensibilities, but maaaybe?
what I meant was that replacing functors with “runtime” abstraction produces bigger terms
at some point you have to bite the bullet: ML is not MLTT
ah, yes
I _think_ MLTT has much more modular typechecking, except maybe inductives
but then if you had an external metaprogramming feature that looks like modules, tastes like module, but is the canada-dry of statically typed modules you wouldn't see the terms in the abstraction either
MLTT has much more modular typechecking
ah, sweet summer child
MLTT is catastrophical from the point of view of abstraction
you may depend on the computational content of a term without any way to control that
at least Agda does not normalize during guardedness checking
yes, I understand how translucent definitions work
but that's only the beginning
then you have universes
and as you mention, positivity / guardedness
universes and guardedness aren’t a modularity problem in Agda.
fair enough, but you pay for it
I don't think that people are happy to have to explictly quantify over universes
(and it's still non-modular in some sense)
agreed, by copy-pasting your map function or having to use (unsafe?) sized types
(zomfg: I've written everything using Set but I actually wanted to refer to Set1)
I just meant that Agda is (closer to) separate typechecking/compilation in Cardelli’s sense.
different set of problems, of course.
I hear the complaint
re Agda universes, they’re annoying, but idiomatic Agda always works, and I can’t say the same for Coq
but I can’t comment on those fairly since I haven’t yet really attempted to understand them in anger, I just call my universe colleagues
what do you have in mind?
"Agda is a proof assistant designed to prove the strong termination of the simply-typed λ-calculus"
I've never seen a convincing proof of scalability for Agda, so you cannot convince me that "idiomatic Agda always works"
(relatedly, I am actively looking forward to the WITS talk https://popl22.sigplan.org/details/wits-2022-papers/6/Using-Dependent-Types-at-Scale-Maintaining-the-Agda-Standard-Library)
I agree, sorry, pattern matching and definitional equality are not sufficient tactics, and performance isn’t acceptable. but I mean universes in Agda
there’s 3-4 rules and everybody understands them (and there’s forcing for datatypes, okay). Contrast with https://gitlab.mpi-sws.org/iris/stdpp/-/issues/80.
oh, but that's a problem with Coq stdlib
for the rest, you’ll have to ask Janno or Gregory, I just know somebody complains about universes once a week I think
agda universes are pretty similar to Lean universes right? except for prop predicativity
unfortunately I'm part of the people who eat universe poly on a daily-ish basis, and they're hellish
but the fully-explicit style of Agda doesn't scale much either
explicit universe polymorphism + algebraic universes + no special universe inference (beyond the usual I think?)
in Coq you can also be fully explicit (barring the poor implementation) and it's still not usable, even assuming we had support from the system
but as I said upfront: I don’t understand the universe problems we ran into Coq. Separation of concerns, you know.
oh, forgot: Agda has no implicit cumulativity.
(my personal taste about implicit cumulativity depends on the time of the day, at some point I am convinced it's critical, at other points I am convinced this is the worst idea ever)
it would be nice to have a flag to disable it
but we need to be able to put algebraics in instances first I think
(my educated guess is that your belief is conditioned on your current hat, i.e. whether you're using type theory or you're implementing it)
which hat has which belief?
√2/2 implementor + e^π user
totally irrational
@Paolo Giarrusso Related feature requests: From modname Include constants or some sort of selective include, so that when you are defining a module according to a module type, you can include the definitions from the module type without having to re-prove/re-define them. I'm actually now wondering how close you can get with sharing constraints plus submodules...
That is, https://sympa.inria.fr/sympa/arc/coq-club/2021-11/msg00016.html deserves further study.
Hm no, neither that nor Include constants is really enough; if you go this way, you might end up with MixML.
For instance, in the following (terrible) sketch of a POSIX interface, holes and definitions alternate, so your definitions are an _input_ to the body. (Please ignore the actual spec, I’ve never worked on POSIX specs). Splitting the definitions would be the logical next step.
Module Type posix_interface.
Parameter page_size : N.
Definition page_aligned (vaddr : N) : Prop := …
Definition page_aligned_addr : Type := { va | page_aligned va }.
Parameter page_mapping : page_aligned_addr -> mpred. (* mpred is the type of propositions of some separation logic. *)
(* Part-of separation-logic spec for mmap*)
Definition mmap_spec :=
\arg{va : page_aligned_addr} (* lots of other args *)
\post page_mapping va.
End posix_interface.
Module Type posix_interface_x86.
Include posix_interface(constants).
(* Define page_size *)
End posix_interface_x86.
Module Type posix_interface_linux.
Include posix_interface(constants).
(* Define page_mapping, covering OS state. *)
End posix_interface_linux.
Module posix_interface_x86_linux.
Include posix_interface_x86(constants).
Include posix_interface_linux(constants).
(* Making sense of this combination requires mixing linearization; intuitively, it’s sound because the definitions aren’t conflicting, but in fact we can just detect the diamond and remove duplicates. *)
(* I’m ignoring architecture-specific components of page mappings for simplicity. *)
End posix_interface_x86_linux.
the above example is slightly made-up, but I do run into similar (dependent) problems.
so, you see, when I joke that I’d want Scala in Coq, I’m not 100% joking. OTOH I have an encoding of the above, and that coq-club post might make it better…
I think I managed to separate the two phases in the module system. Now I have to deal with plugins, to complete the POC.
Wondering why we mark Require in modules as non-export, and then we export on the side (so not storing the action in a libobject): https://github.com/coq/coq/blob/master/vernac/library.ml#L383
@Gaëtan Gilbert do you know?
because Require is Anticipate, so if you put the export in the object it will get reordered
Module M.
Definition x := 0.
Require Export X. (* shadows x *)
End M.
Import M.
(* x should be X.x *)
So why don't we always split in two objects? (i.e. removing Export handling from inRequire)
not sure how it works with end_library though
Would it be correct?
maybe
Ok, thanks, I'll try. Because testing if we are in a module currently depends on the typing env...
I can test it by other means of course, but if we can just remove the branching, it would make the code simpler.
sounds good
https://github.com/coq/coq/pull/15347
Btw I wonder how these Anticipate objects deal with commutations with commands changing the loadpath...
it's resolved before adding the object
@Hugo Herbelin Small question about where notations: can they introduce parsing rules? I thought not, but they call syntax extensions: https://github.com/coq/coq/blob/master/vernac/metasyntax.ml#L1688
Removing this line doesn't seem to make stdlib build nor test suite fail.
Maxime Dénès said:
I can test it by other means of course, but if we can just remove the branching, it would make the code simpler.
turns out to fix a bug too ;)
Maxime Dénès said:
Hugo Herbelin Small question about where notations: can they introduce parsing rules? I thought not, but they call syntax extensions: https://github.com/coq/coq/blob/master/vernac/metasyntax.ml#L1688
They don't introduce new parsing rules (the inSyntaxExtension registration is anyway done after parsing of the declaration and cancelled after interpretation of the declaration). Unfortunately, I don't remember why I added this registration. It seems to be a no-op. You may try to remove the line.
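(For reference, a hedged example of the kind of where notation discussed: the grammar rule usable inside the declaration comes from the Reserved Notation, and the where clause only attaches the interpretation, which seems consistent with the remark that no new parsing rule is introduced there.)
Reserved Notation "n <<= m" (at level 70).
Inductive myle (n : nat) : nat -> Prop :=
  | myle_n : n <<= n
  | myle_S : forall m, n <<= m -> n <<= S m
  where "n <<= m" := (myle n m).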
Good news: I have a POC that is able to parse the standard library. In the coming days, I'll sketch a plan with an overview of the required patches.
Pity that we don't have a globalization phase that is independent from typing, though. It would have been nice for IDEs to be able to quickly know how to resolve references, without fully executing a document.
Looking forward to the update!
Maxime Dénès said:
Pity that we don't have a globalization phase that is independent from typing, though. It would have been nice for IDEs to be able to quickly know how to resolve references, without fully executing a document.
Hugo and I were doing some work to that end, for coq-layout-engine, so certainly we can make things a bit more usable.
Oh! Can you say a bit more about what you were working on?
Maxime Dénès said:
Pity that we don't have a globalization phase that is independent from typing, though. It would have been nice for IDEs to be able to quickly know how to resolve references, without fully executing a document.
I know two offenders. Notation scopes attached to inferred types; the code for inductive types / records (which mixes both phases).
For the latter one I do have some code in Elpi which does just the globalization. We could see if this can be used to separate the phases (I don't cover all cases in my code, eg mutual inductives or records with notations)
About the scopes, I really don't know how much this is used in practice. IIRC scopes attached to types are not very reliable, since types are compared syntactically. I would not be too surprised if most of mathcomp would work fine without that feature.
Re notation scopes for inferred types, that's used as soon as you write Definition succN n : N := n + 1. … succN 0. In Iris canonical structures, that's used quite a bit, eg by Bind Scope with bi_car (where bi_car projects the carrier from a canonical structure); tho that line is a recent addition.
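(A hedged, self-contained variant of Paolo's succN example, with explicit annotations so it stands alone; the argument scope of succN comes from N being bound to N_scope, so the literal below is read in N_scope even though nat_scope is the default.)
Require Import NArith.
Definition succN (n : N) : N := (n + 1)%N.
Check succN 0.  (* 0 is interpreted in N_scope via succN's inferred argument scope *)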
Maxime Dénès said:
Oh! Can you say a bit more about what you were working on?
Some time ago I wrote a new printer prototype à la Isabelle [with semantic information], still in early stages [I did a talk here a couple of months ago], but we have similar needs. Hope I can do a release soon, but the workload is not looking good.
Interesting. Did it not require executing the document?
Depends on the information, in some cases you can do with parsing, in some cases you need to run full pretyping, but usually you want only a partial run of the whole elaboration as you indicated
How much of this separation did you work out in your project? Did you split elaboration up?
Enrico Tassi said:
IIRC scopes attached to types are not very reliable, since types are compared syntactically.
Types attached to scopes reuse the mechanism of "classes" used for coercions, and it is not strictly purely syntactic (which in turn means that it requires typing...). I don't know if this is what "not syntactic" would mean for you, but could the computation of argument scopes stepwise unfold head constants until finding a constant which is known, so that, for instance, Definition f (a: id nat) := a. assigns nat_scope to a, rather than concluding that id is not bound to a scope and assigning no scope to a? In this case, we could make a PR to support this refinement.
Otherwise, am I right that, to globalize through a notation whose interpretation is unknown (because it is typing-dependent), knowing which ident subterms of the notation are bindings would be enough, or is that too naive? Conversely, what are the typical examples where the interpretation of a notation needs to be known to globalize references?
Being smarter (eg unfold etc) would be possible but both expensive and would definitely make the phases we may want to separate more entangled. What I had in mind when I wrote "unreliable and syntactic" is that in the context of MC, Definition f x := x + 1 may infer for x the type ZMod.sort int_ZmodType, which is convertible to int but not syntactically equal, hence I bet no scope is automatically attached to x. Imo this weakness is a good thing, since we have some hope that globalization and typing can be decoupled.
If we had to touch that code, I'd suggest to attach scopes to arguments only if the type is given by the user, and stay purely syntactic (so that we can globalize without typing). So Definition f (x : int) := x + 1 would get the scope, because globalization is enough to get your hands on int. Similarly Implicit Type x : int would be taken into account for that (without typing anything).
Before arbitrating whether argument scopes should be more or less syntactic, do you have an example where globalization of references breaks currently?
I don't have a running example because we don't have a PR which tries to just parse and globalize a file, but the example of Jason seems to do it:
Definition f x := x + 0. (* x inferred to be of type nat, hence nat scope *)
Notation "'a" := foo : nat_scope.
Notation "'a" := bar : bool_scope.
Check f 'a. (* without typing `f` there is no scope attached to its argument *)
So, if you use just globalization to provide the "jump to definition" feature in a user interface, when you click on 'a do you jump to foo or bar? If the scope was not depending on typing, eg given explicitly via Arguments, then globalization would be enough to pick the right notation interpretation (one would still need to execute some parts of the Arguments command).
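(A hedged sketch of the "explicitly via Arguments" alternative Enrico mentions: the scope is written down with the Arguments command, so a tool only needs globalization, resolving twice and reading its Arguments metadata, to know the scope; here it merely coincides with what inference would give, but it no longer has to be inferred.)
Definition twice (x : nat) := x + x.
Arguments twice x%nat.
About twice.  (* the nat_scope on x is now declared, not only inferred *)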
Overall the idea is to separate the processing of the document into phases of increasing complexity and cost. Parsing is the first one, and lets you already give useful feedback to the user. Globalization comes next and would give you nice stuff like jump to definition, or semantic syntax highlighting (bound variable x different from global constant x). All the rest (typing, tactic execution, ..) would give you everything, but it is much more expensive.
Enrico Tassi said:
Overall the idea is to separate the processing of the document into phases of increasing complexity and cost. Parsing is the first one, and lets you already give useful feedback to the user. Globalization comes next and would give you nice stuff like jump to definition, or semantic syntax highlighting (bound variable x different from global constant x). All the rest (typing, tactic execution, ..) would give you everything, but it is much more expensive.
So, in some sense, the example says that the globalization could itself be split into two phases. A phase for global references with relatively low complexity and cost, and a more expensive phase for disambiguation of notations. Am I right?