One of the most common hard-to-solve issues that comes up in the Iris community is unsatisfiable universe constraints for template universes -- see https://gitlab.mpi-sws.org/iris/stdpp/-/issues/80 and its backlinks, such as https://gitlab.mpi-sws.org/iris/stdpp/-/issues/134 and https://gitlab.mpi-sws.org/iris/stdpp/-/issues/207. People end up proposing desperate measures like copying significant parts of the Coq stdlib just to get a fresh universe parameter.
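For readers outside the thread, the failure mode can be sketched in a few lines (a hypothetical example, not taken from the linked issues; the class name is illustrative):

```
From Coq Require Import List.

(* A class abstracting over type constructors; the [Type -> Type]
   parameter lives at fixed universes. *)
Class Functor (F : Type -> Type) :=
  fmap : forall {A B}, (A -> B) -> F A -> F B.

(* Instantiating it with the template-polymorphic [list] equates list's
   template universe with the fixed universe above; that is a global
   constraint that every later use of [list] in the image must satisfy. *)
#[local] Instance list_functor : Functor list :=
  fun A B f l => map f l.
```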

Is there any hope of progress on this issue in the near future? Universe polymorphism was added many years ago, but since then there has been no visible progress on making core stdlib definitions like `list` or `relation` polymorphic. I'm not aware of anything we could do in Iris/stdpp to avoid these issues (other than "do not use Coq standard types", I guess -- but obviously we want our typeclasses to work with standard lists etc.).

Cc @Robbert Krebbers @Andrew Appel

I don't think this can be solved at the "Coq project" level - I think you want this to be solved by Coq devs? Then perhaps I should move this to #Coq devs & plugin devs?

there have been discussions of moving Stdlib outside Coq, is that relevant here?

Are there any known issues with "just" replacing all template polymorphic types with universe polymorphic in the stdlib? Has anyone actually attempted such a fork?

there have been discussions of moving Stdlib outside Coq, is that relevant here?

I am not sure that's relevant as long as the version of stdlib that is moved out of Coq is not universe polymorphic.

Surely one could roll custom universe polymorphic definitions of lists, options, products, sums, sigma types, etc in std++ or a fork of stdlib, but then we would be incompatible with the rest of the world.

So, to make sure that Coq projects can interact using at least the most basic data structures, we need a common stdlib that is universe polymorphic.

Mario Carneiro said:

Are there any known issues with "just" replacing all template polymorphic types with universe polymorphic in the stdlib?

Theoretically, not really. Practically, yes. In a universe polymorphic setting, universe levels are not just constants; they are variables that actually get passed as arguments to any sufficiently generic function (e.g., a list recursor). Such arguments do come with a cost. For example, a few weeks ago, I encountered a function (in Mtac2, if I remember correctly) that was receiving 888 universe variables as arguments. This means that the system (or the user) has to fill these 888 arguments whenever the function is invoked. Moreover, these arguments presumably add tens of thousands of universe constraints wherever the function appears, which the system has to take into account one way or another, for consistency reasons.

The thing is, there is no good reason for such a simple function to have that many universe variables as arguments. The user who wrote that function should have carefully reviewed all the involved universe variables and unified those for which it did not make sense to float independently. And we now get to my point. You cannot take an existing development (especially not the standard library, on which almost all other developments depend) that was developed with a monomorphic mindset and blindly make it polymorphic. The combinatorial explosion of universe variables will bring the system to its knees.
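A tiny sketch of the blow-up (hypothetical names, nothing specific to Mtac2): even a small definition picks up one universe variable per `Type` binder under universe polymorphism, and generic code composes these.

```
Set Universe Polymorphism.
Set Printing Universes.

(* Three independent [Type] binders already yield three universe
   variables on this one definition; nesting such combinators makes the
   count grow quickly. *)
Definition swap_fst (A B C : Type) (p : A * B) (c : C) : B * A :=
  (snd p, fst p).

About swap_fst.
```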

As a developer of Mtac2 and the kind of optimist who has repeatedly tried to fully annotate all universes in (small) Coq libraries, I can report with some certainty that the idea of systematically reviewing universe annotations for any nontrivial library (or even writing them manually from the start) is ill-conceived. The list of issues with this approach is so long I cannot keep it in my head. Here's what I can easily remember:

- You cannot annotate notation that contains universe polymorphic terms. This alone disqualifies the approach for many libraries.
- This also means you cannot find out what universes are hiding in notation unless you disable the printing of notations. (But you'll need `Set Printing All` anyway to inspect universes.)
- There is no way to specify individual universe arguments without giving `_` for all the remaining universes. And universe arguments are based purely on position, despite the fact that they have names. This is a nightmare during development and also generally for maintenance.
- You basically have to annotate either everything or nothing (at least in function signatures). If you skip one intermediate definition you lose control over the arity, which makes it harder to annotate downstream definitions. And you'll inevitably wonder if the unannotated function is the one that messed up your universe constraints.
- Coq has many places where types appear implicitly, and it will usually stuff these places full of fresh universe variables. Return types of match expressions are the worst culprit here, and they usually require manual annotations to avoid floating universes.
- I have had to learn repeatedly that I am not smart enough to actually annotate all my definitions. They may work fine with 888 automatically generated universes, but if I go in and perform some "obvious" optimizations I end up with inconsistencies. Debugging universe inconsistencies in terms with hundreds of universes is not particularly easy.
- There have been so many bugs with universe polymorphism that it is entirely unclear if a failing test case in a universe polymorphic development is a problem with the development or with Coq.
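For illustration, the positional-argument problem from the list above looks roughly like this (hypothetical definition):

```
Set Universe Polymorphism.

Definition const2@{u v} (A : Type@{u}) (B : Type@{v}) (a : A) (b : B) : A := a.

(* To pin only [v], [u] must still be given as a placeholder; the
   universe instance is matched by position, not by the names [u], [v]. *)
Check const2@{_ Set}.
```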

Out of curiosity, how does Lean solve this problem?

Robbert Krebbers said:

there have been discussions of moving Stdlib outside Coq, is that relevant here?

I am not sure that's relevant as long as the version of stdlib that is moved out of Coq is not universe polymorphic.

Surely one could roll custom universe polymorphic definitions of lists, options, products, sums, sigma types, etc in std++ or a fork of stdlib, but then we would be incompatible with the rest of the world.

So, to make sure that Coq projects can interact using at least the most basic data structures, we need a common stdlib that is universe polymorphic.

Given what Guillaume says about system performance/stability, I'd say chances are slim of getting a universe-polymorphic Stdlib unless Stdlib's development is separated from Coq's. The polymorphic Stdlib would likely need to be standalone and optional for some time before integration into the "real" Stdlib, easiest to do in a separate release cycle.

Robbert Krebbers said:

there have been discussions of moving Stdlib outside Coq, is that relevant here?

I am not sure that's relevant as long as the version of stdlib that is moved out of Coq is not universe polymorphic.

Well, the fact is that Coq developers are mostly not stdlib developers. This means we are doing a terrible job at encouraging and even incorporating stdlib contributions, and thus its development has almost stalled for maybe a decade. Some of us think that making the stdlib an actual library, rather than a neglected subpart of Coq, and promoting it and its subcomponents of various quality (some having better alternatives in the Platform), could help foster its development. There is mostly an agreement on the stdlib issue; we still have to agree on a concrete path forward.

Robbert Krebbers said:

Out of curiosity, how does Lean solve this problem?

As far as I know (but I might well be wrong), when you write `Type`, you are actually writing `Set`. And if you need the equivalent of Coq's `Type`, you need to be completely explicit with respect to universe levels. In particular, the system will never infer anything. This prevents the combinatorial explosion, but it means the user has to do a lot more work.

I have also been using explicit universe annotations in some Coq developments (for instance in that one, because we are flirting with inconsistency) and I can only agree with @Janno on the current terrible state of developing universe-polymorphic code in Coq.

Even when the number of universe variables remains manageable (roughly fewer than 10 variables bound at toplevel), Coq does not yet provide suitable tools to develop universe polymorphic proofs with tactics (e.g. easily inspecting the local universe graph of a proof, controlling whether new universes can be added or whether they should be unified/minimized as much as possible, forcing minimization in the middle of a proof).
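For reference, about the closest thing available today is dumping the entire local graph mid-proof (a minimal sketch):

```
Set Universe Polymorphism.

Lemma lift@{u v | u <= v} (A : Type@{u}) : Type@{v}.
Proof.
  Show Universes. (* prints the universe context accumulated so far *)
  exact A.
Qed.
```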

On the brighter side of things, I do hope that the ongoing work by @Matthieu Sozeau on algebraic universe expressions will help write manual bounds on universe polymorphic code that better convey the intent of the developer (algebraic level expressions are one thing that both Lean and Agda feature but Coq does not really, or at least algebraic expressions cannot be used everywhere). We should gather the different issues and corresponding feature requests somewhere (for instance there is this issue tracking the matter of universe binders in notations).

Robbert Krebbers said:

Out of curiosity, how does Lean solve this problem?

I think the different features are crucial:

| | algebraics in instances | cumulativity |
|---|---|---|
| Coq | no | yes |
| Lean | yes | no |

not having cumulativity means polymorphic universes are much less free; instead of having a universe minimization heuristic to turn `<=` constraints into `=`, you can just run unification

algebraics in instances are important because without them it's not really possible to disable cumulativity. Consider for instance `prod@{u v} : Type@{u} -> Type@{v} -> Type@{max(u,v)}`; then a 3-tuple is `prod@{x y} P (prod@{a b} Q R)`. With cumulativity you get a constraint `max(a,b) <= y` (equivalently `a <= y /\ b <= y`); without it you get `max(a,b) = y`.
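The same computation can be replayed in Coq (a sketch with a hand-rolled polymorphic product, since the stdlib `prod` is template-polymorphic rather than universe-polymorphic):

```
Set Universe Polymorphism.
Set Printing Universes.

Inductive pprod@{u v} (A : Type@{u}) (B : Type@{v}) : Type@{max(u,v)} :=
  ppair : A -> B -> pprod A B.

(* In the 3-tuple type below, cumulativity lets the inner [max(a,b)]
   sit below the outer component's universe via a <= constraint; a
   system without cumulativity would have to force an equality there. *)
Definition triple (P Q R : Type) : Type := pprod P (pprod Q R).
Print triple.
```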

Thank you for explaining the differences between Coq and Lean.

So with the work by @Matthieu Sozeau Coq would move in the same direction, having algebraic level expressions instead of cumulativity? Some questions:

- Is this something that allows for a graceful way of porting libraries, or will we at some point get a split between libraries that use template universe polymorphism and those that use algebraics?
- Do algebraics have other known problems in Lean? In particular, @Guillaume Melquiond says it is more work; is that mostly for library developers (say, the developers of std++ and Iris) or also for users of these libraries?

having algebraic level expressions instead of cumulativity?

we would have both

we would have both

Could you elaborate on the impact of that on libraries?

Particularly, can libraries use the two together? Or do they have to choose between one or the other? And if they use both, will they get the union or the intersection of the problems of both approaches?

Guillaume Melquiond said:

Robbert Krebbers said:

Out of curiosity, how does Lean solve this problem?

As far as I know (but I might well be wrong), when you write `Type`, you are actually writing `Set`. And if you need the equivalent of Coq's `Type`, you need to be completely explicit with respect to universe levels. In particular, the system will never infer anything. This prevents the combinatorial explosion, but this means the user has to do a lot more work.

AFAIK they are polymorphic by default, so it's not a lot of user work. The only place where user work is needed is when lifting from one universe to another, as they don't have cumulativity. But I expect that to rarely be a problem for things like Iris...

All of the fancy features that @Janno and @Kenji Maillard mentioned as missing in Coq don't seem to be needed in Lean. Or maybe I am just misunderstanding.

@Mario Carneiro should be able to fill us in on what Lean does. :)

Karl Palmskog said:

Robbert Krebbers said:

there have been discussions of moving Stdlib outside Coq, is that relevant here?

Surely one could roll custom universe polymorphic definitions of lists, options, products, sums, sigma types, etc in std++ or a fork of stdlib, but then we would be incompatible with the rest of the world.

So, to make sure that Coq projects can interact using at least the most basic data structures, we need a common stdlib that is universe polymorphic.

Given what Guillaume says about system performance/stability, I'd say chances are slim of getting a universe-polymorphic Stdlib unless Stdlib's development is separated from Coq's. The polymorphic Stdlib would likely need to be standalone and optional for some time before integration into the "real" Stdlib, easiest to do in a separate release cycle.

that's very bad news for the future of iris and std++ then :(

that's very bad news for the future of iris and std++ then :(

It's particularly bad for users that wish to combine Iris and std++ with any other Coq library.

For example, we are extremely happy that VST has recently been redeveloped on top of Iris, but @Andrew Appel pointed out that when porting CertiGraph to VST-Iris he immediately ran into universe problems. (See https://mattermost.mpi-sws.org/iris/pl/4puh45xgoidktmcioxzec1tguo) It's entirely unclear what to do here; we don't have the slightest idea how to move forward.

There are many questions for which we don’t have an answer yet, but the current status of my work is:

- the new algorithm can handle `max`, `+n` and arbitrary constraints between them fine. If we just switch algorithms, staying with a monomorphic/template-poly stdlib, we get a worst-case 50% increase in the bench right now, but that's really concentrated on `setoid_rewrite` being very slow in some instances. We are investigating why, but it seems at least in some cases it is due to many useless successes when unifying global universes (at least in math-classes), and in general the implementation of the union-find in the new version being less performant than the old one (but it solves a more general problem). The jury is still out on whether and how we can solve that issue. Moving this development to univ-polymorphism might solve this particular issue entirely, as there would no longer be global universes bounding many others that we would try to unify, resulting in merges of very large equivalence classes.
- the new algorithm should provide a much nicer interface for writing polymorphic code, and a more principled minimization algorithm. It would basically amount to doing Fourier-Motzkin-style quantifier elimination, to get rid of variables the user did not explicitly quantify on while maintaining validity of the constraints. To test the usability, we definitely need to develop an alternate stdlib with univ-poly definitions. Actually, there is also sort polymorphism, which was recently integrated, and together with @Nicolas Tabareau and others we will try to make a sort-poly and univ-poly stdlib/prelude and see where that gets us. Expect some news about it in the coming months.
- regarding porting of user developments, it is unlikely that one will just do `Set Universe Polymorphism` and have everything work out nicely in all cases; indeed, there are surface-level issues to be solved first (notably binding of universes in notations/metalanguage). However, I hope that we can set defaults in the monomorphic mode so that libraries that stay monomorphic can still be checked mostly unchanged. For example, the polymorphic cumulative list type `list@{i} nat` has basically the same conversion rule as the template-poly lists (`list@{i} A` is convertible to `list@{j} A` for any `i` and `j`), so a priori not much should need to change when working with lists. We'll have to try this on top of the ported stdlib to be sure.
- regarding restricting cumulativity, it would certainly be possible; however, that would preclude the above possibility of maintaining some compatibility with template-poly developments, and force us to add many more annotations than without it. A bidirectional rule on application of global references, saying that universe parameter `i` should be filled with the universe inferred for a later binding `(A : Type@{i})`, could be a good help to avoid the proliferation of universes.
- regarding performance: it is clear that we will have many more bindings and universes everywhere, but we can hashcons them, substitute lazily, etc., just as Lean and Agda do. One hope is that by working with much smaller universe graphs in polymorphic definitions, we will also get benefits.
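The list example from the porting point might look roughly like this (a sketch, not the actual planned definition):

```
Set Universe Polymorphism.
Set Polymorphic Inductive Cumulativity.

(* A cumulative polymorphic list: [plist@{i} A] and [plist@{j} A] are
   convertible whenever both are well-typed, mimicking the conversion
   behaviour of today's template-polymorphic [list]. *)
Inductive plist@{i} (A : Type@{i}) : Type@{i} :=
| pnil : plist A
| pcons : A -> plist A -> plist A.
```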

One thing that might be worth mentioning, too: part of the reason why we get some constraints "at a distance" between global universes comes from the fact that function type cumulativity is only equivariant on domains, i.e. for $A \to B \le A' \to B'$ to hold we must have $A \cong A'$ ($\le$ is cumulativity, $\cong$ is conversion). This is in particular why η-expanding can mitigate the issue, because it is a poor man's way of getting contravariance on the domains.

So in principle, having contravariant instead of equivariant domains for function types might to some extent alleviate the issue of interaction at a distance between different libraries constraining the universes of, say, `list`. I am not sure what the impact of such a change would be on performance (fewer equality constraints in general means less universe minimisation and thus bigger universe graphs), or how much exactly it would help in solving the issue at hand. But I feel it might be a direction worth trying too.

`list` is a template universe though, so none of this equivariant vs contravariant should even matter here until it is switched to being universe polymorphic, isn't it?

eta-expanding `list` also means typeclasses will not just work any more on values of type `list T`, so that does not look like a workable solution...

The problem is that as soon as one adds `l <= l'` one must check if `l' <= l` is already there to potentially merge them; not sure it would help in this particular case. Also, the problem comes from comparing lambda-abstractions, not products…

@Matthieu Sozeau there is also the notion of a constant with an arity, which could avoid mentioning inferable universe levels and thus prevent elaboration from generating fresh universe levels for them.

Coming back to the sort- and universe-polymorphic stdlib/prelude that we plan to develop during the next CUDW: it would be great if people discussing in this thread could join (either in Nantes or remotely). I think a successful experiment will require developing both some infrastructure on the OCaml side and porting on the Coq side.

It would be great if people discussing in this thread could join (either in Nantes or remotely).

I would be interested to be involved, but July 1-5 is also the week before the POPL deadline and the end of the semester, so I am not sure how much time I will have then (likely none).

Robbert Krebbers said:

that's very bad news for the future of iris and std++ then :(

It's particularly bad for users that wish to combine Iris and std++ with any other Coq library.

For example, we are extremely happy that VST has recently been redeveloped on top of Iris, but Andrew Appel pointed out that when porting CertiGraph to VST-Iris he immediately ran into universe problems. (See https://mattermost.mpi-sws.org/iris/pl/4puh45xgoidktmcioxzec1tguo) It's entirely unclear what to do here; we don't have the slightest idea how to move forward.

There seem to be quite a few people in favor of splitting off Stdlib from the Coq repo (me included). I meant that such a split has a chance of making things better for std++/Iris in terms of universes, since there would be more flexibility in what Stdlib can/will do.

There seem to be quite a few people in favor of splitting off Stdlib from the Coq repo (me included). I meant that such a split has a chance of making things better for std++/Iris in terms of universes, since there would be more flexibility in what Stdlib can/will do.

What I gather from the above discussion is that there is no point in such a split at this moment: if we use current universe polymorphism in a fork of stdlib, it will cause all kinds of problems. We have to wait for Matthieu's current work on algebraic universes to be completed. Or am I missing something?

(That said, I don't oppose splitting off stdlib.)

Yep, my work on algebraics and work on surface-level issues for universes and sort-poly

at least if we split off Stdlib, we have the possibility of an "official universe-polymorphic fork" of Stdlib that could live in opam and be used by whoever wants it, installed on top of the `coq-core` package

we can even distribute it as part of the Coq Platform

that sounds like it would split the ecosystem since you can no longer compose developments using the regular stdlib vs the polymorphic ones

so this will likely make it even harder than it is today to put together different Coq developments -- today you "just" have to resolve all notation and universe conflicts; then you'll also have the possibly insurmountable roadblock of libraries not agreeing on *any* type from the stdlib

is there another option *other* than making stdlib polymorphic that can help here? something std++ could do to make it not pin universes quite as much? making selected parts of the stdlib polymorphic?

I used to think this is only about `Inductive` as that's where template universes come from, but now we also have trouble that involves `relation`, so clearly my own mental model here is still very incomplete.

Robbert Krebbers said:

Out of curiosity, how does Lean solve this problem?

As Gaëtan Gilbert says, the important difference is the existence of algebraic universe expressions and the lack of cumulativity, but there are also some other factors. I definitely disagree with Guillaume Melquiond that Lean users need to do more work to handle universe levels; in fact it's just the opposite: Lean users almost never write a universe level explicitly (in either input or output position), and it is surprising to me that the system works as well as it does. Certainly you never have to put universe levels in notations, and I can't imagine why you would ever want to do this.

There is a library design principle that a type constructor should have inputs with independent universes and an output that is determined from the inputs, i.e. `Type u -> Type v -> Type (max u v)`, not `Type u -> Type u -> Type u` or `Type u -> Type v -> Type (max u v w)`. This is not always possible; notable counterexamples include the type `Ordinal` of ordinal numbers, which can't be fixed-universe, and `Category.{u, v} (Obj : Type u)`, which can have morphisms in a different universe `v`. In these cases you will frequently need to specify the universe explicitly because type inference can't determine it, but luckily these are exactly the areas where users are already clued in to the fact that universe reasoning is important (for mathematical reasons), and in all the simple cases like `Nat` and `List` you never have to think about it.

At a technical level, the way it works is that when you write

```
def foo (A : Type _) : Type _ := ...
```

(note: the underscore is required because `Type` alone means `Type 0`), universe metavariables are introduced for each `_`, and the body of the function produces new metavariables and universe constraints, solving some of the metavariables; once elaboration is complete, any remaining metavariables are turned into universe parameters to the definition. I think this procedure is done twice, once for the signature and then again for the body, and any universe parameters generated in the second stage are rejected. This means that a definition will almost never have hundreds of parameters inferred automatically, because it's just coming from the type signature, so it would have to be a very complicated type. It does occasionally happen that universe inference decides that a function will have a type like `Type (max u v) -> Type (max u v)`, but we use linters to catch this because it causes unification issues.

Another technique used in mathlib on top of the above is `Type*`, which is "`Type u` where `u` is a fresh universe parameter". This is used for types in input position like `def foo (A : Type*) := ...`, and it prevents unification from accidentally deciding that `u` will be `max ?v ?w` or `0` and instead produces a type error. (This is especially common with `(A : Sort _)`, since then if you e.g. use elimination on a Prop, unification will say that the `Sort _` will be turned into `Sort 0` aka `Prop`, which silently makes the theorem weaker than it was supposed to be.)

Regarding comparing algebraic universes to universe constraint graphs, I think it doesn't make much difference in terms of overall complexity, but it doesn't make a whole lot of sense for the user to be specifying the universes that would normally be algebraic combinations of others. That is, if instead of `Prod : Type u -> Type v -> Type (max u v)` you have `Prod : Type u -> Type v -> Type w where w = max u v` (or `w >= max u v`), it is now more annoying to specify `Prod.{u, v}` since you have to give `w` also. But if you do have cumulativity, then you definitely need some mechanism to eliminate all the dangling metavariables when you are done, and they should absolutely not "leak" beyond the definition body into the type, because (1) that kills parallelism and (2) that produces weird issues downstream, either because definitions are silently building up hundreds of parameters or because universes become unnecessarily constrained. Any such leakage should be explicitly opted into by the user (IMO), and unexpected leakage should be surfaced as a type error.

I think any plans for an "atomic" switch to universe-polymorphic Stdlib are going to fail for technical reasons explained by Guillaume. A standalone opam package with universe-polymorphic Stdlib at least gives you the possibility of seeing what happens with your development with polymorphism, and enables research on the topic

(note: the underscore is required because `Type` alone means `Type 0`)

ah, so @Guillaume Melquiond was right about this, sorry for my incorrect statements above

thanks @Mario Carneiro for the detailed information!

Karl Palmskog said:

I think any plans for an "atomic" switch to universe-polymorphic Stdlib are going to fail for technical reasons explained by Guillaume. A standalone opam package with universe-polymorphic Stdlib at least gives you the possibility of seeing what happens with your development with polymorphism, and enables research on the topic

yeah, it can definitely be useful for experimentation, but not to solve the issues that production-level libraries like std++ and Iris are experiencing already.

it doesn't have to be "atomic", does it? `option` and `list` can be made universe-polymorphic independently of each other, I would hope.

Yep, they can be made independent. If you factor in the idea Nicolas mentioned of having 'implicit' universe arguments as well, `option`, `list`, and all covariant containers will behave basically the same as template-poly ones. Then if you use some polymorphic function/lemma in an otherwise monomorphic development, that will just fix that particular instance, so it should work pretty seamlessly.

The main difference univ-poly makes is that packed structures can be made polymorphic in the universe of their carrier field typically.

Re: Lean inference, it should be mostly the same here, with the same caveat that one might infer that an input could become an algebraic; but somehow we already tame that, because we couldn't allow it at all before, so variables carry a notion of whether they can be instantiated with algebraics or not (or not instantiated at all)

Thank you @Mario Carneiro for the extensive description of how it works in Lean. And good to hear that @Matthieu Sozeau's efforts are moving us in the same direction.

@Matthieu Sozeau Do you have a rough timeline for this project in mind? It would be good to get an idea of whether it is worth it for projects like std++ and Iris to investigate workarounds for Coq's current universes, or whether we had better wait until your work is done.

Thinking a bit more about this, I think the "most" interesting definitions in terms of universes we have in std++ and Iris are the following:

**Telescopes:** (see https://gitlab.mpi-sws.org/iris/stdpp/-/blob/master/stdpp/telescopes.v)

```
Inductive tele : Type :=
| TeleO : tele
| TeleS {X} (binder : X → tele) : tele.
```

**Large ordinals** (see https://gitlab.mpi-sws.org/iris/transfinite/-/blob/master/theories/algebra/ordinals/set_model.v)

```
Inductive Acz@{i} : Type :=
| Asup : ∀ A : Type@{i}, (A → Acz) → Acz.
```

Any idea how these would work out with algebraic universes?

@Mario Carneiro Do you use either of these types in Lean? Any experience with them?

`Tele` definitely wouldn't work out as well in Lean, because all of the embedded types have to be in the same universe. But we could define that (and use `ULift`), and in that case it would certainly have type `Type u`, putting it in the category of types that have to be used with explicit universes in most cases

Acz does exist in mathlib: this is docs#PSet, and it most certainly needs universes (it's a ZFC model after some quotienting)

I don't think algebraic universes change the definition of those types, but they may change how easy the types are to use without increasing the number of universe variables involved

e.g. a telescope of types in universes `a`, `b`, `c` can be `@{max(a,b,c)}` instead of needing `x >= max(a,b,c)` (and with the way inference works you probably actually get `x >= max(a,b)` for the sub-telescope and `y >= max(x,c)` currently)

Ease of use is pretty important for telescopes: We use them in many places where we have definitions that involve a variadic number of binders.

Let's ask our telescope experts @Janno and @Ike Mulder: Do you have an idea how this would affect the use of telescopes?

So Lean would also have trouble with something like the free monad? I know in F* it's a pain because it doesn't feature first-class universe polymorphism (meaning not prenex, like in Agda) or cumulativity.

all covariant containers will behave basically the same as template-poly ones.

To be clear to outsiders, I'd like to mention that this feature is independent from the use of univ/sort-polymorphic definitions. We could have *as of today* template-poly types that do not add global constraints when fully applied. With @Gaëtan Gilbert we discussed how to implement this and plan to work on this on the short-term. Hopefully it'd be a cheap way to drastically reduce floating constraints in an essentially backwards-compatible way.

(in particular, template-polymorphism is *already* essentially algebraic universe-aware, in the sense that you can apply template inductive types to parameters with an algebraic sort and it will just work (TM))

Théo Winterhalter said:

So Lean would also have trouble with something like the free monad? I know in F* it's a pain because it doesn't feature first-class universe polymorphism (meaning not prenex, like in Agda) or cumulativity.

Maybe calling it trouble is overstating it; most of the time you don't even need different universes, and when you do you can use `ULift` to bridge the gap.

But more or less this issue is already a problem when just defining a monad, because it has type `m : Type u -> Type v`, meaning that you can only use `m` on types in `Type u`, no more no less.

If you want to consider `m (m a)` then you run into problems when `m` is a variable.

in that case you would restrict `m` to be `Type u -> Type u`, but it is one place where there is a (surprising?) difference between using `bind` vs `join` in the definition of `Monad`.

Right, you would need `bind` to only depend on one universe, no?

@Pierre-Marie Pédrot

We could have *as of today* template-poly types that do not add global constraints when fully applied.

IIRC many problems actually arise when such types are *not* fully applied. In particular, a `Monad (M : Type → Type)` class will equate the universes of all instances `M`, see https://gitlab.mpi-sws.org/iris/stdpp/-/issues/80

(Jason Gross proposes a hack of eta expanding all instances, see https://github.com/coq-community/coq-ext-lib/pull/123#discussion_r783638620)

Would the work of you and Gaetan help here?

IMO this :point_up: issue sounds much worse than lean's issue that `Monad.{u,v} (M : Type u -> Type v)` means that `M` (i.e. a specific monad variable in context) only works on things in `Type u`, because of the non-local correlations it produces

@Robbert Krebbers Unclear: you'd still get troublesome global constraints for partially applied types, but there would be overall less constraints for the normal uses so less probability of conflicts altogether. Note that I tried at some point to η-expand on sight, but the actual implementation was trickier than it sounded.

IIUC from the bug report, you'd still get a global constraint for the monad statement but all other applied uses would not generate any constraints mentioning that global, so maybe you'd be fine?

If you can infer the problem from the diff of the typing rules, I can give them there.

Otherwise we can just hack the implementation and see where it goes. Clearly the new model would be more expressive (i.e. strictly more programs type-check) so it's a net gain in any case.

I agree that less universe constraints is clearly a net win. Whether it will solve our problems, we will have to see.

IIUC from the bug report, you'd still get a global constraint for the monad statement but all other applied uses would not generate any constraints mentioning that global, so maybe you'd be fine?

The problem is that many uses are not applied. It is not just the `Monad M` instance (for whatever variants of `Monad` and instances `M` like `list` and `option`), but also all the projections and derived definitions. At all of these places where an `M : Type → Type` appears as (implicit) argument, you somehow need to force an eta-expansion. Doing this by hand using some notation hacks (as proposed by Jason) frankly sounds horrible.

Mario Carneiro said:

`Tele` definitely wouldn't work out as well in lean, because all of the embedded types have to be in the same universe. But we could define that (and use `ULift`), and in that case it would certainly have type `Type u`, putting it in the category of types that have to be used with explicit universes in most cases

ah, I see -- that is good to know. So Coq is indeed solving a harder problem than what Lean does here. Being able to use telescopes without any universe annotations is nice (and we do use this in Iris, in our support for logical atomicity)

The problem is that many uses are not applied.

Sure, but still -- entirely avoiding any entanglement of partially applied uses with fully applied uses could still be a good intermediate step

Ralf Jung said:

Mario Carneiro said:

`Tele` definitely wouldn't work out as well in lean, because all of the embedded types have to be in the same universe. But we could define that (and use `ULift`), and in that case it would certainly have type `Type u`, putting it in the category of types that have to be used with explicit universes in most cases

ah, I see -- that is good to know. So Coq is indeed solving a harder problem than what Lean does here. Being able to use telescopes without any universe annotations is nice (and we do use this in Iris, in our support for logical atomicity)

Actually I just tried it and lean does a passable job of it with no universe annotations:

```
universe u
inductive Tele : Type (u+1) :=
| O : Tele
| S {X : Type u} (binder : X → Tele) : Tele
def Tele.S' {X : Type u} (binder : X → Tele.{max u v}) : Tele.{max u v} :=
Tele.S (binder ∘ ULift.down.{v})
set_option pp.universes true
#check Tele.S' fun _ : Nat => Tele.S' fun _ : Type u => Tele.O
-- Tele.S'.{0, max u_1 (u + 1)} fun x => Tele.S'.{u + 1, u_1} fun x => Tele.O.{max (u + 1) u_1} : Tele.{max u_1 (u + 1)}
```

The last `#check` line is using types in different universes, although `u_1` is probably an undesired universe variable here, playing a similar role to the coq universe on Tele that upper bounds all the universes of the input types

I think there are reasons to believe this support is somewhat ad hoc / brittle though, and I would expect the inequality based formulation to be more robust in this scenario. The core reason for this is that lean doesn't know what to do with level constraints of the form `max u v =?= max u ?x`, and there aren't really any good options for this either: you either make some approximate choice like `?x := v` or `?x := max u v`, or start backtracking; maybe there is some way of turning this into an inequality constraint and solving it at the end?
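As a toy illustration of that ambiguity, the same equation played out on `Nat` (using `Nat`'s `max` as a stand-in for universe levels; concrete numbers picked just for the example):

```lean
-- Take u := 2, v := 1, so the constraint reads `max 2 1 =?= max 2 ?x`.
-- Both of the "approximate choices" above are solutions:
example : max 2 1 = max 2 1 := by decide            -- ?x := v
example : max 2 1 = max 2 (max 2 1) := by decide    -- ?x := max u v
-- And so is anything in between, e.g. ?x := 0:
example : max 2 1 = max 2 0 := by decide
-- The equation alone does not pin down `?x`, which is why the elaborator
-- has to guess, backtrack, or defer the constraint.
```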

That’s what the new algorithm does, keeping the max = max constraints as two <= constraints, as it can handle l <= max fine. Of course, if one really wants x to be instantiated there might still be multiple incomparable solutions in the end.

the inequality based formulation

What is that referring to?

`#check Tele.S' fun _ : Nat => Tele.S' fun _ : Type u => Tele.O`

I have trouble parsing this. Aren't there parentheses missing?

`#check Tele.S' (fun _ : Nat => Tele.S' (fun _ : Type u => Tele.O))`

yes you inserted the parentheses correctly. The way precedence is set up for functions means that it's not necessary to add the parentheses in this situation (which honestly is quite nice for exactly this kind of continuation-passing API)

Ralf Jung said:

the inequality based formulation

What is that referring to?

What Coq does, basically - rather than maintaining algebraic universes and assignments to them you maintain inequalities between universe metavariables and do consistency checking on them. For lean it would need to be some hybrid of these used specifically for solving max constraints, and consistency checking is not sufficient, you need to actually pick a solution

oh, "inequality" is `<`, not `≠`... I always get that wrong^^

sometimes you see "disequality" used to refer specifically to $\ne$, but I'm not convinced this is actually a word

Mario Carneiro said:

yes you inserted the parentheses correctly. The way precedence is set up for functions means that it's not necessary to add the parentheses in this situation (which honestly is quite nice for exactly this kind of continuation-passing API)

it is nice yes, and also non-standard and hence confusing ;)

we just have to change the standards then... :)

I don't think this is a lean invention but I haven't done a census of FP languages on this matter

lean 3 used the Coq precedence and the new one took some getting used to but I think it's a good change

A message was moved here from #Coq users > lambda notation Lean vs Coq by Robbert Krebbers.


Last updated: Jun 23 2024 at 04:03 UTC