This is some surprisingly negative stuff on Coq in this article (without citation or reference):

Coq users have formalized a lot of mathematics in its language, but that work has been decentralized and unorganized. Mathematicians worked on projects that interested them and only defined the mathematical objects needed to carry their projects out, often describing those objects in unique ways. As a result, the Coq libraries feel jumbled, like an unplanned city.

this goes well beyond Kohlhase and Rabe's recent paper, which I thought was quite negative when I first read it:

Coq has reached the usage size, where the central maintenance of libraries is no longer feasible. Instead, the Coq library has been factored into hundreds of repositories with a somewhat standardized build process. This allows distributed maintenance of the library. But it also means that not every repository always builds with the latest version of Coq [...] only around 70 of around 250 repositories could be built

The first quote is surrounded by quotes from @Assia Mahboubi :

“Coq is an old man now, and it has a lot of scars,” said Mahboubi, who has worked with the program extensively. “It’s been collaboratively maintained by many people over time, and it has known defects due to its long history.”

And to be sure, I’m skeptical of the implication that Lean solves all of those problems (my hope is Lean 25?), but is there something inaccurate about those quotes?

"the Coq libraries feel jumbled, like an unplanned city" at least is a pretty negative, subjective statement, and it seems unlikely that the journalist writing the article has enough experience with the Coq ecosystem to have an informed opinion on this particular point.

But how would one even go about establishing that "Coq libraries feel jumbled"? Do some kind of social experiment? One dictionary meaning of "jumbled" is "mixed together in a disorderly heap or mass". I find it implausible that this is how people would describe their experience of looking at or working with MathComp, CoRN, UniMath, HoTT, ...

Also, looking at the research projects behind the development of said libraries, I think "unorganized" is very far from the right word to describe them.

Hi @Paolo Giarrusso, @Karl Palmskog. I have indeed been interviewed by this journalist for Quanta. And I indeed said (approximately at least) what is attributed to me in the quotes.

Then I have been contacted by a "fact checker", who asked me to approve or amend a few excerpts from the draft, those which would be attributed to me.

It was not possible to see the entire article before publication, and from the title only I could not infer that it would in fact be about marketing Lean.

In particular, as you can imagine, I also said that Lean would eventually face similar issues as Coq in a few years from now. And that mathlib cannot be compared in size and width to the Coq ecosystem. But this is not in the final text of course.

Karl Palmskog said:

But how would one even go about establishing that "Coq libraries feel jumbled"? Do some kind of social experiment? One dictionary meaning of "jumbled" is "mixed together in a disorderly heap or mass". I find it implausible that this is how people would describe their experience of looking at or working with MathComp, CoRN, UniMath, HoTT, ...

@Karl Palmskog I am not a native speaker, and I learnt the word "jumbled" from this article. My English-French dictionary suggests something like "disordered" and honestly, regarding many topics in undergrad maths, I think it is. E.g.: what would be the right library for working with multivariate polynomials? This is very elementary stuff for a math-related formalization project. Yet the question is of course tricky because there are several possible answers, and several relevant data structures, depending on the intended usage, etc. But this article (and much Lean-related communication material) somehow sweeps this under the rug. On the other hand, Coq is not (yet) in the best possible shape, which is why I would say the sentence is not inaccurate.

I would be fine with a claim along the lines of what Assia says, e.g., that there are many and potentially confusingly different ways of formalizing even elementary math notions, which naturally affects Coq library development. But to me this is not at all the impression given by the (non-quote) paragraphs talking about Coq library disorganization and jumbledness. I'm not a native English speaker either, but I can't think of any context where "jumbled" would be something positive or even neutral.

@Assia Mahboubi the link accepts comments, and one indeed asked "why would Lean not have the same problems"... Maybe you can follow up ;-)

that quote indeed doesn’t match my impression of MathComp, but most Coq code is not that.

The thing that I find a bit unpleasant is that you could remove any paragraph about Coq in there and that article would just be fine. It seems a bit gratuitous to cite it just to say "it's widely used but has problems, let's start afresh". I guess I'm allergic to marketing.

To the extent the health of Coq relates to having users, it might be vital to switch to doing more of your own marketing.

I would say formalization of math is currently in a time like life was in the Cambrian: there is a kind of explosion. Of course it takes effort, and is sometimes even a pain, to try different ways of doing things, but it can also be quite enlightening.

I would say an approach which restricts developments up front would only lead to a better solution if we were already at the end of the Cambrian and knew what the right ways are. I have severe doubts that this is the case. I would say we are right in the middle of it - maybe even still at the beginning - and it will likely take at least a few decades until things stabilize.

I expect that the useful lifeforms in the Coq ecosystem enter some kind of symbiosis and grow and develop together, while other lifeforms will vanish. Life has shown that this approach is very successful.

Assia Mahboubi said:

It was not possible to see the entire article before publication, and from the title only I could not infer that it would in fact be about marketing Lean.

Nobody was allowed to see the entire article before publication, or even know about the global topic of the paper. I don't think the author had a very precise idea either when he started interviewing people. My understanding is that the entry point was the perfectoid spaces project that was noticed by the author (probably from Twitter or something like this). I think he started interviewing people and then asked each interviewed person to give ideas about further people to interview, and he built up a story along the way. I was interviewed and insisted on the existence of other proof assistants, especially Coq and Isabelle, and I suggested interviewing Assia. I'm sad to see that Coq is described only with rather negative comments. But note that Isabelle is not even mentioned. Ironically, the author came up with the same historical reference as Paulson's grant Alexandria but without mentioning Isabelle. I hope that readers will mostly get the idea that formalizing mathematics is fun and could be useful.

Note that Lean marketing material was previously published by the same author: https://www.quantamagazine.org/at-the-international-mathematical-olympiad-artificial-intelligence-prepares-to-go-for-the-gold-20200921/

I understand the allergy to marketing, but I'm afraid the consequence is that we are very weak in our communication compared to Lean.

Maybe we should think about hiring a professional writer, and organize a more ambitious/aggressive communication like they do.

Maxime Dénès said:

Maybe we should think about hiring a professional writer, and organize a more ambitious/aggressive communication like they do.

Is the author of the Quanta articles hired? (I can imagine that journalists interested in this area do not have to be hired, but are rather happy about being approached by experts that want them to write about their field. But I have no real idea how their business works, it's just a guess.)

We could also try to address some of these issues that were raised, albeit overemphasized. E.g., this discussion about a core library has stalled.

Yannick Forster said:

Is the author of the Quanta articles hired?

I don't know, but the effect seems to be pretty similar, i.e. he publishes Lean marketing material regularly and the text reads very much like the communication made by the Lean team during conferences.

As for concrete work on libraries, I have an attempt at modernizing the prelude, as a start here: https://github.com/coq/coq/pull/11747

Unfortunately, it reveals many unfinished changes in the system (porting higher layers to universe polymorphism, bugs with primitive projections), etc. We were talking this morning with @Gaëtan Gilbert about how to organize/prioritize/distribute this work.

I spoke to Kevin Hartnett (the author); I didn't have anything to say about Coq, and only talked to him about my video demo from LftCM, and about the metaprogramming framework (which in the end was barely mentioned). Just as Patrick said, I'm sad that there were unnecessary negative comments about Coq in the article. :-(

That said, talk of the author being hired is quite off-base. There had been some discussion about telling someone else we knew at Quanta about LftCM, but as far as I know we didn't even get to doing that before Kevin came to us.

I think it's clear that what @Yannick Forster was asking was whether Hartnett is a staff writer or freelance writer at Quanta.

Not so clear to me:

Yannick Forster said:

Is the author of the Quanta articles hired? (I can imagine that journalists interested in this area do not have to be hired, but are rather happy about being approached by experts that want them to write about their field. But I have no real idea how their business works, it's just a guess.)

As to the other question, my understanding is that Hartnett is a staff writer at Quanta, but I didn't ask specifically. They seem to be pretty well funded by the Simons Foundation.

By no means did I want to imply that Hartnett was hired to write a nice article about Lean or anything in this direction :) I was responding to Maxime, who suggested hiring a professional writer for communication. I assume that Kevin Hartnett is an independent journalist (freelance or staff doesn't matter so much), and I think the best way to get positive reporting on Coq is to get an independent journalist interested rather than to pay a professional writer.

Statements in quotes directly from scientists would at least be worthwhile to debate (even in peer-reviewed venues), but since the "unorganized"/"jumbled" part is unquoted and unsourced, it seems responding is not worthwhile. On the other hand, maybe we should ask to see the "meticulous" research behind it? (https://www.quantamagazine.org/about/)

all of [Quanta's] resources go toward producing responsible, freely accessible journalism that is meticulously researched, reported, edited, copy-edited and fact-checked.

Kevin Hartnett contacted me in late July and asked if he could talk to me about Lean for a Quanta article he was doing research on. Like the others, I didn't see any version of the article before it appeared. I knew he'd talked to Assia and Patrick Massot.

To be clear, I also didn't want to imply anything about the status of the author. I had in mind that we generally don't have the time/resources to put together "promotional" material or even talk to the press, publish blog posts, etc. And that we could hire someone for these kinds of tasks.

Researchers working on Coq produce a more science-oriented communication than this kind of large scale promotion material (and I'm certainly not complaining about it), engineers are very busy with refactorings/infrastructure/new features, so I thought it could make sense to strengthen our communication by having at least one person working on it explicitly (even part-time).

For what it's worth, Kevin Hartnett wrote me explicitly that he would welcome suggestions about research that I think Quanta should consider covering.

Please keep in mind that for mathematicians working on formalized mathematics, the Quanta article is a blessing. The computer science community is used to formal methods, but to the vast majority of mathematicians, it is purely computer science. It's extremely difficult -- nearly impossible -- for even a very good mathematician to get funding to support e.g. students or postdocs working on formal mathematics. And for young mathematicians looking for jobs, spending time on formalization is like the kiss of death. But Quanta is very respected in the mathematics community, so articles like this, which say "this is exciting and important for mathematics," can have a big effect.

Hopefully this will open the door to articles about Coq, Isabelle, HOL Light, Agda, PVS, Metamath, ACL2, Mizar, Matita, HOL4, ... But the big picture is that for those of us who care about formal methods in mathematics (not just computer science!), this is a really important landmark.

Enrico Tassi said:

Assia Mahboubi the link accepts comments, and one indeed asked "why would Lean not have the same problems"... Maybe you can follow up ;-)

Done. Thanks for pointing to this comment.

Promotion of formal methods and proof assistants in this kind of media is great, but I'm not sure one can support a grant proposal for some Coq-based research with an article which says that it is half-dead...

Jeremy Avigad said:

Hopefully this will open the door to articles about Coq, Isabelle, HOL Light, Agda, PVS, Metamath, ACL2, Mizar, Matita, HOL4, ... But the big picture is that for those of us who care about formal methods in mathematics (not just computer science!), this is a really important landmark.

I agree that the article is a good thing for proof assistants and the formalization of mathematics, no questioning that. And I'm perfectly OK with it focusing on Lean and mathlib, and ignoring other libraries/tools. But I still find the Coq bashing, well, gratuitous at best. Unfortunately it is not the first time we see this happening in an article or a talk about Lean. Particularly in this case, it seems really odd to me that an independent journalist decides to write that out of the blue. Let's hope it does not happen again, it's really not pleasant to read that in the middle of an otherwise great article.

Does someone have an overview of how large the "nicely ordered Lean library" is compared to the "Coq ecosystem"? If it is, order-of-magnitude-wise, more like MathComp and the larger developments based on it (living up to MathComp's standard), it is not much of an argument that the Lean library is nicer. Or is it even nicer than MathComp?

That’s a good question; it’s unlikely mathlib is more consistent than mathcomp. OTOH, since they use typeclasses, setting up algebraic hierarchies might be easier than “raw mathcomp”?

Probably the easiest way to get a sense of what's in mathlib is via https://leanprover-community.github.io/mathlib-overview.html and https://leanprover-community.github.io/undergrad.html

From the Quanta website: "Quanta Magazine is an editorially independent online publication launched by the Simons Foundation to enhance public understanding of science."

it could make sense to strengthen our communication by having at least one person working on it explicitly (even part-time)

A first step is coming up with the messages, such as: What are the goals of the Coq community? What are we doing to achieve them? A roadmap of how we'd like to improve Coq might help, too. Even though I've been contributing code to Coq, I don't have a great answer to these questions.

And, of course, an explanation of the significance of the goals.

Michael Soegtrop said:

Does someone have an overview of how large the "nicely ordered Lean library" is compared to the "Coq ecosystem"? If it is, order-of-magnitude-wise, more like MathComp and the larger developments based on it (living up to MathComp's standard), it is not much of an argument that the Lean library is nicer. Or is it even nicer than MathComp?

As of May 2020, Mathlib was around 170,000 lines of code (LOC) per this paper. This is roughly equivalent to the core MathComp projects (MathComp, finmap, four-color, odd-order) as of May 2019, which had around 164,000 LOC (see our statistics table for projects related to MathComp 1.9.0).

So if we look only at mathematics formalization, the size of the Coq ecosystem is at least one order of magnitude larger than Lean (when adding CoRN, HoTT, UniMath, etc., to MathComp's ecosystem, which notably includes Gaia with ~150,000 LOC).

Based on a preliminary analysis of projects on GitHub, I conjecture that the general Coq ecosystem is about two orders of magnitude larger than Lean in terms of lines of code. But this does not take into account what the projects are about or their quality.

A factoid that we noted recently is that as a programming language on GitHub, Coq recently surpassed Standard ML in popularity (see this report).

I'd love to see https://leanprover-community.github.io/undergrad.html for other theorem provers (preferably using the criteria "you can simultaneously import all the following mathematics", rather than "these exist somewhere, but possibly in incompatible projects"). The mathlib website generates that page from a yaml file https://github.com/leanprover-community/mathlib/blob/master/docs/undergrad.yaml. The list of topics was extracted from some version of the standard French undergraduate curriculum, linked in the yaml.

The only way Coq will have anything similar to mathlib's undergrad math that can be "imported simultaneously" is via the Coq Platform, which is definitely not going to happen overnight. As can be seen in Madiot's Coq list of "100 theorems" popularized by Wiedijk, Coq math spans dozens of projects with different trajectories.

Wait, does the Coq platform satisfy the “simultaneously import” test, literally speaking? Because I expect not, thanks to notations...

I’d assume @Scott Morrison meant “can usefully import together”, for values of “usefully” ranging up to mathcomp...

I guess I don't mind much about incompatible notations as long as it's possible to turn them off (or just decline to use them) and still have access to all the theorems in one file. Part of continuous integration for mathlib is a file that essentially does `import *`, to check for name conflicts, etc., across the entire library. That doesn't in itself guarantee that notations will still be usable as expected if you import everything.
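For comparison, an analogous check could be sketched for a Coq library; this is a hedged illustration, and the module names below are hypothetical placeholders, not a real library:

```coq
(* Hypothetical CI check file, analogous to mathlib's "import *" job:
   requiring and importing every module of a library in a single file
   surfaces name clashes, universe inconsistencies, and notation
   conflicts early. MyLib.A/B/C are placeholder module names. *)
Require Import MyLib.A MyLib.B MyLib.C.
```

If such a file compiles in CI, the library's modules at least coexist under a single `Require Import`; it says nothing about whether the combined notations remain pleasant to use.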

I'm definitely not saying that "import simultaneously" is the most important criterion, and certainly not one that should be used as a sole criterion for judging different theorem provers "against" one another. mathlib definitely makes an effort to optimise for this criterion. I'd still be very interested to see where all the different active theorem provers get to against "a standard undergraduate curriculum" at this point. So knowing what fraction of that list is covered in particular in mathcomp would be really useful.

More generally, it would be good to have some better yardsticks than Freek's list. Obviously it has been an interesting list, and it has motivated a lot of work, but from a research mathematician's point of view it's a bit of a strange list. (No disrespect intended -- I think Freek would agree!)

on the Lean zulip we had a brief discussion about Oliver Knill's list http://people.math.harvard.edu/~knill/graphgeometry/papers/fundamental.pdf

Progress against that list would be much more relatable to mathematicians!

Note that "import simultaneously" is not only about typing "import" or "Require", it means you can actually use things together. The perfectoid spaces project did use a lot of general topology and commutative algebra together in a very intricate way. The ongoing sphere eversion project starts using a lot of linear algebra, affine geometry, general topology, differential calculus, integral calculus and differential geometry together (they are not yet all interacting in the current early stage of the project).

I agree that the article is a good thing for proof assistants and the formalization of mathematics, no questioning that. And I'm perfectly OK with it focusing on Lean and mathlib, and ignoring other libraries/tools. But I still find the Coq bashing, well, gratuitous at best. Unfortunately it is not the first time we see this happening in an article or a talk about Lean. Particularly in this case, it seems really odd to me that an independent journalist decides to write that out of the blue. Let's hope it does not happen again, it's really not pleasant to read that in the middle of an otherwise great article.

I assume that this is at least partially directed at me. What I said in my Microsoft talk was unfortunate and I had no idea that the video would blow up like that -- I know I caused some bad feelings and I am sorry. Since then I have tried to take a much more balanced point of view, challenging all the modern provers to do things which people working in "mainstream mathematics" (number theory, geometry, topology, algebra, analysis -- the kind of areas where the Fields Medals are awarded) will be interested in. Funding agencies will surely be interested in multidisciplinary proposals -- in the UK at least they are explicitly encouraging stuff like this.

However the entire area is plagued with this huge problem that the kind of mathematics which is being done in the systems is simply completely unrelated, in general, to the kind of mathematics which is being done by the researchers in most serious maths departments. We have this informal goal in mathlib to make an undergraduate curriculum but this is not particularly because we think it will be an impressive aim -- most mathematicians would regard such an achievement as an *absolute bare minimum* before a system such as this could be taken seriously. To a research mathematician, essentially everything on an undergraduate curriculum could be dismissed as "trivial" (even though as formalisers we know that some of it is anything but).

As for Théo's comment, I am not (yet) convinced that funding agencies are going to take Quanta articles particularly seriously when it comes to decision making. I am far more concerned that when I write proposals to maths funding bodies I am reduced to saying "let's formalise things we know already", when 5 years ago I was writing proposals saying "let's create new mathematics". It seems to me from looking at recent ITP conferences that Coq and Isabelle are doing absolutely fine right now, with people using these systems to do things they do well.

@Scott Morrison the sad truth is that incompatible notations are an `Import`-time error (https://stackoverflow.com/a/49387168/53974). However, they are not a `Require`-time error, so you can still use lemmas by their qualified names.
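A minimal sketch of the `Require`/`Import` distinction; the library and lemma names here are hypothetical, and exact error behavior for clashing notations varies with how the notations were declared:

```coq
(* Require alone loads the compiled libraries without importing their
   contents into scope: notations from LibA and LibB cannot clash yet,
   and lemmas remain reachable by qualified names.
   LibA/LibB/add_comm are placeholders for illustration. *)
Require LibA LibB.
Check LibA.add_comm.
Check LibB.add_comm.

(* Import brings each library's notations into scope; if LibA and LibB
   declare incompatible notations, problems only surface here. *)
Import LibA.
Import LibB.
```

This is why "you can `Require` everything together" is a much weaker compatibility claim than "you can `Import` everything together".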

Yes, I’m happy I misremembered. Still, this is just the earliest obstacle when mixing different designs.

Paolo Giarrusso said:

Wait, does the Coq platform satisfy the “simultaneously import” test, literally speaking? Because I expect not, thanks to notations...

I hope that the Coq Platform will help to get such things fixed, since now one can reasonably consider such incompatibilities to be bugs. I personally use the majority of the stuff in the Coq Platform (as it is today) together in my developments, so there is at least some compatibility. The only really bad thing I bumped into so far was that CompCert used a verbatim copy of Flocq, so that the definitions in Flocq used e.g. in Interval and in CompCert+VST were equal but, from Coq's point of view, different. This I fixed recently.

Something I might bump into soon are the definitions of real numbers - I will see.

With type classes and canonical structures I didn't have severe issues up to now. If you mix both with heavy proof automation things will go wild, but so far I could easily fix this by giving the structures explicitly, which usually makes the automation more maintainable as a side effect (things fail earlier and with more usable error messages).

Where do you see the major issues?

@Michael Soegtrop I’m not sure I have much to add — everything has been discussed extensively when discussing e.g. stdlib2. It probably depends what level of compatibility you expect. I expect the platform includes both mathcomp and non-mathcomp versions of X for many Xes (at least for everything in stdpp), and I assume neither subsumes the other. If we agree on that, I don’t think there’s much to add.

To be a bit more explicit, you can probably often combine math-comp with non-mathcomp libraries with enough effort, it’s just annoying enough to not deserve the label “compatible”.

Also, some combinations are _technically_ impossible today — for instance, mixing setoid rewriting with canonical structures is blocked by Unifall. Even `Hint Resolve` is not always reliable when combining CS and typeclasses, so Iris (which needs the combo) uses workarounds quite often.

also, all the problems I described affect iris-style proofs, where the only automation is through TC inference (it’s basically “how to make proof automation less ad-hoc”, but implemented with TCs instead of CS) — IIUC, that’s mostly math-comp style proofs.

BTW, arguably, Coq’s stdlib doesn’t play well with itself: https://mattermost.mpi-sws.org/iris/pl/pof9y1t453fimcg5zix3ysbwhh

(strictly speaking, that’s a problem between stdpp and ssreflect, but declaring `Is_true` as a coercion is suggested by comments in the stdlib itself)

The iris mattermost does not seem publicly available. Any chance of moving the iris people to zulip?

zulip is login walled too

Just saying that linking to a mattermost discussion without saying what's there may not reach the biggest audience.

What is an Iris-style proof?

Good point @Bas Spitters , I hadn't realized. My link goes to:

me:

```coq
theories/base.v:Coercion Is_true : bool >-> Sortclass. (* stdpp *)
theories/ssr/ssrbool.v:Coercion is_true : bool >-> Sortclass. (* Prop *) (* Coq *)
```
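The clash between these two declarations can be reproduced in a self-contained file; this is a hedged sketch with locally defined stand-ins (the exact warning text depends on the Coq version):

```coq
(* Two bool -> Prop interpretations in the style of ssreflect's is_true
   and stdpp's Is_true, defined locally for illustration only. *)
Definition my_is_true (b : bool) : Prop := b = true.
Definition my_Is_true (b : bool) : Prop :=
  if b then True else False.

Coercion my_is_true : bool >-> Sortclass.
(* Declaring a second coercion with the same source and target classes
   triggers an ambiguous-coercion-path warning; which coercion gets
   inserted during elaboration then depends on declaration order. *)
Coercion my_Is_true : bool >-> Sortclass.
```

Since the two definitions are not even convertible, a lemma stated via one coercion does not directly apply to a goal elaborated via the other, which is the practical sense in which stdpp and ssreflect "do not play together well".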

Joseph Tassarotti:

Indeed, ssreflect and stdpp do not play together well at all. There are some conflicting notation levels as well, if I recall correctly, which is even more of a show stopper.

(I mean the ssreflect/mathcomp libraries, that is)

@vlj I guess for this discussion "Iris-style" is the style used in https://iris-project.org/, which mostly means "math-comp style but using mostly typeclasses, except where canonical structures are necessary for performance"

Ok

And to be clear: the only barrier to that Mattermost is creating an account via github (like here) — I still agree that's not ideal for my link :-).

Paolo Giarrusso said:

vlj I guess for this discussion "Iris-style" is the style used in https://iris-project.org/, which mostly means "math-comp style but using mostly typeclasses, except where canonical structures are necessary for performance"

Do you have some adventure travel report about using mathcomp with mostly type classes?

Since the iris library rewrite was started by @Robbert Krebbers I believe, perhaps math-classes style is more accurate?

Robbert worked on math-classes before specifying C and then starting on iris.

Since the iris library rewrite was started by @Robbert Krebbers I believe, perhaps math-classes style is more accurate?

Not really. Whatever @Paolo Giarrusso calls Iris-style is really a combination of math-classes + math-comp style. Iris uses canonical structures for algebraic structures instead of the math-classes-style unbundled approach, since the latter blows up massively for larger structures. However, for lots of the automation, Iris uses type classes in a style inspired by math classes.

FWIW, Inria's communication service collaborates with The Conversation for research popularization.

but since the "unorganized"/"jumbled" part is unquoted and unsourced, it seems responding is not worthwhile. On the other hand, maybe we should ask to see the "meticulous" research behind it?

For all we know, it could just be the journalist reformulating a sentiment he has heard from multiple separate sources and did not attribute to anyone in particular as it seemed more of a "common sense" universal truth. And I can definitely see how this can happen: we are, after all, comparing *one* library that has somewhat come to embody the whole of the Lean project vs. all of the Coq libraries ever. It is inevitable that one will look more organised than the others. The anecdote about Lean's original author telling mathematicians to go do their thing in their own corner because he did not have time to deal with their PRs & issues could have been an opportunity to ask: are there any other Lean libraries targeting other applications (e.g. certifying programs like the author intended)? This would have given a more "jumbled" vision of the Lean project too. But the article was specifically about formal methods for mathematicians so I can see how that'd be out of scope.

I think Gabriel's message on the mailing list was spot on: time will be much better spent (stealing features from / provoking feature envy in) other formal methods projects and praising Coq's numerous tour de forces rather than nitpicking the article's details (that will quickly be forgotten anyway).

With collaborators, I've looked at various software metrics of Coq libraries, so if someone in the ITP community goes on record stating something about the (dis)organization of a certain Coq library, this is something we could actually investigate and publish on. The SE community has done quite a lot of work in this direction. This is why the unsourced claim was disappointing to me.

@Karl Palmskog Coq moved up a bit in the RedMonk index. Mostly, it became more popular on GitHub.

@Bas Spitters ah, you mean in this report? https://redmonk.com/kfitzpatrick/2020/07/29/redmonk-slackchat-june-2020-programming-language-rankings/

@Michael Soegtrop as a "travel report for math-classes + math-comp", the key is probably the following from https://gitlab.mpi-sws.org/iris/iris/-/wikis/coq-bugs, whose fix is part of unifall:

> [HIGH] #6294: apply (including TC search) and canonical structures don't mix very well, so we randomly use apply: and cannot grow our algebraic hierarchy any further.

Maybe something for the CUDW?

I've been working on unifall with Matthieu in order to fix this issue for months now.

The main cost is that unifall fixes bugs in `apply` and `eapply`, which need to be propagated on our gigantic CI code base.

I mean that a lot of developments in our CI rely on those bugs

@Maxime Dénès Do you mind reminding me (or linking some PR) what is unifall?

It's a project https://github.com/coq/coq/projects/26 which aims at unifying the two unification algorithms that Coq has

the issue of `apply` and canonical structures is related to the fact that `apply` uses an old unification algorithm

Ah I see, great thanks!

the PR I'm currently working on is: https://github.com/coq/coq/pull/991

in order to fix Iris' pb, one more is needed after that, to port `eauto`

but in practice, our CI is full of developments with unstructured scripts and fragile automation

so the cost of such changes is huge

To be sure, I wasn’t complaining about the unifall work, I know it’s hard! Wasn’t there a proposal to only use unifall under a flag?

Thanks for the great work @Maxime Dénès !

@Paolo Giarrusso : thanks for connecting the dots between TC, canonical structures and unifall!

Last updated: Jun 13 2024 at 04:03 UTC