An interview with Kevin Buzzard about Lean in Quanta magazine: https://www.quantamagazine.org/can-computers-be-mathematicians-20220629/
This topic was moved here from #Coq devs & plugin devs > Kevin Buzzard interview about Lean by Karl Palmskog.
while this is definitely something worth highlighting, it's not directly related to Coq dev/plugin dev (hence the move). Or did he talk directly about Coq development?
Coq does not appear on the page AFAICT; this might be more interesting in "how to spread Coq more" — right at the outset it says "[Scholze's proof] was kind of a big public relations win for this software".
one might think that the ACM award(s) would help with PR, but I don't think I've observed much of an effect
I mean, why would mathematicians care?
as Buzzard points out, if you talk to mathematicians, and especially if you want their grant money, you shouldn’t talk about strong normalization, but give empirical evidence your tool helps their goals (however misguided they might be, this is still a good strategy to get their ears and maybe fix them).
he very explicitly says that this is necessary to get funding, so I suppose he wants to get LOTS of funding from mathematicians (maybe not just for himself, but certainly for Lean)
Last year you posted about https://www.cmu.edu/news/stories/archives/2021/september/hoskinson-center-for-formal-mathematics.html. I wish Lean all the success, and it need not be at the expense of Coq, but if it is, maybe that’s something the Coq community should talk about.
Quanta again? Really?
for the record, the most high-profile Coq math release in the last two years is arguably https://github.com/math-comp/Abel - but I doubt it has grown or will grow enthusiasm for Coq in the math community (due to the Abel math being "old")
Coq's demographics lean [pun not intended] towards CS, type theory, maybe category theory; I'm not sure if that can change in the near-to-medium-term future
We can't ignore the momentum, but this magazine is too biased to be taken seriously IMO
CS+ITP is seemingly never trending. To be honest, I think HOL4+CakeML is currently well ahead of Coq+CompCert+MetaCoq+CertiCoq in terms of program verification features and TCB, but HOL4 and "momentum" are concepts far apart.
I'm personally fine with CS+ITP never trending, but there is always the chance that all funding goes in another direction
(The actual Scholze story is more interesting than the pop-science version on Quanta, but we should resist the impulse to criticize the weak points)
"funding goes away and Coq withers/dies" is my fear, yes
But hopefully, not justified, especially since AFAIK Lean can't afford anything close to Coq's diversity: my code couldn't go in mathlib
How come Quanta is biased? Have they rejected a story about Coq?
Who's doing the work Kevin is doing for Lean in the Coq community?
do you think the following is unbiased? There is no source for this, it's not a quote:
Coq libraries feel jumbled, like an unplanned city.
Maybe you should try to submit an article that claims Coq is great by quoting members of the Lean community out of context.
right, see for example Assia's comments on being quoted in the previous Quanta topic
Maybe I should submit an article and see if there is indeed bias. Until then, I would refrain from being dismissive of Quanta, which is a magazine of much wider scope than this
Coq libraries do feel jumbled by the way
big time
when I was introduced to Coq in 2012, that was already a widespread complaint in the US
to put it another way, they are a mess
@Karl Palmskog I am very well aware of the previous article
that is not what I'm talking about; that's not how claims of bias work
first, Coq people would have to try to do even a bit of marketing
it is fine if they don't want to do marketing, but then don't claim bias when others do worry about teaching their tool to the wider research community
and are active on social media, etc.
to say without quoting someone that something "feels jumbled" belongs in opinion columns, not serious articles
would you accept without complaint that language/claim in a POPL/PLDI/etc. submission you're reviewing?
That claim in Quanta seems pretty accurate to me, so if anything I have to give props to the writer
certainly, if I saw that in a paper, it would depend a lot on the context
but it would be me who would have a very hard time arguing the contrary
anyways that is not my point
it is very off-topic for this article
I asked two questions:
I don't see a lot of self-reflection around, TBH
we could use a bit more TBH
Not sure I see any reason for self-reflection in this. Assume someone, somewhere makes an idiosyncratic Coq library that is guaranteed to be called out for jumbledness by 100% of Quanta journalists. The author then puts it on GitHub and relies on Coq compatibility policy to keep it alive for a decade or two. Does the Coq community now need to step in and make it idiomatic or tell the author to refactor it?
If you want a consistent library, there are now several to choose from in the Platform, and they are likely to be around for "a while" (decades?) in some reasonable form.
Indeed it seems people don't see the need to analyze a few things. The example you give tells me little in terms of what to reflect on. People write random libraries in Lean all the time; however, for math, they seem pretty well organized.
What I wonder is why mathematicians are not using Coq when it had a very large advantage in terms of tech and timing, why, despite investing significant resources, our implementation is stuck on several fronts, why Quanta is publishing about Lean and not Coq, etc.
one thing the original Quanta article taught me: in interviews about research, insist on providing written answers to questions, that must be written out in full when published or not included at all
and ask for money
In part, there is a cathedral-and-bazaar issue here, with Lean taking the former and Coq the latter approach.
Or, as Andrej puts it, Yahoo vs Google.
https://github.com/formalabstracts/formalabstracts/issues/44
The coq-community has done great work to make the bazaar model much more manageable.
We just need more and more people to start using these tools.
IMHO the Coq model is better for industrial applications, because there is a certain pressure to keep things compatible. Sure, one can get a nicer and cleaner library with a monorepo approach where one can make substantial changes as long as everything is synchronized. But basing derived work on such an infrastructure can be a lot of work. I wonder if substantial derived work can be maintained over say a decade outside of the Lean repo.
With Coq, I have so far had little trouble keeping my own stuff in sync - it is usually less than a day of work to adjust my work (that is, everything I wrote since 8.5 or so) to a new Coq Platform release. Not sure what my mileage would have been with Lean in the same time frame.
I think many comparisons of Lean and Coq are mistaken when they try to match one with the cathedral model and the other with the bazaar model (or, said otherwise, with the "unplanned city" quote). Lean itself is developed in cathedral mode, but I think that mathlib is developed more in bazaar mode. Conversely, Coq itself is developed in bazaar mode, but math-comp has been developed in cathedral mode (although the entire ecosystem is uncoordinated and may thus feel like a bazaar, "bazaar" is usually applied to a single project, not to an entire ecosystem).
Thanks @Théo Zimmermann, that pretty much sums up how I see things; for me the problems in Coq are not so much related to the development model, but more to the failure to provide users with some essential features (such as a richer base library, or an object system to manage classes of properties)
thus it is common to see really painful code around, reimplementing a lot of basic stuff
The bazaar indeed is likely limited if you need to do a deep rearchitecting of the system, and this is bad for Coq because I think it needs one
BTW, if we're comparing with mathlib, we should compare math-comp — which seems far more regular than "all of Coq", probably faster than Lean 3 (who knows about 4), etc...
Yeah, that was precisely my point. When comparing mathlib vs math-comp, the bazaar is not on the same side.
(Not a criticism, though.)
also important: Lean follows neither cathedral nor bazaar, it follows the "don't use it for real, it's a prototype" model.
@Emilio Jesús Gallego Arias with object system, do you mean type classes?
Does the hierarchy builder solve some issues of that?
Do we really understand the bazaar model for mathematics? Unless we want something completely disjoint, I think we'll want some kind of univalence or parametricity to connect different representations of the same thing.
Currently the support for that seems better in Isabelle than in Coq.
I think many Coq libraries could have manual or semi-automatic bridges between them to transfer results. Pierre Castéran worked on a manual bridge between different encodings of ordinals: https://github.com/coq-community/hydra-battles/tree/master/theories/gaia
However, this [bridging specific encodings with or without tool support] is probably as cold a topic as it gets. Even with Paramcoq in the Platform, its use remains niche.
I think the topic is still hot, or at least HoTT; there's good support in Cubical Agda, but with primitives not supported in Coq (yet?)
Even if you had the equal types, you end up needing to restate lemmas just to bridge between ecosystems.
And AFAICT, migrating to univalence doesn't look like "add an axiom" but more like "start over everything from scratch"... Except I'm not sure it ever happened
Yes, UA won't be a small change. My question was just whether we claim to understand how the bazaar model should work for mathematics, say, based on opam.
One possibility is that you will have multiple interfaces for related concepts (e.g. decidable equality) that do not interoperate well. That is what we already see in Coq libraries, and what we see in usual programming ecosystems...
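To make the interoperability point concrete: below is a minimal, untested sketch of what a manual bridge between two such decidable-equality interfaces could look like, going from MathComp's eqType to stdpp's EqDecision (the instance name is made up, and the exact import mix of the two libraries may need adjusting in practice).

From mathcomp Require Import ssreflect ssrbool eqtype.
From stdpp Require Import base.

(* Any MathComp eqType yields a stdpp EqDecision instance, by reflecting
   the boolean equality test into a sumbool. *)
#[global] Instance eqType_eq_decision (T : eqType) : EqDecision T.
Proof. intros x y; destruct (@eqP T x y) as [H|H]; [left|right]; exact H. Defined.

Such bridges have to be written (or generated) for every pair of interfaces, which is part of why the different library stacks do not interoperate for free.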
the current bazaar model is unfortunately mostly about people putting Coq math code in an ever growing number of GitHub repos, compatible with different versions of Coq (if any), and not really telling anyone about them. The Platform helps by coordinating some of the more basic/foundational libraries. But the Platform can't account for more exotic math and one-off proofs
"unfortunately"?
the problems are: (1) not compatible with the same Coq version(s), (2) nobody knowing that they exist (not in any archive like AFP). That's what the "unfortunately" refers to.
there are other issues of course, like homemade build scripts that don't work, ad-hoc dependency management, no explicit (open source) license, etc.
For libraries that would be useful in the ecosystem, agreed.
OTOH, expecting that all programs in some language must live in some central archive still seems strange — you explained it to me once, but clearly I wasn't convinced :sweat_smile:.
I don't mind if there are several archives managed by different people. But right now you are basically forced to use GitHub code search to find elementary Coq math code
I also think there is a difference between "registering" that your code exists in an archive, such as via opam metadata, and actually putting code in a central repository, like what they do with AFP
for Coq code produced by people paid by tax money, it seems obvious that you should make others aware of the code by registering it in an archive or two, so taxpayers get some value out of the whole project
Bas Spitters said:
Emilio Jesús Gallego Arias with object system, do you mean type classes?
Does the hierarchy builder solve some issues of that?
Yes, I mean a common system to implement objects in Coq; we have two, and both have problems. HB may indeed solve that, but it is not part of Coq; moreover, it has some non-trivial engineering challenges (it depends on a full-blown λProlog interpreter, requires learning a new language, etc.)
Compare how easy it is in Lean to say "this is my new type, please derive decidable equality for it"
It's almost as easy with TCs in Coq — modulo exponential blowups with TC hierarchies... IIUC Lean's hash-consing fixes that
Last time I looked at Lean, out of the box you could do [@deriving equality]
or something like that, how do I do that with TC?
that needs a metaprogram in both languages — but with TCs it would only generate one instance, like in Haskell
so the metaprogram is easier to write and to debug for its authors, and probably also for the users
Sure, but I mean, it's almost as easy in Coq
I haven't tried HB but in many cases generator output does leak to users
seems to stretch "almost" a bit
Paolo Giarrusso said:
generator output does leak to users
What does this mean?
I'm saying that
#[global] Instance: EqDecision Client_metadata.
Proof. solve_decision. Defined.
is almost as easy as [@deriving equality]; I'm sure you could bridge the difference with Elpi.
This almost works already IMO: #[global] Instance: EqDecision Client_metadata := ltac:(solve_decision).
deriving should also generate statements like #[global] Instance: EqDecision X -> EqDecision (container X). Also, deriving A, B, C is more compact than stating those separately.
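For reference, here is a minimal, untested sketch of what the deriving discussed above has to produce by hand today, assuming stdpp's EqDecision and solve_decision; the names client_metadata and container are made up for illustration.

From stdpp Require Import base decidable list.

(* A record whose fields already have EqDecision instances. *)
Record client_metadata := MkClientMetadata { client_id : nat; client_active : bool }.

#[global] Instance client_metadata_eq_dec : EqDecision client_metadata.
Proof. solve_decision. Defined.

(* A parameterized container: the instance is conditional on EqDecision X,
   which is the kind of statement a deriving command would also have to generate. *)
Inductive container (X : Type) := Cont (elems : list X).

#[global] Instance container_eq_dec `{EqDecision X} : EqDecision (container X).
Proof. solve_decision. Defined.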
@Enrico Tassi re "leaking", I'm wondering how leaky the HB abstraction is. While it's nice that you can generate N hints, is it possible that in some cases the user will have to _understand_ something about those N hints? They might come up in debugging traces, at least.
Well, it is as tight as a Coq abstraction can be. I mean, we put stuff in inner modules to hide it. If you import these, or explicitly say Builders__11.foo, then you will be punished ;-) (and by design the 11 is kind of random)
I'm not worrying about that — but do proof terms mention Builders__11.foo?
and while "as tight as a Coq abstraction can be" is cool and impressive, I think in this thread it's fair to ask if it's also as tight as a _Lean_ abstraction can be.
Yes, CS instances mention these constants. And they must be transparent. Typically the user does not see them (implicit args), but if you Set Printing All and then unfold them (they are decently named, and have long names by design), then you see the cruft
The best I could imagine is to instruct the pretty printer and stuff like unfold to insist on not doing what you ask.
Type theory is not helping at all; it must be an "extra-logical" sealing
I'm personally OK with the "Python way": if something is called __private, then you don't use it. It is what we try to do in HB when we have no other option.
hiding those from pretty printing sounds like what people regret about primitive projections, so please ask PMP before doing it
but again, I suspect this wasn't the question. Let's assume HB is the perfect solution within the constraints of Coq!
Not hiding really, I mostly would like to avoid copy/pasting these terms, since their names are not part of the "API"
_for this thread_, I'm still wondering how that compares with Lean
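For context, here is a minimal, untested sketch of the user-facing side of a Hierarchy Builder declaration, loosely following the examples in HB's documentation; the mixin and structure names are made up. The internal, long-named constants discussed above are what HB generates behind such declarations.

From HB Require Import structures.
From Coq Require Import Arith.

(* One mixin and one structure; HB generates the canonical-structure
   boilerplate from these declarations. *)
HB.mixin Record HasEqb T := {
  eqb : T -> T -> bool;
  eqbP : forall x y : T, eqb x y = true <-> x = y
}.
HB.structure Definition Eqb := { T of HasEqb T }.

(* Registering nat as an instance; the generated instance constants are what
   can show up in proof terms, as noted above. *)
HB.instance Definition _ := HasEqb.Build nat Nat.eqb Nat.eqb_eq.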
To be entirely honest, it seems to me that this discussion once again comes down to documentation + UX. As Emilio points out, Lean has a well-documented form of deriving equality (such that any introductory user can understand what's going on), but if you want to do it in Coq you'd need to come to the Zulip to figure it out and rely on some lifting of type class instantiations.
@Pedro Abreu UX and documentation in Coq are not ideal, but some of those problems aren't fixable purely via UX
it's highly unlikely that any way of deriving instances will be the "official" one in Coq, so documentation will be specific to type classes and canonical structures, and to specific libraries which enhance those instances
maybe what someone will do is to make a custom Coq for typeclasses with Stdpp, and then bundle it with a bunch of stuff and docs (a specialized Platform)
on the MathComp side, it seems likely we will eventually get a workflow for Coq+MC without the stdlib - so in practice the only way to work there will be via canonical structures (which may get instance derivation via HB, Elpi, etc.)
@Paolo Giarrusso I understand that, and it's ok. My main point here is that this sort of usability seems to be a main concern for Lean to begin with, and that gets them good points with new users. I don't know much about Lean, but that's the impression I keep having, especially when we talk about usability for mathematicians, no?
I agree with the observations, but given the traffic on Lean's Zulip, I imagine good docs and ergonomics are not the ultimate solution, even for them.
This does not mean Coq should not get better docs, even if there can't be just one set, since the ecosystem is less uniform, for better and for worse
docs aren't perfect but I don't think that's the claim
I guess _one_ possible question is (a) will Lean's momentum overtake Coq? (b) Is that a problem, and if so should there be a strategy to address that?
Note that (a) isn't clear — I haven't seen significant Lean adoption in verification, they haven't migrated to Lean 4, Lean 3 seems much slower, etc etc
The only clear thing to me is that many mathematicians got interested in that tool, and I believe the snowball effect played a major role. I don't think most of them really scrutinized other systems. People doing math in Coq (for a long time) are clearly not so happy all this new blood went there, but at the same time I'm not so sure that having a [@derive] or a nicer UI and docs would have changed this in a radical way.
unless Coq devs get defunded across France in a big way for some reason, I think things will keep going as usual. We have releases rolling out, and Platform projects ticking along. Is growth a requirement? I'd like to see more new faces/hands working on stuff in Coq-community, for example, but we tick along there too.
Inria is currently very supportive of the development of Coq.
I thought Andrew Appel's recent presentation/paper on the Coq ecosystem was also optimistic: https://www.cs.princeton.edu/~appel/papers/ecosystem.pdf
I've been overly negative, but if Coq is hamstrung by backward compatibility, I wonder if or when it might be time for "Coq 9". @Arthur Charguéraud's recent talk also had some strategic suggestions.
or even if it's not Coq 9, larger incremental breaking changes: there is only so much bug-to-bug compatibility that a project can sustain, and current and new users will keep suffering those bugs.
@Paolo Giarrusso how do you see the contribution of the Iris group to the coq-community?
Is it working as one would hope? I believe the main libraries are in the Platform, but I could be wrong.
@Bas Spitters I think we should keep Coq-community (the GitHub organization) and the Platform and the general Coq community (no hyphen) separate conceptually. The Platform shares people with Coq-community, but the goals are very different. Most projects in the Coq Platform are not in Coq-community.
And there's also a distinction between the Coq community and coq-community: Bas did you mean the latter?
Those libraries are indeed in the Platform... But can you elaborate on the rest of the question?
from my point of view, the Platform-to-Iris connection seems to be working well (we get releases of stdpp and Iris itself for every Platform release so far).
It would be nice to have more Iris-related projects joining the Platform, or at least having regular releases.
as for Iris and Coq-community (with hyphen), I think the connection is very weak (the only one I can think of is that Ralf maintains autosubst1 in Coq-community), but I think that's not really an issue for either party
Thanks @Karl Palmskog , yes that's what I meant.
@Bas Spitters as one example Aneris (and Actris) would be most welcome in the Coq opam archive, and possibly also in the Coq Platform
For that we can tag @Amin Timany and @Simon Gregersen :-)