Any speculation about what this might mean for ITPs and indirectly for Coq?

I'd just say it's a great thing for Lean. I believe AWS can provide resources for Lean to succeed, and eventually the Lean community won't be dominated by math researchers, which is the primary reason why I left

From what I've learned, AWS has been advocating heavily for TLA+; I'd guess they'll spend a decent amount of resources to fund Lean development work

What can we learn from when they hired John Harrison?

not sure about Harrison specifically, but recall that they also have more general verification/reasoning people such as Rustan Leino and Byron Cook. There may be verification going on in the AWS backend that is not publicized.

AWS also develops Dafny. I went to a talk the other day where Michael Hicks presented recent work on a domain-specific language for managing security policies, Cedar. https://www.cedarpolicy.com/

Oh, so Rustan Leino's Dafny followed the same path, from Microsoft Research to AWS

Of course, this doesn't let us make strong predictions, but surely it's a data point

Huỳnh Trần Khanh said:

I'd just say it's a great thing for Lean. I believe AWS can provide resources for Lean to succeed, and eventually the Lean community won't be dominated by math researchers, which is the primary reason why I left

From what I've learned, AWS has been advocating heavily for TLA+; I'd guess they'll spend a decent amount of resources to fund Lean development work

As a mathematician, I hope that Coq can be a tool which is useful to people across multiple disciplines. I hope the intended reading of this is not "Thank goodness the development of Coq is not shaped by what would be useful for mathematics"

If this can be of reassurance, the Coq developers care deeply about all users and use cases (pure mathematics, verification, and teaching), even if we are also aware that our community is made up of many more computer scientists than mathematicians (confirmed by the recent survey). I also personally believe that to succeed in the long run, a general-purpose proof assistant will need the presence of both populations.

@Théo Zimmermann Yes, my comment was an overreaction for sure, the math-comp library and the formalization of the four color theorem are of course more than enough evidence that the Coq team sincerely cares about both use cases.

Realistically, what was going through my head was: I don't want Coq to get left in the dust by Lean wrt mathematics, which would happen if mathematics is not a priority. Twice in the past few months I have had conversations with people not knowledgeable about formal methods, and when I told them I am formalizing mathematics in Coq, they responded "Wouldn't Lean be better for that?" Of course I have no explanation for this, as I can't think of any high-profile, charismatic individuals repeatedly making strong claims about Lean's superiority over all other theorem provers, but nevertheless it worries me somewhat haha.

my impression is the lean community is way too focused on pure math research and folks not doing pure math are driven away. the coq community is more diverse and calmer

i made a PR to mathlib and it only got merged months after i left

TBH there are PRs on Coq stdlib which take a while before there's any response too

Yeah this seems to be a feature of any sufficiently thin community. You have projects with a single caretaker who only comes back to check on it once in a while, academic abandonware that gets dumped when authors move onto the next paper, etc.

I would call neither Lean's mathlib nor Coq's stdlib abandonware though :sweat_smile: :big_smile:

AFAICT Lean's and Coq's underlying "core languages" are *very* similar. Obviously, "irrelevant details" end up having an enormous impact on style, on what is idiomatic, etc... but WIBNI we had a tool that takes a library from one and "ports it" to the other?

@Stefan Monnier the Lean3-to-Coq library translation has been done: https://coq.discourse.group/t/alpha-announcement-coq-is-a-lean-typechecker/581

IIRC the implementation of the translation had some issues, but at least in principle Lean 3 -> Coq is possible

Lean 4 -> Coq is harder because Lean 4 added recursive primitive records; Lean 4's primitive Nat optimization also presents difficulties

Coq -> Lean requires dealing with universe cumulativity somehow (Coq has it, Lean doesn't)
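[Editor's note: a minimal sketch of the cumulativity gap, not from the thread. In Coq, any type in universe `Type@{i}` is silently accepted at any higher `Type@{j}`; Lean 4's universes are non-cumulative, so a translation has to insert explicit lifts such as `ULift`.]

```coq
(* Coq: nat lives in Set, but cumulativity lets it be used at any
   higher universe without any explicit coercion. Both checks pass. *)
Check (nat : Set).
Check (nat : Type).

(* Lean 4 has no cumulativity: Nat : Type 0 cannot be used where
   Type 1 is expected; one must write the explicit wrapper
     example : Type 1 := ULift Nat
   so a Coq -> Lean translation must decide where to insert such
   lifts, and then transport all theorems across them. *)
```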

For Coq -> Lean, there is also the issue of nested fixpoints. No idea how to translate them. (Is that even possible in theory?)
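[Editor's note: a standard example of what "nested fixpoint" means here, assuming the usual rose-tree setup. Coq permits an anonymous inner `fix` inside the body of an outer `Fixpoint`, which its guard checker accepts; Lean 4 compiles such recursion differently (e.g. via well-founded recursion), so there is no direct syntactic counterpart.]

```coq
Require Import List.
Import ListNotations.

(* A nested inductive: each node carries a list of subtrees. *)
Inductive rose : Type :=
| Node : list rose -> rose.

(* The outer Fixpoint recurses on the tree while the inner anonymous
   fix walks the list; the guard checker accepts the call (size t')
   because t' is a structural subterm of the original tree. *)
Fixpoint size (t : rose) : nat :=
  match t with
  | Node ts =>
      1 + (fix sum_sizes (l : list rose) : nat :=
             match l with
             | [] => 0
             | t' :: l' => size t' + sum_sizes l'
             end) ts
  end.
```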

Even if you can translate the core language, usable *libraries* require translating between elaborators, which isn't a solved problem. OTOH hard theorems are another matter

@Patrick Nicodemus if people ask about Lean, my joke (and not only mine) would be that you're waiting for Lean 42 to finally be stable

Also https://leanprover.github.io/lean4/doc/faq.html#should-i-use-lean reads to me as "no"

to be fair, the author claimed he plans to stop rewriting Lean

wait, you mean we have to see the surface code in the system we translate to? Why wouldn't the terms alone suffice?

for many applications, we just care about the existence of a proof term. I personally wouldn't care much about provenance/context if some convenient constant just showed up in my database

If you want to translate the 4-color or Feit-Thompson theorems, sure, I'm just mentioning that translating math-comp (or mathlib) is a different matter

Stefan asked about translating libraries, and it's possible they actually meant it.

Of course this isn't a new problem, it's just FFI but with elaboration in the way; I'm not even saying it's hopeless.

I don't think you'd translate proof scripts or surface language terms, given that the corresponding core language terms are perfectly usable. What you'd be missing is name management, notations, auto hints, and basically any other information that's explicitly there to drive the elaborator.

Why do you need information to drive the elaborator when you can just export the fully elaborated terms?

If you're going to be executing the proof scripts to extract the proof terms for translating, running the elaborator too seems like fair game.

Oh but I guess you're saying that you can export the proof terms for the library, but the library also has hints to elaborate *user* code that are needed.

Yes, I should have been clearer.

I should have been clearer too. I wouldn't want to consume a translation of Iris proof terms from Lean, and lots of math-comp seems similar

There is much more to a library than just theorems/proof terms: there are all sorts of tactics/hints/notations/..., but also, and importantly, *definitions*. It's already hard enough to make different libraries in Coq communicate because they wildly differ in how they define basic notions and set their proofs up; things would get an order of magnitude messier trying to do this between different proof assistants. Especially since the technology of the proof assistant has a strong impact on how you set definitions up…

again, if all you care about is the existence of a proof, some minimal translation functions/lemmas suffice. There was a case study on this for Stdlib-MathComp interoperability in program verification (VST/CompCert) in a recent paper by Appel et al., I believe

I think one of the biggest surprises for mathematicians using proof assistants is how much software engineering matters when writing proofs. You can of course do everything in a single file, but if you are serious about proofs and reusability, you need to set up some infrastructure that pulls in a lot of ideas from software development.

Last updated: Jul 24 2024 at 11:01 UTC