Just came across this link: https://arxiv.org/abs/2009.03393. The abstract looks impressive. Any opinions on this?
We explore the application of transformer-based language models to automated theorem proving. This work is motivated by the possibility that a major limitation of automated theorem provers compared to humans -- the generation of original mathematical terms -- might be addressable via generation from language models. We present an automated prover and proof assistant, GPT-f, for the Metamath formalization language, and analyze its performance. GPT-f found new short proofs that were accepted into the main Metamath library, which is to our knowledge, the first time a deep-learning based system has contributed proofs that were adopted by a formal mathematics community.
There's already some discussion here: https://leanprover-community.github.io/archive/stream/219941-Machine-Learning-for-Theorem-Proving/topic/GPT-f.20paper.html
Looks like there is a need to create a cross-system chat for provers and machine learning.
It might be better to have general learning-for-proving discussions on the Lean Zulip, and limit ours to Coq-specific stuff. I don't think reasonable cross-posting solutions will appear in the near future.
It shouldn't be too hard to build a Zulip-to-Zulip bot limited to a specific stream on two instances (though as of today I don't think one exists).
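For what it's worth, a one-way relay along those lines fits in a few lines with the official `zulip` Python client. This is just a minimal sketch, assuming two bot accounts (one per instance) with their own zuliprc config files; the stream name and file paths are placeholders, not anything that exists today:

```python
# Minimal one-way relay between two Zulip instances (sketch only).
# Assumes two bot accounts, each configured via its own zuliprc file;
# the stream name and config paths below are hypothetical placeholders.
import zulip

source = zulip.Client(config_file="coq-zuliprc")        # instance we read from
destination = zulip.Client(config_file="lean-zuliprc")  # instance we post to

RELAYED_STREAM = "Machine Learning for Theorem Proving"  # placeholder stream name

def relay(message):
    # Forward only stream messages posted to the chosen stream.
    if message["type"] != "stream" or message["display_recipient"] != RELAYED_STREAM:
        return
    destination.send_message({
        "type": "stream",
        "to": RELAYED_STREAM,
        "topic": message["subject"],
        "content": f"**{message['sender_full_name']}**: {message['content']}",
    })

# Blocks forever, calling `relay` on every new message the source bot can see.
source.call_on_each_message(relay)
```

Making it two-way would also need some loop avoidance (e.g. skipping messages sent by the relay bots themselves), which is part of why a bot relay is clunkier than a genuinely shared stream.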
They do have some plugin for relaying messages, but what everyone is usually looking for is a completely shared stream, and I don't think the backend design allows that easily. I think message relaying through some bot is a poor substitute and would rather just keep the discussions separate.