Stream: Coq devs & plugin devs

Topic: Proof checking performance

Karl Palmskog (May 10 2020 at 17:13):

@Jason Gross I saw your post on performance optimization/measurements for proof assistants on Coq-Club. We summarize a lot of work on parallelization/selection for proof assistants in our regression proving papers (ASE '17 & ISSTA '18): http://users.ece.utexas.edu/~gligoric/papers/CelikETAL17iCoq.pdf http://users.ece.utexas.edu/~gligoric/papers/PalmskogETAL18piCoq.pdf

Karl Palmskog (May 10 2020 at 17:14):

in particular, the ISSTA paper uses a methodology for measuring proof-checking performance in real-world projects, adapted from regression testing in software engineering

Jason Gross (May 11 2020 at 03:03):

@Karl Palmskog Thanks! I'll take a look at those.

Karl Palmskog (May 11 2020 at 04:16):

the Coq mutation analysis paper (ASE '19, http://users.ece.utexas.edu/~gligoric/papers/CelikETAL19mCoq.pdf) also looks at proof-checking performance, but in the context of automatically making small changes to Coq syntax and checking the terms that might be affected, so a more artificial scenario. Serialization via SerAPI is also part of the process (though in my experience, Jane Street has already optimized OCaml-to-sexp conversion a lot)

Karl Palmskog (May 11 2020 at 04:42):

(these publications were the core of Ahmet Celik's 2019 PhD thesis at UT Austin, https://ahmet-celik.github.io/papers/CELIK-DISSERTATION-2019.pdf)


Last updated: Oct 16 2021 at 07:02 UTC