@Jason Gross I saw your post on performance optimization/measurement for proof assistants on Coq-Club. We summarize a lot of work on parallelization/selection for proof assistants in our regression proving papers (ASE '17 & ISSTA '18): http://users.ece.utexas.edu/~gligoric/papers/CelikETAL17iCoq.pdf http://users.ece.utexas.edu/~gligoric/papers/PalmskogETAL18piCoq.pdf
in particular, the ISSTA paper uses a methodology for measuring proof-checking performance in real-world projects, adapted from regression testing in software engineering (SE)
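To give a rough idea of the selection side, here is a toy OCaml sketch of dependency-based proof selection: given which files changed, re-check only the proofs that depend on them. The dependency map and file names are invented for illustration; this is not iCoq/piCoq's actual algorithm, and the real tools track dependencies at much finer granularity.

```ocaml
(* Toy dependency-based proof selection; data is made up for illustration. *)
module S = Set.Make (String)

(* proof name -> files it depends on (hypothetical example data) *)
let deps =
  [ ("lemma_assoc", ["List.v"]);
    ("lemma_comm",  ["Arith.v"]);
    ("thm_main",    ["List.v"; "Arith.v"]) ]

(* keep only the proofs that depend on at least one changed file *)
let select changed =
  deps
  |> List.filter (fun (_, ds) -> List.exists (fun d -> S.mem d changed) ds)
  |> List.map fst

let () =
  let changed = S.of_list ["List.v"] in
  let selected = select changed in
  Printf.printf "re-checking %d proofs: %s\n"
    (List.length selected) (String.concat ", " selected)
```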
@Karl Palmskog Thanks! I'll take a look at those
the Coq mutation analysis paper (ASE '19, http://users.ece.utexas.edu/~gligoric/papers/CelikETAL19mCoq.pdf) also looks at proof-checking performance, but in the context of automatically making small changes to Coq syntax and checking the terms that might be affected, so a more artificial scenario. Serialization via SerAPI is part of the process (though, in my experience, Jane Street has already optimized OCaml-to-sexp serialization a lot)
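For context, SerAPI's serialization builds on Jane Street's sexplib/ppx_sexp_conv, which derive s-expression converters for OCaml types. A minimal sketch of that mechanism, assuming the sexplib and ppx_sexp_conv packages; the record type here is made up for illustration and is not one of SerAPI's actual types:

```ocaml
(* Derive s-expression converters for an illustrative record type and
   round-trip a value through the s-expression form. *)
open Sexplib.Std  (* converters for built-in types like string and list *)

type proof_item = { name : string; deps : string list } [@@deriving sexp]

let () =
  let item = { name = "lemma_app_assoc"; deps = ["List.v"] } in
  (* serialize to an s-expression and print it *)
  let s = sexp_of_proof_item item in
  print_endline (Sexplib.Sexp.to_string_hum s);
  (* parse it back, as a quick round-trip check *)
  let item' = proof_item_of_sexp s in
  assert (item' = item)
```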
(these publications were the core of Ahmet Celik's 2019 PhD thesis at UT Austin, https://ahmet-celik.github.io/papers/CELIK-DISSERTATION-2019.pdf)