@Karl Palmskog writes:
CoqHammer was one of the most challenging projects I've ever dunerized, and the main developer there prefers coq_makefile for local development going forward, so Dune maintenance will be challenging
What exactly was challenging about it? From a quick look I see nothing strange. Note that I think recent Dune versions directly understand C and C++ executables, maybe you could take advantage of that? Also, it may be more convenient to actually declare .vo files as a test target instead of manually calling Coq.
as you might have seen, I had to preserve the coq_makefile build, which used .mllib
But you switched to mlpack right?
the main challenge was in making the coq_makefile build and the Dune build completely equivalent
yes, in the end it seems to work now with mlpack
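(for context: both .mllib and .mlpack files are plain lists of OCaml module names that coq_makefile links into the plugin; .mlpack additionally packs them under a single top-level module, which is closer to how Dune links libraries. A hypothetical example, with invented module names:)

```
Hammer_errors
Hammer_lib
Hammer_main
```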
I see, indeed keeping both build systems is 4x the work, not sure I'd recommend that
the maintainer has a fairly conservative style, so if I suggested a replacement, that PR would not be likely to get merged
I still don't know how to specify .vo targets that should be invokable as if they were tests, i.e., something that can be triggered at the opam level with the -t option
CoqHammer has "smoke tests" that should be run but whose output should be completely ignored
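one sketch of how this could look in Dune, assuming a hypothetical Smoke.v compiled by a coq.theory stanza: dune runtest (which opam's -t typically triggers via with-test) would then just force the build of the .vo and ignore its output

```
; dune: attach the smoke test's .vo to the default test alias
(alias
 (name runtest)
 (deps Smoke.vo))
```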
finally, "challenging going forward" means that if some change to the Makefile is made, one has to conjure up an equivalent in the Dune world
Yeah, experience has shown that it is not a good idea to keep two build systems
@Emilio Jesús Gallego Arias but I think we have to choose between the following bad options:
1) using coq 1.0 once it is out (this leaves a huge "backlog")
2) using coq 0.2 and keeping coq_makefile around until 2.8 ("two build systems")
to me, option 2 still seems preferable.
Does 2.8 fix other issues like IDE integration?
do we even have a definition of what a fully "fixed integration" means? To me at least, we need some way to have all of PG/VsCoq/CoqIDE automatically read Dune-built .vo files as indicated in _CoqProject, and for the same _CoqProject to work both with Dune and coq_makefile without changes
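to be concrete, a minimal _CoqProject of the kind I mean (hypothetical layout and names):

```
-R theories Xyz
theories/Foo.v
```

coq_makefile -f _CoqProject -o Makefile consumes it, and PG/CoqIDE read the same -R line to set up logical paths; the question is whether Dune can consume or generate the equivalent information.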
many people stumble into proper installs of Coq itself, which at least gives you coq_makefile. If we see projects switch to Dune and suddenly drop IDE support when building with coq_makefile, this would be bad for the ecosystem (everyone and their brother has to figure out how to install Dune and use dune build/dune build -p <name>)
for example, adding Dune across Linux/Mac/Win is seemingly not going to happen for the 8.12 platform: https://github.com/coq/platform/issues/11
for the same _CoqProject to work both with Dune and coq_makefile without changes
The plan I keep hearing is to replace _CoqProject by a better format.
I can guess sensible reasons, but I assume there will be some backward compatibility
I have heard this as well, but I think I know the Coq ecosystem well enough now to estimate that a switch to something new instead of _CoqProject might take years
Oh sure
suddenly drop IDE support when building with coq_makefile
Coq's not Apple, but I'm sure I misunderstand :-)
still, _adding_ support for a _CoqProject alternative, without regressions, makes sense to me.
well, I mean "IDE support" in the sense of: you open emacs/code/coqide and it just works. Perhaps projects will be IDE+Dune by default, but there is some small change you can do to have IDEs+coq_makefile work again. This would still be bad.
I'd expect coq_makefile to stay the default.
in any case, Coq has sane deprecation policies.
and an extensive CI.
the last few comments in the following issue are unfortunately not encouraging w.r.t. reaching ideal IDE support: https://github.com/ProofGeneral/PG/issues/477
I do expect the "deprecation period" will need to be longer than usual — but before that discussion, the alternatives need to exist.
(taking a look at the issue)
personally, I find it depressing that PG developers want to keep their special Coq build support inside PG. This is yet another build system in the growing collection that we can't even run in CI in a reasonable way
looking at https://github.com/ProofGeneral/PG/issues/477#issuecomment-612879449, I don't think you can blame PG for offering a better user experience.
Yes, but there are important features missing, and there is the general problem that Dune cannot deduce dependencies before the file has been saved. I am not sure it is desirable to save a file when Proof General processes a Require command.
in any case, nothing in that issue suggests breakage to what works today. You'll need to switch for new features, but that's a different story.
AFAICT, it's not clear how to regain today's user-experience with dune.
to me, having this kind of stuff in _CoqProject is terrible, and there must be some sanctioned alternative:
-R _build/default/coq/ Xyz
oh we agree on that
but adding code to emacs to run dune coqtop foo.v seems acceptable
I'm not sure how you read what in the issue. AFAICT, the open question is how to adapt today's "special Coq build support inside PG".
without it, PG might go back to the inferior UX in CoqIDE/VsCoq (and not quite)
what I would like to see would be auto-discovery in _build/... based on what people currently write in _CoqProject for coq_makefile
for a proper solution, find out all the features they need, and integrate proper support for them.
well, _build/default isn't reliable
_build can live in a parent folder, and default can be replaced by something else
they can, but I would be fine if it just worked in most cases
I'm not sure why generating _CoqProject doesn't work, but I assume Emilio has some reason
if we generate _CoqProject for Dune, then it seems we suddenly have to generate it for coq_makefile as well
why? _CoqProject's an input for coq_makefile?
but can you say why you'd want auto-discovery?
the only "problem", if there is one, is that dune wants more info and in a more structured form. I assume that has valid reasons.
Since migration is not easy, all the existing code has to stay around for quite a while.
what I want is for projects to have the possibility to support the following out of the box, without any hacks:
okay
but then, why would you read _CoqProject for a dune project?
AFAICT, it contains a _projection_ of the info dune has.
Dune itself doesn't have to read _CoqProject
of course not.
and neither does the editor, no?
but in effect _CoqProject is the main signal to CoqIDE and PG of where things are found, and it should not have to be changed around with hacks (just because you pick one IDE or build system)
I agree the alternative shouldn't use hacks.
is the buried lede that hacks are unavoidable?
I'm not sure, but it only takes key developers being convinced that this is so [hacks are unavoidable] for it to become the case in practice
should not have to be changed around with hacks (just because you pick one IDE or build system)
do you want one standard interface between IDEs and build systems?
as a start, sure. But ideally we would have some Coq project metadata format that could not just be consumed by build systems and IDEs, but also by other tools
a standard interface seems unnecessary if the build systems are just the 2 ones, which seems everybody's working assumption...
... if you want something better, plausible approaches probably exist
our meta.yml in coq-community is just a convention after all, only used by our templates. Some of the data there in my opinion belongs elsewhere, since it's so general.
Ah. You want a generic format for (Coq) IDE project files :-).
this was reminding me of Scala's Build Server Protocol (BSP), but this might be a different problem.
my main concern is actually analysis tools
I wonder what specific analyses you have in mind, but I can totally imagine nontrivial requirements.
opam files express some information about how to build a project, but not enough to do advanced stuff. _CoqProject sometimes gives a piece of the puzzle, but it's not standardized enough
and dune files still lack some info?
well, you would have to figure out if Dune even works
I have more questions about your requirements, but I'm not the actual target audience.
I totally agree this could be a significant undertaking, and it'd probably be ideal to collect all the requirements _before_ transitioning the whole ecosystem.
I still cannot tell if the "standardized format" would have to be dune files or something else, but I suppose that's a million $ question.
all I can say: in my industrial team we'd want both jump-to-definition and async proofs, and we have 0 IDEs supporting that.
the kicker to me is not: async proofs inside IDEs. It's intelligent use of async proofs and parallelism in CI.
didn't your paper say that make -jX was enough CI parallelism?
make -jX can give you a one-digit speedup, but more intelligent use of .vos/.vio and incrementality can give a two-digit speedup
and async proofs seem likely to have lower throughput in CI (which is why I only use it in IDE)
I mean, we'd kill for any CI speedup, but if we managed to use dune caching we'd be more than happy.
Most of our build is embarrassingly parallel, but for good results we had to implement composition and incrementality by hand.
I still think binary caching is not the way to go, better to persist some simple metadata
make vos is also cool, but make vos; make vok is slower than make
not always, but in general, sure.
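for reference, the workflow in question, against a coq_makefile-generated Makefile (just a sketch; -j8 is arbitrary):

```sh
make -j8 vos   # fast first pass: .vos interfaces only, proofs deferred
make -j8 vok   # second pass: check the deferred proofs
make -j8       # plain full build; usually faster than vos+vok combined
```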
curious on that too, AFAICT dune's caching sounds a bit like bazel's
that runtime was on one project, after adding all needed Proof using./Set Default... everywhere (minus on Program Obligations)
anyway, I'll finally get https://users.ece.utexas.edu/~gligoric/papers/PalmskogETAL18piCoq.pdf on my reading list
from my recollection, you can't easily control number of workers or scheduling with make vok
I thought there was just one thread? but I've never tried multithreaded proving
but anyway, my intuition is that the incrementality itself is the largest gain once you have basic make -jX vos or make -jX vio on an annotated codebase
at least in VsCoq, the overhead of delegating proofs seems terrible for throughput
right, that's true!
it's only sad the vos/vok files must be blown away by full builds :sad:
"blown away"? you mean the combination make vos; make vok?
anyway, for CUDW I plan to try to revive an effort to get the votour program up to date so it can extract tons of interesting stuff from .vio/.vos
useful for intelligent decisions after you have done your basic make -jX
(ideally, even the first make -jX would not touch unaffected parts of the project)
re "blown away", running make will clear any .vos/.vok files, since they _could_ be stale (and leave them with size 0)
one thing that would be exciting is to actually use opacity information to rule out that some changes cannot affect other proofs (e.g., you write a definition with Qed, which means proofs using this definition cannot fail simply because you change the definition body)
I thought dune did that? Or does it just ignore changes to comments?
we did not do this [opacity analysis] in our papers, but it's a simple thing to do by modifying the Coq term dependency graph
Dune does not look at the term level at all currently, though Emilio has some experimental work in that direction I believe. I'm not even sure Dune uses intelligent hashing of files that ignores comments. The ideal would be to have Coq lex or parse the file and compute the hash based on the lex or parse output.
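as a sketch of the "hash that ignores comments" idea (a hypothetical helper, nothing Dune or Coq ship; note that real Coq comments nest and may contain strings, which this simple regex does not handle):

```python
import hashlib
import re

def coq_content_hash(source: str) -> str:
    """Hash a Coq file's contents while ignoring (* ... *) comments,
    so that comment-only edits do not trigger rebuilds.
    Simplified sketch: nested comments and strings containing
    comment delimiters are not handled."""
    # drop non-greedy (* ... *) spans
    stripped = re.sub(r"\(\*.*?\*\)", "", source, flags=re.DOTALL)
    # normalize whitespace so reflowed text hashes identically
    normalized = " ".join(stripped.split())
    return hashlib.sha256(normalized.encode()).hexdigest()
```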
since it may be relevant, I will drop a paper where we looked specifically at Coq term-vs.-file dependency graphs, and how the incremental analysis works formally (incorporating opacity would be a small simple modification): http://users.ece.utexas.edu/~gligoric/papers/PalmskogETAL20Chip.pdf
if nothing else, it has good refs to build system research
the amazing thing in Coq, which means we should be blowing test selection/parallelization in Java and the like out of the water, is that type checking is deterministic (and has predictable runtime, but this is just a cherry on top)
Not quite sure how type checking is less deterministic in Java... I’d also expect it to be faster in Java than Coq...
But I’ll have to read that paper first, I guess.
in Java, there is to my knowledge no way to rule out nondeterministic tests ("flaky tests"), most typically due to concurrency (but can also be due to use of external resources via network)
Ah! Okay, I get it.
flaky tests are a huge deal to, e.g., Google. They prevent you from trusting that "green" is "green".
the only near-equivalent in the Coq world we have is probably universe consistency, the checking of which can be deferred/avoided under certain circumstances, so Qed might not be full Qed
we proposed that projects that are paranoid about universe consistency could use vio2vo as a compromise, and load the resulting .vo files as a final step
unfortunately I don't think the same can be done for vos
Last updated: Jun 04 2023 at 23:30 UTC