I made another attempt at a Coq Platform release. Today GitHub appears to be extremely slow: downloading opam fails about half of the time, and I definitely don't get much further without timeouts. Do others have issues as well?
(Maybe I should have kept my Freudian typo - tired instead of tried ...)
Seems to be fixed again. Bets on whether I can succeed today are still open ;-)
yeah I have no problems with anything GitHub today
what is the success condition for a Platform release? Didn't CI for main finish just fine a couple of days ago?
@Karl Palmskog : the CI tests only a subset of the full release because of the 6 h limit. The actual installers (complete extended) I build locally, and this has to work as well, which can be a bit of effort because it is not tested in CI.
@Michael Soegtrop Would it make sense to split the job into multiple GH jobs that depend on each other's artifacts? Either linearly (each one just picks up with an opam install of the next chunk of packages) or in parallel (merging artifacts from multiple opam installs)?
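The linear variant of this proposal could be sketched roughly as follows. This is a hypothetical illustration, not an actual workflow: the package names are made up, and in real CI each chunk would be a separate job that restores the previous job's opam switch from an artifact, installs its chunk, and uploads the result.

```python
# Sketch: split a package list into sequential CI chunks. Each chunk would
# become one GH job: restore artifact -> opam install <chunk> -> upload.
# Package names below are illustrative placeholders, not the real pick.

def chunk(packages, n_jobs):
    """Split packages into at most n_jobs order-preserving chunks."""
    size = -(-len(packages) // n_jobs)  # ceiling division
    return [packages[i:i + size] for i in range(0, len(packages), size)]

packages = [f"coq-pkg-{i}" for i in range(10)]
for job_id, group in enumerate(chunk(packages, 3)):
    print(f"job {job_id}: opam install {' '.join(group)}")
```

Keeping the chunks order-preserving matters for the linear variant, since later packages depend on earlier ones already being in the switch.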
@Jason Gross : possibly, but I think it makes more sense to simply rent a dedicated server for this. On Ubuntu I am already building quite a few variants (older Coq versions), so if I split it up, I guess I would end up with 200..400 jobs, which might also take literally forever to get through. On my M1 Pro MacBook I can build the extended pick in less than one hour, and on my workstation in half an hour.
as we've talked about before, it would be nice to have a benchmark mode to just see how long the extended pick takes on certain CPU/memory configurations. I personally believe the *X3D AMD CPUs could be a nice basis for proof engineer workstations (due to their larger on-die cache and Coq being memory-bound), but a baseline is needed for comparison
in fact, it would be a nice community service to publish "Coq Platform scores" regularly for some common machine configurations
Since Jason's surprised by the performance gap, it's probably worth noting that GitHub runners go back to even Haswell processors; that architecture is from 2013, and usually you only get 2 cores (I had noticed such stone-age processors in CI logs). On https://www.cpubenchmark.net/cpu_list.php, an M1 Pro seems only about 50% faster than that specific model, but top workstations get up to 5x faster.
GitHub docs:
https://docs.github.com/en/actions/using-github-hosted-runners/about-github-hosted-runners#supported-runners-and-hardware-resources -> https://learn.microsoft.com/en-gb/azure/virtual-machines/dv2-dsv2-series#dsv2-series
Sorry: I meant 5x faster than the full processor. Since a runner gets 2 vCPUs out of 12 cores / 24 threads, presumably you're getting 1/6th to 1/12th of the processor's power. You have to multiply both speedups; this is very rough, but it suggests the "dedicated server" could be more than 10x faster.
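The back-of-the-envelope multiplication works out like this. All numbers are assumptions taken from the messages above (runner host with 12 cores / 24 threads, a workstation CPU benchmarking ~5x the full host chip), not measurements:

```python
# Rough speedup estimate: dedicated workstation vs one GitHub runner.
host_cores, host_threads, runner_vcpus = 12, 24, 2
workstation_vs_host = 5.0  # top workstation CPU vs the whole runner host CPU

# A runner's 2 vCPUs are either 2 full cores (1/6 of the host) or
# 2 hyperthreads (1/12 of the host), so the combined speedup lands
# somewhere in this range:
low = workstation_vs_host * host_cores / runner_vcpus     # 5 * 6  = 30.0
high = workstation_vs_host * host_threads / runner_vcpus  # 5 * 12 = 60.0
print(low, high)  # prints 30.0 60.0
```

Either bound is comfortably above the "more than 10x" figure mentioned in the thread.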
@Karl Palmskog : indeed it would make sense to record build times and settings and provide a script to upload this together with some machine specs. But my gut feeling is that for interactive work the M1 Pro / M2 Macs are currently the best compromise. Sure, my custom-built 32-core / 64-thread Threadripper with really massive cooling is faster, but in the end not that much, unless the proof is designed for massive parallelisation. Even a parallel Coq Platform build ends up running on only about 4 threads, because only a few packages remain at the end and they build almost sequentially.
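A minimal sketch of what such a record-and-upload script could collect, assuming a JSON record format that a hypothetical "Coq Platform scores" aggregator would accept (the field names and the placeholder build step are inventions for illustration):

```python
# Sketch: time a build and bundle the result with portable machine facts
# into a JSON record. The record schema here is a made-up example.
import json
import os
import platform
import time

def benchmark_record(build):
    """Run `build` once and return a JSON-serialisable result record."""
    start = time.monotonic()
    build()  # e.g. the extended-pick build would run here
    elapsed = time.monotonic() - start
    return {
        "machine": platform.machine(),     # e.g. "arm64", "x86_64"
        "system": platform.system(),       # e.g. "Darwin", "Linux"
        "cpu_count": os.cpu_count(),
        "build_seconds": round(elapsed, 1),
    }

record = benchmark_record(lambda: None)  # placeholder instead of a real build
print(json.dumps(record))
```

A real version would replace the placeholder with the actual platform build invocation and include settings such as the pick level and opam jobs count, so that scores from different machines are comparable.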
I don't think we currently know what "typical" interactive Coq use looks like. There was some data captured from interactive Coq sessions here, but to my knowledge it was never aggregated to look at performance requirements: https://github.com/uwplse/analytics-data
Last updated: Oct 08 2024 at 14:01 UTC