@Pierre-Marie Pédrot @Enrico Tassi, I think @Gaëtan Gilbert has a good point in that refine is actually doing stuff that it has no business doing. In fact, I can confirm that if we remove the push_private_parts
from refine.ml, Coq compiles and the test-suite passes. Also, my assert is never triggered.
This makes sense, as the functions creating side-effects are actually trying hard [to the best of their possibilities] to keep things in check when they create the side-effect, and IMHO that's the right approach. PR incoming.
Yes but no.
You need to wrap the non-tactic function so as to replay the side-effects outside of it.
That is, generic_refine shouldn't be handling the environment reset, I agree
but unfortunately, Refine.refine would be unsound w.r.t. that
because side-effects also appear in the evarmap
what non-tactic function?
the thunk from Refine.refine
val refine : typecheck:bool -> (Evd.evar_map -> Evd.evar_map * EConstr.t) -> unit tactic
you mean the
let lift c =
  Proofview.tclEVARMAP >>= fun sigma ->
  Proofview.V82.wrap_exceptions begin fun () ->
    let (sigma, c) = c sigma in
    Proofview.Unsafe.tclEVARS sigma >>= fun () ->
    Proofview.tclUNIT c
  end
should do the side effect dance?
yes
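For reference, a minimal sketch of what that "dance" might look like, assuming the Proofview combinators from the snippet above; the replay_side_effects helper is a placeholder, not an actual Coq API function:

(* Hypothetical sketch: [replay_side_effects : Evd.evar_map -> Evd.evar_map]
   is a placeholder for whatever re-emits the side-effects recorded in the
   evarmap into the ambient environment.  The point is that the wrapper
   itself handles them, so generic_refine no longer resets the environment. *)
let lift c =
  Proofview.tclEVARMAP >>= fun sigma ->
  Proofview.V82.wrap_exceptions begin fun () ->
    let (sigma, c) = c sigma in
    (* replay the side-effects produced by the thunk, outside of it *)
    let sigma = replay_side_effects sigma in
    Proofview.Unsafe.tclEVARS sigma >>= fun () ->
    Proofview.tclUNIT c
  end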
We have way too many states in Coq.
ping @Emilio Jesús Gallego Arias
I agree; however, we are not so far from taming it a bit, down to the proof state and the vernacular state.
How about when you rewrite everything in Rust?
in metacoq
So Lean 4 :)
@Cody Roux you're joking but I seriously plan on writing some critical parts in Rust.
Like, universes.
(Am waiting for a proper OCaml-Rust FFI though, there is ongoing work on that it seems.)
Exciting!
Time to drop dune and use cargo [trolling]
or bazel?
Sounds like the name of a Dutch sandwich, but looks amazing (some pun intended) ;-)
@Cody Roux a PhD student at the MPI went pretty far on the implementation of a Coq checker in Rust by the way.
https://github.com/pythonesque/kravanenn
Hm, sorry, not the right repo.
I was wondering about this based on Jason's defense and some comments from Talia Ringer: how would one implement the kernel if the only considerations were fast proof checking and a fast proof engine?
Aha. That's the trick: proof checking and proof engine are pretty much at odds.
How so?
They are twins, PMP! How can they be at odds?
also: Should I create a new thread, or are we still in-bounds for "scary stuff"?
Evil twins, maybe.
But except if you want to duplicate the code, you need to optimize different things in the two cases.
Oh, that's definitely scary.
It's a bit late for Halloween, but maybe a Red Xmas is thematic.
With my crazy logic-programming glasses on, the proof engine is the kernel plus a bunch of rules for evars. If one gets faster, how can its extension get slower?
Jason mentioned that instantiating Evars was a notable performance issue. In my experience, trying to do partial evaluation using simpl is a non-starter. And then obviously a bunch of Coq libraries are crazy slow to compile, for various reasons.
I am afraid that if we are talking about efficiency, logic programming is already dismissed.
Being serious, mind you, not even trolling (well).
How about the primitives for logic programming? Like Evar instantiation, or unification?
In Matita the engine was a copy-paste of the kernel, where the assert false for the Evar branch was actually doing something. No logic programming there.
Yes, simpl is a non-starter.
So, on the one hand you want efficient evars in the proof engine.
But vm_compute can't make things opaque! We ended up adding axioms in places where things needed to not be unfolded...
The best you can do is probably some mutable in-place structure for that.
Is that how it is in Coq? I missed Emilio's talk yesterday.
vm_compute could perfectly well make things opaque; I have a series of PRs on the subject.
No, Coq uses a pure structure that is essentially a heap.
In the crazy world of logic programming, i.e. Elpi, using mutability + a trail for "evar" assignment was 10x faster than using a purely functional map.
Without GC, without fast access, with a terrible memory representation.
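To make that trade-off concrete, here is a minimal, self-contained OCaml sketch of the mutability-plus-trail technique, as opposed to a purely functional map; the toy term type and all names are invented for illustration, this is not Elpi's or Coq's actual code:

(* Toy sketch of "mutability + a trail" for evar assignment.  An evar is a
   mutable cell; assigning it pushes the cell on a trail, and backtracking
   undoes assignments by popping the trail back to a saved depth. *)
type term =
  | Const of string
  | App of term * term
  | Evar of evar
and evar = { id : int; mutable body : term option }

(* The trail records which evars were assigned since each choice point. *)
let trail : evar Stack.t = Stack.create ()

let assign ev t =
  assert (ev.body = None);
  ev.body <- Some t;          (* destructive, O(1) assignment *)
  Stack.push ev trail         (* remember it so backtracking can undo it *)

(* A choice point is just the current trail depth. *)
let choice_point () = Stack.length trail

(* Backtracking pops the trail down to the saved depth, unassigning evars. *)
let backtrack_to depth =
  while Stack.length trail > depth do
    let ev = Stack.pop trail in
    ev.body <- None
  done

let () =
  let x = { id = 0; body = None } in
  let cp = choice_point () in
  assign x (Const "nat");
  backtrack_to cp;
  assert (x.body = None)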
how about "opaque, but inside this subterm, compute again"? (Speaking of scary stuff)
@Cody Roux this is starting to get spooky
@Enrico Tassi well, this is not directly related to logic programming.
Basically we tried to implement this: https://arxiv.org/abs/1612.06668 using vanilla Coq reduction
mutability + trail is what I see in "typewriter" papers about Prolog.
Maybe we don't call the same thing LP which is why we disagree on its potential efficiency. To me, the defining feature of LP is a baked-in non-deterministic semantics.
In Lean 1, higher-order unification could backtrack. This was a disaster for usability, and Lean 2 just failed if there were remaining flex/flex problems...
Ah yeah, I've seen this paper already;
Leo's idea for Evar contexts seemed clever though: just close all evars with lambdas...
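A rough illustration of that idea, with made-up OCaml types (this is not Lean's actual data structure): instead of carrying a local context, a metavariable gets a function type that abstracts over its context, and every occurrence is the metavariable applied to the context variables.

(* Toy illustration of "close all evars with lambdas": a goal
   [x : A; y : B |- ?m : T] becomes a context-free metavariable
   ?m' : A -> B -> T, and ?m is replaced everywhere by (?m' x y).
   Types and names are invented for illustration. *)
type ty = Arrow of ty * ty | Base of string

type term =
  | Var of string
  | App of term * term
  | Meta of string                 (* a closed metavariable, no local context *)

(* Close an evar over its context: return the closed type and the term to
   use at its occurrences. *)
let close_evar name (ctx : (string * ty) list) (t : ty) : ty * term =
  let closed_ty = List.fold_right (fun (_, a) acc -> Arrow (a, acc)) ctx t in
  let occurrence = List.fold_left (fun acc (x, _) -> App (acc, Var x)) (Meta name) ctx in
  (closed_ty, occurrence)

(* Example: ?m in [x : A; y : B] with type T gives
   ?m' : A -> B -> T, occurring as ((?m' x) y). *)
let () =
  let _ty, _occ = close_evar "m'" [ ("x", Base "A"); ("y", Base "B") ] (Base "T") in
  ()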
As a side-effect (pardon the pun), LP uses unification + backtracking for pretty much everything, and the little game is to put a lot of cuts everywhere in order to counter this nonsense.
@Pierre-Marie Pédrot Even in LP backtracking is not super useful: 10% of the time you use it, 90% of the time you "turn it off!". Still, many things are nice.