@Pierre-Marie Pédrot I am opening this thread so we can discuss the environments at proof-closing time, and what to do with side effects, etc... IMHO it is going to be more efficient to chat here than on GitHub, as there is quite a bit of mess.
So, as of today, if I understand correctly, there are two code paths followed by different clients in terms of the env: the simple compiler, coq-lsp, etc. reuse the env as it is at the end of the proof. This is done for two reasons: first, to avoid redoing all the work for constants that are already there; second, as a commitment to the soundness of the vernac API, in the sense that a loop executing commands should produce correct results without relying on hidden invariants, as long as Vernacstate.t is threaded correctly.
The STM does use the initial environment, but then declare has some hacks to re-add things that were not there, plus quite a few other tidbits.
So, I think I can help enforce the correct passing of envs by typing, but the first question is: what are the possible cases? What are the invariants that we need to enforce in the kernel?
From a purely formal perspective, scoqc doesn't currently break any invariant of the kernel; it simply behaves differently than coqc
(Assuming the safe_env and the env are synchronized, I didn't check but I hope it is the case.)
In practice, it means that scoqc always treats side-effects as if they were part of a Defined proof
This is somewhat wrong from the user's perspective though, even if the generated vo is fine.
I don't know what the STM does to the poor initial environment of the proof, but from my kernel POV, it should be the same at the entrance of the proof and right before the Qed
Or, if you prefer, before calling the kernel the Qed should restore the initial proof env
This is a more correct way to phrase this.
Then the upper layers may export a few side-effects if they want so.
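To make the invariant above concrete, here is a toy OCaml model (all names and types here are made up for illustration; this is not Coq's actual API): an opaque Qed hands the kernel the env from proof entry, while the linear Defined model keeps the side effects visible in the resulting env.

```ocaml
(* Toy model: an env is just a list of constant names, and a proof
   state remembers the env snapshot at proof start plus the side
   effects produced while the proof runs. Purely illustrative. *)

type env = string list

type proof_state = {
  initial_env : env;          (* env snapshot at proof entry *)
  side_effects : string list; (* constants produced during the proof *)
}

let start_proof env = { initial_env = env; side_effects = [] }

let add_side_effect ps c = { ps with side_effects = c :: ps.side_effects }

(* Qed: the kernel is called against the *initial* env; side effects
   are inlined into the opaque term and do not leak into the env. *)
let qed ps = ps.initial_env

(* Defined, linear (pre-STM) model: the env after the proof keeps the
   side effects, so later commands can see them. *)
let defined ps = List.rev ps.side_effects @ ps.initial_env

let () =
  let ps = add_side_effect (start_proof ["nat"]) "abstract_subproof" in
  assert (qed ps = ["nat"]);
  assert (defined ps = ["abstract_subproof"; "nat"]);
  print_endline "ok"
```

Whether and how the upper layers re-export those side effects after Qed is then a policy decision outside this model.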
Thanks; umm, I'm still seeing some mess here, give me an extra day to investigate more before we push the fix.
Pierre-Marie Pédrot said:
I don't know what the STM does to the poor initial environment of the proof, but from my kernel POV, it should be the same at the entrance of the proof and right before the Qed
This is what the STM tries to enforce (for Qed proofs) but it is not what happens for Defined proofs or what used to happen before the STM.
If the proof contains, say, abstract
or any other side effect, then pre-STM the env at Qed time was the one with the side effects (that is, not the environment just after the proof starts). The model was really "linear".
In order to preserve beloved retro-compatibility of Defined + abstract, IIRC, the STM may use the env just before the Defined.
In order to preserve the "caching effect" of abstract exact_no_check
in a Qed proof, the STM uses the kernel "backdoor" to not re-typecheck the initial spine of lambdas/let-ins in the proof, which corresponds to the already typechecked and inlined side effects.
In the "regular" Qed case, the STM should play the Qed on the env just after the proof begins.
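A toy OCaml sketch of the "caching" trick just described (again with invented names, not Coq's real representation): already-checked side effects sit as an initial spine of let-ins in the proof term, and the checker is told how many bindings of that spine to trust rather than re-typecheck.

```ocaml
(* Toy proof terms: a spine of let-ins (the inlined, previously
   checked side effects) ending in the actual proof body. *)

type term =
  | LetIn of string * term  (* an inlined side effect *)
  | Body of string          (* the actual proof body *)

(* Returns the names that actually get (re-)typechecked, skipping the
   first [trusted_spine] let-ins via the "backdoor". *)
let rec recheck ~trusted_spine = function
  | LetIn (_, rest) when trusted_spine > 0 ->
      recheck ~trusted_spine:(trusted_spine - 1) rest
  | LetIn (name, rest) ->
      name :: recheck ~trusted_spine:0 rest
  | Body b -> [b]

let () =
  let proof = LetIn ("eff1", LetIn ("eff2", Body "pf")) in
  (* with the backdoor, only the body is checked *)
  assert (recheck ~trusted_spine:2 proof = ["pf"]);
  (* without it, everything is rechecked *)
  assert (recheck ~trusted_spine:0 proof = ["eff1"; "eff2"; "pf"])
```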
@Enrico Tassi that's not what the code is doing now, you have to explicitly re-export the side-effects in Defined mode
Outside of the kernel, I mean.
Last updated: Sep 09 2024 at 05:02 UTC