As to CertiCoq, it’s currently still somewhat a work in progress, but there is a Gallina -> Gallina certified extraction pipeline in MetaCoq, which does all the fancy proof erasure, removes unit-typed arguments to functions, and so on, with a proof that it preserves evaluation. So you should be able to write a program in Gallina, use that extraction phase, then CertiCoq, then CompCert, and get a fully-certified pipeline from Coq down to the metal.
But there is probably less support for adding some new functions at each level going down, and still proving these correct…
I would settle for performing extraction inside Coq itself, to get "regular" functions from all my fancy sigma-type-functions which generate result+proof, but is this possible currently?
this is something that HOL4 can't do, since the logic can't support sigma types...
It’s definitely on rails! You’d have to ask @Matthieu Sozeau or @Yannick Forster for the exact state of the formalization, but if it’s not there yet it is going to be very soon.
but just so we are talking about the same thing. Let's say I have this type:
Definition accept_t (rs : r char * list char) : Type :=
{ accept_p rs }+{ ~ accept_p rs }.
where r is a regexp and accept_p means the given sequence is in the language defined by r. And now I define:
Equations acc (rs : r char * list char) : accept_t rs by wf rs (* ... *) := (* ... *)
Does this mean I can get a version acc_ex of acc which returns only, say, Left or Right, and a generated proof that acc and acc_ex are related?
Not exactly, because most of this lives at the meta-level.
Or rather, your definition would be reified to some inhabitant of the type term as defined in MetaCoq, and you can then run extraction on that, to get another term in an AST that has a special box constructor, and all the propositional content is transformed to that box constructor.
I don’t think there is support yet to unquote that extracted term back to Coq, but up to mapping box to unit this should not be too hard (there’s already an unquoting for the standard AST).
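To make that concrete, here is a hedged sketch of the meta-level round trip: the quoting command is standard template-coq, while the exact spelling and module path of the erasure command differ between MetaCoq versions, so that part is only indicated in a comment.

From MetaCoq.Template Require Import All.

(* Reify a Gallina definition into the term AST: *)
MetaCoq Quote Definition idnat_syntax := (fun n : nat => n).

(* With the erasure plugin loaded, a command along the lines of
     MetaCoq Erase (fun n : nat => n).
   prints the corresponding term of the erased AST, where erased
   propositional content shows up as the extra box constructor. *)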
OK, at the moment, I think I would settle for a version I can run that doesn't have the propositional content. The main drawback of sigma-types is arguably that they can't be efficiently computed with inside Coq when proofs are big.
But even if unquoting was there, this does not give you a proof term acc_rel relating acc to the unquoted erasure (your acc_ex). This is because that proof currently lives at the meta-level.
but can't I do some trick to propagate the meta-level proof for my instance?
Karl Palmskog said:
OK, at the moment, I think I would settle for a version I can run that doesn't have the propositional content. The main drawback of sigma-types is arguably that they can't be efficiently computed with inside Coq when proofs are big.
Then I think that we are not too far from having a command MetaCoq extract acc into acc_rel. that generates acc_rel.
Karl Palmskog said:
but can't I do some trick to propagate the meta-level proof for my instance?
I am not completely sure, because that meta-level proof is really about evaluation, and since everything in Coq is up-to conversion, I think it would be hard to state.
I think I could live with adding the meta-level proof as an axiom, if I at least knew exactly what the property looked like
If acc and acc_rel were both booleans, then the property would say that this must be the same boolean
Same goes for any hereditarily inductive type
If you get to functions, things get hairier; I guess some fashion of parametricity enters the game
But I guess in your case you would probably get something like "the two functions take equal inputs to either both left or both right".
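For concreteness, a hedged, generic rendering of that property; agrees, f and g are illustrative names, and instantiating f with acc and g with a boolean-valued acc_ex gives the statement meant above:

(* A sumbool-valued function and a boolean function agree when they take
   equal inputs to both left / both right. *)
Definition agrees {A : Type} {P Q : A -> Prop}
           (f : forall a, {P a} + {Q a}) (g : A -> bool) : Prop :=
  forall a, if f a then g a = true else g a = false.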
all the functions I'm interested in extracting inside Coq either return sumbool or sig. For sumbools I guess one just wants a version that instead returns sum or bool, and for sig it should be obvious, just throw out the exist
I am not completely sure how this would fare if you also have sig as inputs, but yeah, that should be the spec
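A similarly hedged sketch of the sig case (all names illustrative): the erased function returns just the witness, and the spec says it agrees with proj1_sig of the original, so in particular it still satisfies the predicate.

(* Throwing out exist: g returns the witness computed by f. *)
Definition sig_agrees {A B : Type} {P : A -> B -> Prop}
           (f : forall a, {b : B | P a b}) (g : A -> B) : Prop :=
  forall a, g a = proj1_sig (f a).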
And as to generating the proof, now that I think about it I am not even completely sure how this would work, because those proofs really are about the syntax of your terms, something that does not even make sense in Coq (again, everything you can express is up to conversion)
Plus, there’s 100k lines of Coq proving confluence, subject reduction and whatnot before you can get to the proof you are after. Not sure how to account for that
for some context, there was the advice here to never use sigma types, since they don't compute well: https://www.cs.princeton.edu/~appel/papers/ecosystem.pdf
However, I think MetaCoq could rehabilitate them a bit, but I guess meta questions abound.
I spun off the topic a bit so as to hopefully avoid some confusion
Yes, I think once the proofs have matured a bit we can think about what we want to do with them, and indeed think about all the meta-stuff…
As for sigma-types, maybe extraction inside of Coq is a solution, but there might be others: a good notion of opaqueness would probably help, and maybe replacing Prop by strict propositions might also be helpful to ensure proofs are really irrelevant and can be safely ignored
naive question: is conversion really such a big problem? Maybe it complicates stating what we want, but I’m not even sure how…
I don’t know, but you can’t really formulate in Coq "this term t reduces to a t' such that…"
Ofc you can replace that by propositional equality, but t = t' is much weaker
exactly — probably something like erase_result (original_function inputs) = erased_function (erase_inputs inputs)
probably a logical relation is needed, so that “=” becomes “maps equal inputs to equal outputs” etc as appropriate
yes, for many use cases "extensional equivalence" is enough to get work done
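For reference, a minimal sketch of the logical-relation shape being gestured at, over a toy type grammar; everything here is illustrative, and in the real erasure setting the two sides would of course live at different (original vs erased) types.

Inductive ty : Type :=
| TBool : ty
| TArr  : ty -> ty -> ty.

Fixpoint interp (t : ty) : Type :=
  match t with
  | TBool    => bool
  | TArr a b => interp a -> interp b
  end.

(* At base type, related means equal; at function types, related
   functions take related inputs to related outputs. *)
Fixpoint rel (t : ty) : interp t -> interp t -> Prop :=
  match t return interp t -> interp t -> Prop with
  | TBool    => fun x y => x = y
  | TArr a b => fun f g => forall x y, rel a x y -> rel b (f x) (g y)
  end.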
Yeah, maybe that would work, plus the knowledge that by canonicity if b = true then b actually evaluates to true
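As a tiny internal illustration of what that canonicity remark is relying on (the boolean expression is just a stand-in): a closed boolean is identified with its value by conversion, so eq_refl suffices.

(* andb true true evaluates to true, so eq_refl type-checks. *)
Example closed_bool_computes : andb true true = true := eq_refl.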
and “propositional equality is weaker” seems only half-true. Internally, you cannot even state anything that you’d lose. Externally, I’d think the generic metatheory lets you recover everything you might want.
Yep, fair enough
Maybe I’m too pessimistic :)
yeah that use of canonicity is an example, there could be more.
So maybe we can hope to have MetaCoq Extract foo to foo_ex related by foo_rel. with foo_rel mirroring the result of the meta-proof!
That would be really neat indeed
Meven Lennon-Bertrand said:
As for sigma-types, maybe extraction inside of Coq is a solution, but there might be others: a good notion of opaqueness would probably help, and maybe replacing Prop by strict propositions might also be helpful to ensure proofs are really irrelevant and can be safely ignored
Ok, these are exactly the two solutions proposed in the paper :grinning:
Just to take apart my argument: that won’t cover, e.g., execution traces or computational complexity (but you can’t just talk about those internally); does this cover normalization on open inputs?
let's be real: regular Coq users are not going to jump into using SProp anytime soon, especially given the lack of library support
I don't know if opaqueness-related stuff is also going to change anytime soon for Coq itself?
SProp is great, but MetaCoq Extract foo to foo_ex related by foo_rel. would apply to millions of lines of Coq already written
Paolo Giarrusso said:
does this cover normalization on open inputs?
You mean, the current proof in MetaCoq?
I meant more like: if you implement the command (possibly via the current proof in MetaCoq), it could be run in practice on a ton of code
in contrast to using SProp, which appears to require structural changes (compared to regular Prop idioms) to apply well
For sure, we are very far from a good integration of SProp on so many levels… Even before adapting libraries, the higher layers of Coq are already completely broken on it currently
this command would also let you compute past irrelevant Prop axioms etc? We had discussed here how to do this with PMP, but extraction would be more powerful
re SProp, the only attempt to use it I’ve seen close got reverted for performance regressions (https://gitlab.mpi-sws.org/iris/stdpp/-/merge_requests/322)
Paolo Giarrusso said:
this command would also let you compute past irrelevant Prop axioms etc? We had discussed here how to do this with PMP, but extraction would be more powerful
Now that you say it, I’m not sure exactly what would be the computational behaviour of such a thing
In particular, erasure might get rid of proofs of termination, and so the extracted term might be non-terminating
But you did say Gallina -> Gallina?
So I might have been too eager when saying you could unquote back by just mapping box to unit…
Actually, it takes a Gallina term to a language with one extra primitive (box), representing all that’s been erased
So Acc erases to box, but operations on box are axiomatized?
Actually, I don’t know exactly what happens
The Ast is here
Either way, it makes some sense; for Gallina to Gallina, you might not be able to erase subsingletons (at least Acc) as you’d do when erasing to non-Gallina…
Paolo Giarrusso said:
re SProp, the only attempt to use it I’ve seen close got reverted for performance regressions (https://gitlab.mpi-sws.org/iris/stdpp/-/merge_requests/322)
did the perf issue ever get investigated?
Yes, I’m not sure exactly how elimination of former Prop inductives (e.g. equality and acc) is treated, but there’s probably something hidden there
Like, we know the extracted term should be terminating because it is related to the original one that we know to be terminating, but all information as to why the term is indeed terminating is erased, so probably that erased term would not be typeable in Coq without some hackery
also dunno how it’s used, but that reduction relation is not Gallina’s :-(
Yes, it’s weak-call-by-value
And indeed a hard part of the proof is to relate that strategy to iterated weak-head normalization
that’s legal for Gallina, but my links suggest it does reduce through box instead of getting stuck.
Yep, I just looked at it and it’s definitely not something you could get in vanilla Gallina
I over-sold this, sorry about it :grimacing:
even getting lots of boxes would be an improvement compared to waiting minutes just to compute some Prop stuff
Probably the best you could get is something like MetaCoq erase_and_compute foo.
alternatively, not easy, but could one plug in some partiality monad to fix the termination problem?
Mmmmmmm… I’m not sure: if you really want the extracted term back to Gallina, you’d need some termination witness. But if you’re getting rid of acc, that’s exactly what you erased in the first place?
I’m not an expert on the termination fiddling, so maybe there’s a trick there
I mean, Partial Int would represent a monadic computation that gives an Int or diverges — because of infinite loops, or because some cast failed.
if you obtain it by erasing an Int expression, you have a meta-level proof that your Partial Int always succeeds, without a way to internalize it. But it might be appropriate sometimes?
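For reference, a minimal sketch of the kind of partiality monad meant here (the delay monad; all names are illustrative):

CoInductive Partial (A : Type) : Type :=
| Now   : A -> Partial A            (* finished with a value *)
| Later : Partial A -> Partial A.   (* one more step; may never finish *)

Arguments Now {A} _.
Arguments Later {A} _.

(* return of the monad: a computation that succeeds immediately *)
Definition ret {A : Type} (a : A) : Partial A := Now a.

(* the diverging computation, e.g. what an erased non-terminating
   branch could be mapped to *)
CoFixpoint diverge {A : Type} : Partial A := Later diverge.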
No, I don’t have a good use-case — and I don’t know a generic translation to this; I only have partial ideas for eq and Acc.
@Gaëtan Gilbert not AFAICT
among other signs, the tracking issue is https://gitlab.mpi-sws.org/iris/stdpp/-/issues/85 and has nothing.
There’s another issue: take a function foo (b : {b' : bool | b' = true}) : unit which uses the proof of equality for termination. Then you’d erase this to some foo' : bool -> unit, but now you can feed it false and get a diverging computation, because this is not in the fragment related to the original function
So not just casts: any erased equality would have to become a runtime test :-(
I think you’d see this the moment you try to eliminate your equality, which you could replace with a cast?
Yep
(But in the sense of a cast that can fail, and you’re getting pretty close to gradual typing)
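A hedged sketch of that runtime-test / failing-cast reading, with foo' as a hypothetical name for the erased function made total with option:

(* The erased equality on the boolean can only be recovered by actually
   testing the boolean at run time; the None branch is the failing cast,
   i.e. inputs outside the fragment related to the original foo. *)
Definition foo' (b : bool) : option unit :=
  if b then Some tt else None.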
Anyway, at best, erased_function (erase_inputs inputs) = return (erase_result (original_function inputs)) could still hold in principle, but it’d take a much more complex translation if it exists, so I’ll stop speculating. EraseAndCompute seems nicer.
Which is yet another can of worms
Wait, which can of worms? EraseAndCompute?? Oh, must be gradual typing...
Paolo Giarrusso said:
Anyway, at best, erased_function (erase_inputs inputs) = return (erase_result (original_function inputs)) could still hold in principle, but it’d take a much more complex translation if it exists, so I’ll stop speculating. EraseAndCompute seems nicer.
On (closed) terms of first-order inductive type (i.e. no proofs or types should appear in the return value), you do get a meta-proof of that, but indeed it's talking about erased terms which you can't really manipulate. I think a better interface for using MetaCoq's erasure would be to have a new conversion test (certicoq_compute) that checks that the conversion problem falls into the precondition of erasure (closed, first-order inductive value), calls erasure+evaluation to get to a constructor on both sides of the conversion problem, and checks that they are equal.