I am trying to wrap my head around the implementation of (co)fixpoints in the VM and native, because the machines do not compute the same normal forms.
(Background: trying to share the reification of both machines)
It turns out that the VM has a dedicated node Vfix for non-neutral fixpoints, while the native one does not.
As a result, native compilation always eta-expands a fixpoint, but the VM doesn't.
Fixpoint dummy (n : nat) := match n with 0 => tt | S n => dummy n end.
Eval native_compute in dummy.
(*
= fun n : nat =>
(fix Ffix (n0 : nat) : unit := match n0 with
| 0 => tt
| S x => Ffix x
end) n
: nat -> unit
*)
Eval vm_compute in dummy.
(*
= fix Ffix (x : nat) : unit := match x with
| 0 => tt
| S x0 => Ffix x0
end
: nat -> unit
*)
I understand that the VM was implemented at a time when eta-expansion was not allowed by the kernel.
Since we are past this point, shouldn't we make the VM and native behave the same and always eta-expand?
The way we handle Vfix in the VM is not pretty, and I wouldn't be surprised if we could exploit it with weird fixpoints.
Shouldn't the fix be to always eta-reduce instead? Currently, vm_compute and compute agree. It would be better for native_compute to agree with compute too, rather than the opposite.
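For the record, compute gives the same un-expanded answer as the VM on the example above (output from memory; the printed fix name may differ, but the shape is the point):
Eval compute in dummy.
(*
= fix dummy (n : nat) : unit := match n with
                                | 0 => tt
                                | S n0 => dummy n0
                                end
: nat -> unit
*)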
Eta-reduction is not always valid, though.
(mumble mumble cumulativity: domains of arrows are invariant)
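For the record, a minimal sketch of what I mean, with made-up names (K' and universes u < v):
Universes u v.
Constraint u < v.
Axiom K' : Type@{v} -> unit.
(* fun A : Type@{u} => K' A is fine at type Type@{u} -> unit, but its
   eta-reduct K' only lives at Type@{v} -> unit, which is *not* a
   subtype of Type@{u} -> unit since arrow domains are invariant;
   so eta-reducing this function changes typability *)
Check (fun A : Type@{u} => K' A).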
Your off-the-shelf NbE always eta-expands functions, BTW.
Sure. But, at the very least, we should not do spurious eta-expansions.
How do you know that without retyping?
But for fixpoints, you should not have to do any retyping. If you are strongly reducing a non-reducing fixpoint (i.e., just substituting the up-variables), then you know that the first n eta-expansions (where n is the position of the main argument) are spurious.
I'm pretty sure this is not true, let me find a counter example.
(I just discovered that Coq has the brilliant idea to eta-expand in pretyping when it detects a cumulativity issue. This is troubling to say the least.)
Yeah I remember discovering this too.
It took me at least one minute to find out why and refrain from looking for an inconsistency
I found a fake proof of false like this once.
so, the example breaking @Guillaume Melquiond's claim:
Universes i j.
Constraint i < j.
Axiom K : Type@{j} -> unit.
Fixpoint F (n : nat) (A : Type@{j}) := match n with
| 0 => K A
| S n => F n A
end.
Check (F : nat -> Type@{i} -> unit). (* Actually gets eta-expanded *)
Check (F : nat -> Type@{j} -> unit).
Check (fun (n : nat) (A : Type@{i}) => F n A).
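If I remember the output correctly, the first Check answers with the expansion rather than with F itself, something like:
Check (F : nat -> Type@{i} -> unit).
(*
fun (n : nat) (A : Type@{i}) => F n A
: nat -> Type@{i} -> unit
*)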
just a variant of the usual expansion trick
In the end, it was not a paradox; the definition just wasn't what I had expected once it had gone through pretyping.
Pollack inconsistency at its finest
so anyways: assuming we merge Vfix into Vfun, now we get an expanded fixpoint value in the VM
we cannot just decide to eta-reduce it, because that can make the term ill-typed
so at the very least we need a conversion check against the argument types of the fixpoint (e.g., Type@{i} against Type@{j} for F above)
this looks expensive to me
so don't merge vfix
(Relatedly, now that I have some eyes, anybody looking at https://github.com/coq/coq/pull/16651 ? That's a soundness issue)
@Gaëtan Gilbert this is a hack to support a broken eta, and it makes native different from VM
I think that what NbE taught me is that functions are functions, and that's it.
We shouldn't have a different class of values for some function types
also the code makes some assumptions about the shape of closures
/* Classify a VM closure by inspecting its first bytecode instructions. */
value coq_kind_of_closure(value v) {
  opcode_t * c;
  int is_app = 0;
  c = Code_val(v);
  /* starts with GRAB: a plain function */
  if (Is_instruction(c, GRAB)) return Val_int(0);
  /* a RESTART prefix means the closure is partially applied */
  if (Is_instruction(c, RESTART)) {is_app = 1; c++;}
  /* GRABREC: a fixpoint (2 if partially applied, 1 otherwise) */
  if (Is_instruction(c, GRABREC)) return Val_int(1+is_app);
  /* MAKEACCU: an accumulator */
  if (Is_instruction(c, MAKEACCU)) return Val_int(3);
  /* anything else is classified as a plain function */
  return Val_int(0);
}
can't we break that with, say, primitive functions?
I must be missing something, because I do not understand why it breaks my claim. My claim is that the machines do not have to eta-expand F to normalize it. The fact that the pretyper eta-expanded it first does not seem relevant.
Pierre-Marie Pédrot said:
We shouldn't have a different class of values for some function types
We certainly should, because the reduction rule for fixpoints is completely different from the one for functions. In particular, no substitution happens until some very specific constraints have been fulfilled.
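Concretely, beta fires on any argument, whereas a fix only unfolds once its recursive argument starts with a constructor; a quick sketch with the dummy from above (section name made up):
Section Blocked.
Variable n : nat.
Eval cbv in ((fun m : nat => m) n). (* beta fires: = n *)
Eval cbv in (dummy n).
(* blocked on the neutral n, stays as something like
   = (fix dummy (n0 : nat) : unit := ...) n *)
End Blocked.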
Fair enough, but then (barring potential soundness issues) we shouldn't have native behave differently
so change native
I am still unconvinced by @Guillaume Melquiond's argument. Special-casing fixpoints is evil, and we already had issues with parametricity because of that.
Pierre-Marie Pédrot said:
can't we break that with, say, primitive functions?
I don't think so. It has the unfortunate effect that primitive functions will be naturally eta-expanded, but that is about it.
Okay, I now understand better why you complain. The fixpoint has already been eta-expanded before being sent to the OCaml compiler the first time. So, when we recover it after NbE (and before strong normalization), we have no way to know whether this eta-expansion was spuriously introduced by the native compiler or whether it was already there in the first place for typing reasons.
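With the running examples, both situations reify to the same shape, something like:
Eval native_compute in dummy.
(* fun n => (fix ...) n : expansion introduced by the compiler *)
Eval native_compute in (F : nat -> Type@{i} -> unit).
(* fun n A => (fix ...) n A : expansion mandated by typing;
   the reifier sees the same kind of value in both cases *)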