Stream: Miscellaneous

Topic: terminating type checker with untyped conversion and fuel


view this post on Zulip walker (Jan 10 2023 at 14:23):

Is anyone aware of any literature about implementing a type checking/inference algorithm for CoC (or a variant) with untyped conversion, using fuel (for termination purposes)? I am mostly interested in the proofs of soundness and completeness. I have seen an implementation in MetaCoq, but it was not verified, and I think that is really the tricky part.

view this post on Zulip Karl Palmskog (Jan 10 2023 at 14:25):

did you see https://github.com/coq-contribs/coq-in-coq and its accompanying paper?

view this post on Zulip walker (Jan 10 2023 at 14:41):

wow, Lemma type_sn : forall e t T, typ e t T -> sn T. I wonder how they did that!

view this post on Zulip walker (Jan 10 2023 at 14:49):

I skimmed the manuscript quickly, and I think their idea was to model Coq in some form of sets and use that model to show it is normalizing, and then use the normalization result to prove decidability, which doesn't sound like a nice approach, at least for my case.

view this post on Zulip walker (Jan 10 2023 at 14:51):

I just want to completely avoid dealing with that and work with partial correctness using fuel, but I couldn't find good layouts for the proofs.

view this post on Zulip Meven Lennon-Bertrand (Jan 10 2023 at 17:49):

I think you can adapt the general idea of MetaCoq to the fuel setting: in case you return "yes", you also return a derivation (that you should have built along the way), and in case you return "no", you construct a proof that your term is not typable. The only difference is that you now have a third case, where you run out of fuel. This gives you soundness of both kinds of answers.
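
A minimal sketch of the three-valued answer described here, assuming a typing judgement typ : Ctx -> Term -> Term -> Prop; all names in this sketch are hypothetical:

Inductive check_result (Gamma : Ctx) (t : Term) : Type :=
| Checked : forall T, typ Gamma t T -> check_result Gamma t        (* "yes", with a derivation *)
| Ill_typed : (forall T, ~ typ Gamma t T) -> check_result Gamma t  (* "no", with a refutation *)
| Out_of_fuel : check_result Gamma t.                               (* fuel ran out *)

A fuel-indexed checker would then return this type, e.g. infer : nat -> forall Gamma t, check_result Gamma t.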

view this post on Zulip Meven Lennon-Bertrand (Jan 10 2023 at 17:53):

However, completeness is probably not provable if you do not prove normalization. For instance, if you phrase completeness as "if Γ ⊢ t ≅ t', then there exists some n such that, given n fuel, my conversion test answers yes", this implies normalization. Indeed, you can show that Γ ⊢ t ≅ t whenever t is typable, but then completeness implies that the conversion algorithm (which normalizes the terms given as input) terminates on t in finite time, effectively showing that t is normalizing.
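
A hedged Coq rendering of that completeness statement; conv (the declarative judgement Γ ⊢ t ≅ t') and conv_check (the fuel-indexed algorithmic test) are hypothetical names:

Conjecture conv_complete :
  forall Gamma t t',
    conv Gamma t t' ->
    exists fuel, conv_check fuel Gamma t t' = true.

Instantiating t' := t for a typable t then forces conv_check, which normalizes its inputs, to terminate on t, which is exactly a normalization argument.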

view this post on Zulip walker (Jan 10 2023 at 18:39):

I see, this is trickier than I thought. I am now wondering if assuming normalization is the most sensible choice.

view this post on Zulip Paolo Giarrusso (Jan 10 2023 at 21:51):

why do you need completeness?

view this post on Zulip Paolo Giarrusso (Jan 10 2023 at 22:01):

(IMHO?) incomplete typecheckers can be fine, unless they're too limited. A proof won't tell you that your typechecker works in, say, 99.999% of cases, so it's good enough in practice.

For instance, if you misuse C++ templates or Coq typeclasses, it's easy to cause infinite loops which make things incomplete.

A C++ compiler typically has a fixed maximal fuel for templates (say, 100), then it stops.

Coq is more generous and keeps running, but at some point the user will press Ctrl-C and _then_ it stops :-).

And even for the core theory, conversion checking isn't polynomial, and this can be a problem in practice. Again, Ctrl-C helps.

In both cases, a "correct" program fails to typecheck, but even if it's "correct", you're still going to fix it — maybe after asking questions here to understand how that's possible. And this isn't something that completeness can avoid!

view this post on Zulip walker (Jan 10 2023 at 22:11):

well, I used to see things differently. I thought that the whole point of having a proof is to make sure that all the bases are covered, and at least when I started, soundness and completeness felt like the two most basic properties every system must have

view this post on Zulip walker (Jan 10 2023 at 22:12):

but I now see your point and am starting to agree with it

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 03:52):

I should point out: I suspect the _consensus_ agrees with you, and the position above is a less standard one

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 03:53):

but you're not proving normalization, so you're not doing the standard thing anyway :wink: .

view this post on Zulip walker (Jan 11 2023 at 09:31):

I am slightly confused. I thought that formal implementations never prove normalization, due to Gödel's incompleteness theorem. Is that wrong?

view this post on Zulip Meven Lennon-Bertrand (Jan 11 2023 at 10:11):

walker said:

I am slightly confused. I thought that formal implementations never prove normalization, due to Gödel's incompleteness theorem. Is that wrong?

Formalizing normalization proofs, especially for dependently typed languages, is still a daunting task, so there are not many examples of it. But as long as your meta-theory (that of the proof assistant you use) is logically stronger than the target theory (the one whose normalization you are proving), you can in principle formalize a normalization proof. See for instance this formalization, where they use induction-recursion in the meta-theory to formalize normalization of a theory that does not have it (although they do not implement a type-checker, only a conversion-checker), @Loïc P's refinement, which uses a meta-theory with more universe levels than the target language, or Coq in Coq, which formalizes normalization for CoC in full-blown CIC (more universes, and inductive types).

view this post on Zulip Meven Lennon-Bertrand (Jan 11 2023 at 10:15):

Paolo Giarrusso said:

(IMHO?) incomplete typecheckers can be fine, unless they're too limited. A proof won't tell you that your typechecker works in, say, 99.999% of cases, so it's good enough in practice.

I think a good middle ground, which is more refined than simply letting go of completeness, is to show that normalization is the only issue preventing the system from being complete. I don't think this has been explored a lot, but for instance there is A Partial Type Checking Algorithm for Type : Type, which shows that the type-checking algorithm is complete in a coinductive sense, which more or less corresponds to this idea.

view this post on Zulip walker (Jan 11 2023 at 14:27):

I am even more confused here. I am not sure how a reduction algorithm can be complete with respect to small-step reduction.

In single step reduction, we have the following:

Inductive Reduction: Term -> Term -> Prop :=
...
| Reduction_beta: forall A t n, (appl (lam A t) n) ⟶ t.[n/]
| Reduction_appll: forall m m' n, m ⟶ m' -> (appl m n) ⟶ (appl m' n)
| Reduction_applr: forall m n n', n ⟶ n' -> (appl m n) ⟶ (appl m n')
...
where "t1 '⟶' t2" := (Reduction t1 t2).

That means I can construct a term that can be reduced on all three fronts; the reduction predicate does not enforce one choice or the other, but the algorithm must. So the single-step reduction algorithm may be sound, but I cannot prove it is complete at all, because it must pick an order among those three possible steps.

When I say sound and complete this is what I have in mind:

Lemma reduction_sound t1 t2: reduction t1 = Some t2 -> t1 ⟶ t2.
Admitted. (*proved outside*)
Lemma reduction_complete t1 t2: t1 ⟶ t2 -> reduction t1 = Some t2.
Admitted. (*apparently does not hold because the reduction must pick an order *)

I thought that this means things would get better for multi-step reduction, but they do not, because the multi-step reduction relation does not require any progress to be made at all:

Inductive Multi (R: Term -> Term -> Prop): Term -> Term -> Prop :=
| Multi_refl: forall t, Multi R t t
| Multi_single: forall t1 t2 t3,
    R t1 t2 ->
    Multi R t2  t3 ->
    Multi R t1  t3.

On the other hand, the algorithmic equivalent will try to get the best possible reduct. I am worried that these problems will keep propagating, and that I will realize very late that I cannot prove completeness (or partial completeness) because I missed something important.

So how can completeness lemmas be formalized for those two cases?

view this post on Zulip Meven Lennon-Bertrand (Jan 11 2023 at 16:24):

It depends on what exactly your reduction algorithm implements, but the simplest form of completeness is something like

Lemma reduction_complete t1 t2 : Multi Reduction t1 t2 -> normal_form t2 -> reduction t1 = t2

ie whenever t reduces to a normal form, the reduction algorithm finds that normal form.

view this post on Zulip walker (Jan 11 2023 at 16:33):

Right, that makes sense! I think this is provable (taking into account fuel, partial results, etc.),
but my multi_reduce function is very simple:

Equations multi_reduce (fuel: nat) (t: Term) : Result Term :=
multi_reduce 0 _ := Err OutOfFuel;
multi_reduce (S fuel) t := match reduction t with
                        | None => Ok t
                        | Some t' => multi_reduce fuel t'
                        end.

it should just go to the normal form; the reduce function mirrors the Reduction inductive type, just fixing some arbitrary order among the possible steps.

view this post on Zulip walker (Jan 11 2023 at 16:34):

I think for my case, the lemma would be something like:

Lemma reduction_complete fuel t1 t2 : Multi Reduction t1 t2 -> normal_form t2 -> multi_reduce fuel t1 = Ok t2 \/ multi_reduce fuel t1 = Err OutOfFuel.

If there is something I should also take into account feel free to let me know!

view this post on Zulip Meven Lennon-Bertrand (Jan 11 2023 at 16:39):

This looks good to me :)

view this post on Zulip Guillaume Melquiond (Jan 11 2023 at 16:40):

I don't think this is an interesting result, because the function that always returns OutOfFuel satisfies it. You should instead prove that there exists enough fuel to actually end up with t2.
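
In Coq terms, the strengthening suggested here would look roughly like the following (hedged restatement, using the definitions from above):

Lemma multi_reduce_complete t1 t2 :
  Multi Reduction t1 t2 -> normal_form t2 ->
  exists fuel, multi_reduce fuel t1 = Ok t2.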

view this post on Zulip Guillaume Melquiond (Jan 11 2023 at 16:41):

(Possibly, you might need the excluded middle, or prove the converse.)

view this post on Zulip walker (Jan 11 2023 at 16:42):

Hmm, I see your point, but that is strong normalization, which I wanted to avoid proving?

view this post on Zulip walker (Jan 11 2023 at 16:43):

I am intending to add features to my experimental toy language that might be stronger than what Coq can offer (HITs), so I want to avoid worrying about normalization.

view this post on Zulip Guillaume Melquiond (Jan 11 2023 at 16:44):

No, you are supposing normal_form t2, so this is much weaker than strong normalization. It is just plain confluence toward the normal form.

view this post on Zulip walker (Jan 11 2023 at 16:45):

isn't strong normalization saying (at least informally) that every well-typed expression will eventually be reduced to a normal form?

view this post on Zulip Guillaume Melquiond (Jan 11 2023 at 16:46):

Yes.

view this post on Zulip walker (Jan 11 2023 at 16:48):

isn't that the same as saying that there exists enough fuel to get there ? (which you just described)

view this post on Zulip Guillaume Melquiond (Jan 11 2023 at 16:53):

No, what I am describing is that, if a term can be reduced to a normal form, your algorithm will eventually also reach that normal form. And this is just a corollary (by the definition of a normal form) of the fact that, if a term can be reduced to a normal form, your algorithm cannot exhibit an infinite sequence of reductions.
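
A hedged Coq rendering of that key fact, with hypothetical names: accessibility of t1 under "is one reduce step away" says exactly that iterating reduce from t1 cannot produce an infinite sequence of reductions.

Conjecture reduce_terminates_on_normalizing :
  forall t1 t2,
    Multi Reduction t1 t2 -> normal_form t2 ->
    Acc (fun u t => reduce t = Some u) t1.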

view this post on Zulip walker (Jan 11 2023 at 16:54):

I see. I wonder how hard it will be to prove that; I will actually try it and see where I get stuck!

view this post on Zulip walker (Jan 11 2023 at 16:54):

thanks a lot for the idea!

view this post on Zulip Meven Lennon-Bertrand (Jan 11 2023 at 17:39):

Ah, indeed, you're right, thanks for the catch!

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 18:29):

If reduction were deterministic, the needed fuel would just be the length of the multi reduction sequence. OTOH, with non-determinism and a system that is WN but not SN (say, naive explicit substitutions + beta) I'm not sure how Guillaume's proof works because you can exhibit an infinite reduction sequence?
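
A hedged sketch of the deterministic case mentioned here; nsteps is a hypothetical length-indexed variant of Multi, and the statement assumes reduce implements exactly the (deterministic) step relation and returns None precisely on normal forms:

Inductive nsteps (R : Term -> Term -> Prop) : nat -> Term -> Term -> Prop :=
| nsteps_refl : forall t, nsteps R 0 t t
| nsteps_step : forall n t1 t2 t3,
    R t1 t2 -> nsteps R n t2 t3 -> nsteps R (S n) t1 t3.

Conjecture multi_reduce_fuel_bound :
  forall n t1 t2,
    nsteps Reduction n t1 t2 -> normal_form t2 ->
    multi_reduce (S n) t1 = Ok t2.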

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 18:30):

You need instead to prove that your algorithm avoids that infinite sequence.

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 18:33):

I'm not sure you can, and I don't know myself for explicit substitutions + beta (not an expert). OTOH, untyped lambda calculus doesn't have normalization, but this theorem is false for call-by-value evaluation and true for lazy evaluation (when not reducing under lambdas)

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 18:35):

and when reducing under lambdas, IIRC/IIUC the result can be proven via the standardization theorem for reduction in normal order

view this post on Zulip Guillaume Melquiond (Jan 11 2023 at 18:50):

Paolo Giarrusso said:

You need instead to prove that your algorithm avoids that infinite sequence.

Note that this is exactly what I said: "this is just a corollary [...] of the fact that [...] your algorithm cannot exhibit an infinite sequence of reductions."

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 18:59):

I'm not sure I follow because in these cases the infinite sequence exists:

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 19:00):

you can have t1 ->* t2, t2 normal, yet there exists an infinite reduction sequence starting from t1

view this post on Zulip Guillaume Melquiond (Jan 11 2023 at 19:01):

Sure, but can the algorithm follow it? You have to prove that it cannot. That is my point.

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 19:02):

okay, we're in agreement

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 19:02):

I was thrown off by "confluence toward the normal form"

view this post on Zulip Guillaume Melquiond (Jan 11 2023 at 19:08):

What I mean by that is that you have to prove that, if two sequences of reductions lead to irreducible terms, then these terms are actually equal. (In other words, this is the uniqueness of the normal form.) For example, in an untyped lambda-calculus with explicit substitutions, this is not always true. You can only prove that the two irreducible terms are equivalent modulo erasure of the explicit substitutions (or you need to add some extra reduction rules to your calculus).
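
A hedged statement of that property, with hypothetical names (and which, as noted, may fail for some calculi such as naive explicit substitutions):

Conjecture normal_forms_unique :
  forall t u v,
    Multi Reduction t u -> normal_form u ->
    Multi Reduction t v -> normal_form v ->
    u = v.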

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 19:23):

okay, that sounds like confluence. I thought "normal" and "irreducible" coincided...

view this post on Zulip Paolo Giarrusso (Jan 11 2023 at 19:23):

it all checks out

view this post on Zulip walker (Jan 15 2023 at 12:49):

So here is my progress regarding proving

Lemma multi_reduce_complete t1 t2: t1 ⟶* t2 -> NormalForm t2 -> exists fuel, multi_reduce fuel t1 = Ok t2.

This property was a direct consequence of another lemma:

Lemma multi_reduce_preservation fuel t1 t2 t3:
    multi_reduce fuel t2 = Ok t3 -> t1 ⟶ t2 -> exists fuel', multi_reduce fuel' t1 = Ok t3.

Here I proceeded by induction over the structure of the single-stepping judgement, and I was stuck at one case regarding beta reduction:

In the judgement, the beta reduction rule is given as follows:

Inductive Reduction: Term -> Term -> Prop :=
...
| Reduction_beta: forall A t n, (appl (lam A t) n) ⟶ t.[n/]
...

The problem is that the induction hypothesis is not strong enough, and in multi_reduce_preservation I get a case that is:

t1, t2, n, t3: Term
fuel: nat
Hmred: multi_reduce fuel t2.[n/] = Ok t3
======================================  Goal
exists fuel' : nat, multi_reduce fuel' (appl (lam t1 t2) n) = Ok t3

The problem here is that, unlike in the induction cases for pi, lam and other constructs, we cannot establish any fact about t1, t2 or t3 on its own, and the substitution represented by t2.[n/] is not a primitive construct but an operation.

The closest thing I was able to do was to prove that:

Lemma multi_reduce_trans_appl_beta A f m n n' t fuelm fueln fuelapp:
multi_reduce fuelm m = Ok (lam A f) ->
multi_reduce fueln n = Ok n' ->
multi_reduce fuelapp (f.[n'/]) = Ok t ->
exists fuel, multi_reduce fuel (appl m n) = Ok t.

But to use that, I need to know what each term reduces to on its own, and that is not available in the case of beta reduction. It would be cool if anyone has ideas for possible paths on how to proceed.

view this post on Zulip Gaëtan Gilbert (Jan 15 2023 at 12:57):

this depends on the definition of multi_reduce

view this post on Zulip Gaëtan Gilbert (Jan 15 2023 at 12:57):

we cannot establish any fact about t1, t2 or t3 on its own

but is it necessary to have such facts?

view this post on Zulip walker (Jan 15 2023 at 13:01):

Yeah, I tried proving them from multi_reduce fuelapp (f.[n'/]) = Ok t by induction over fuel, but it never worked.

view this post on Zulip walker (Jan 15 2023 at 13:01):

let me retry quickly and reproduce the problem.

view this post on Zulip walker (Jan 15 2023 at 13:12):

yup, here we go. I will also have to share how multi_reduce works:

Equations multi_reduce (fuel: nat) (t: Term) : Result Term :=
multi_reduce 0 _ := Err OutOfFuel;
multi_reduce (S fuel) t := match reduce t with
                            | None => Ok t
                            | Some t' => multi_reduce fuel t'
                            end.

So if I try to deduce one of the two facts from the substitution:

forall f n t fuel,
multi_reduce fuel (f.[n/]) = Ok t ->
exists fueln n', multi_reduce fueln n = Ok (n').

I proceeded by induction over fuel (a nat); the first case is trivial because the assumption becomes: Err OutOfFuel = Ok t

The second case is:

multi_reduce fuel.+1 f.[n/] = Ok t ->
exists (fueln : nat) (n' : Term), multi_reduce fueln n = Ok n'

While the induction hypothesis is:

IHfuel: forall f n t : Term, multi_reduce fuel f.[n/] = Ok t -> exists (fueln : nat) (n' : Term), multi_reduce fueln n = Ok n'

Which is a bad sign, because the second argument of multi_reduce must always be f.[n/].

But I proceed anyway by simplification and I get:

match reduce f.[n/] with
| Some t' => multi_reduce fuel t'
| None => Ok f.[n/]
end = Ok t ->
exists (fueln : nat) (n' : Term), multi_reduce fueln n = Ok n'

I case-analyse reduce f.[n/] and one of the cases looks like (I didn't bother checking the other case):

multi_reduce fuel red' = Ok t ->
exists (fueln : nat) (n' : Term), multi_reduce fueln n = Ok n'

This does not match the induction hypothesis, and there is no way to make it look like the x.[y/] pattern.

view this post on Zulip walker (Jan 15 2023 at 13:13):

Of course, the obvious solution would be to state the lemma with one layer of indirection:

forall f n t fuel X,
multi_reduce fuel X = Ok t ->
X = (f.[n/]) ->
exists fueln n', multi_reduce fueln n = Ok (n').

but you get the same problem in a slightly different form.

view this post on Zulip walker (Jan 15 2023 at 13:15):

This is unfortunately one of those cases where I just run out of ideas to try, and I wonder if there is a new trick I can add to my toolbox for this particular case.

view this post on Zulip walker (Jan 15 2023 at 13:17):

I can share the whole code for anyone who would be interested.

view this post on Zulip Gaëtan Gilbert (Jan 15 2023 at 13:19):

but in the original goal

t1, t2, n, t3: Term
fuel: nat
Hmred: multi_reduce fuel t2.[n/] = Ok t3
======================================  Goal
exists fuel' : nat, multi_reduce fuel' (appl (lam t1 t2) n) = Ok t3

shouldn't reduce (appl (lam t1 t2) n) = Some t2.[n/]? Then you can just do exists (S fuel); exact Hmred

view this post on Zulip walker (Jan 15 2023 at 13:20):

Not really. Let me show you the definition of reduce:

view this post on Zulip Gaëtan Gilbert (Jan 15 2023 at 13:21):

and you never care about multi_reduce _ n

view this post on Zulip walker (Jan 15 2023 at 13:21):

| appl m n => match reduce m with
            | Some m' => Some (appl m' n)
            | None => match reduce n with
                      | Some n' => Some (appl m n')
                      | None => match m with
                                | lam A e => Some (e.[n/])
                                | _ => None
                                end
                      end
            end

view this post on Zulip Gaëtan Gilbert (Jan 15 2023 at 13:22):

change it to do the match m first, problem solved?

view this post on Zulip walker (Jan 15 2023 at 13:22):

I first reduce m, then reduce n, then perform the substitution if the reduct of m was lam _ _.

view this post on Zulip walker (Jan 15 2023 at 13:23):

I thought about that, but my intuition tells me that this problem will appear again when I need to do some operations on m and n, because now I will have to get through the reduction first.

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:24):

Isn't the point of multi_reduce that reduce does a single step?

view this post on Zulip walker (Jan 15 2023 at 13:25):

yes it does ?

view this post on Zulip walker (Jan 15 2023 at 13:25):

does it not ?

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:25):

I mean, reduce shouldn't be recursive, it should do a single small step

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:26):

Not saying you won't get other questions, but if the new setup makes more sense they'll be better questions

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:26):

I say if because it's up to you to choose the general setup

view this post on Zulip walker (Jan 15 2023 at 13:26):

I am not sure how it will not be recursive.

view this post on Zulip walker (Jan 15 2023 at 13:27):

let me show you the judgement part and the algorithm

view this post on Zulip walker (Jan 15 2023 at 13:27):

Reserved Notation "x '⟶' y" (at level 20, y at next level).
Inductive Reduction: Term -> Term -> Prop :=
| Reduction_beta: forall A t n, (appl (lam A t) n) ⟶ t.[n/]
| Reduction_pil: forall A A' B, A ⟶ A' -> (pi A B) ⟶ (pi A' B)
| Reduction_pir: forall A B B', B ⟶ B' -> (pi A B) ⟶ (pi A B')
| Reduction_laml: forall A A' t, A ⟶ A' -> (lam A t) ⟶ (lam A' t)
| Reduction_lamr: forall A t t', t ⟶ t' -> (lam A t) ⟶ (lam A t')
| Reduction_appll: forall m m' n, m ⟶ m' -> (appl m n) ⟶ (appl m' n)
| Reduction_applr: forall m n n', n ⟶ n' -> (appl m n) ⟶ (appl m n')
where "t1 '⟶' t2" := (Reduction t1 t2).
Notation "x '⟶*' y" :=
    (Multi Reduction x y) (at level 20, y at next level).


Fixpoint reduce (t: Term) : option Term :=
match t with
| var idx => None
| univ l => None
| pi A B => match reduce A with
            | Some A' => Some (pi A' B)
            | None => match reduce B with
                      | Some B' => Some (pi A B')
                      | None => None
                      end
            end
| lam A t => match reduce A with
            | Some A' => Some (lam A' t)
            | None => match reduce t with
                      | Some t' => Some (lam A t')
                      | None => None
                      end
            end
| appl m n => match reduce m with
            | Some m' => Some (appl m' n)
            | None => match reduce n with
                      | Some n' => Some (appl m n')
                      | None => match m with
                                | lam A e => Some (e.[n/])
                                | _ => None
                                end
                      end
            end
end.

view this post on Zulip Gaëtan Gilbert (Jan 15 2023 at 13:27):

IIRC you're in a possibly non-terminating system.
I think that means you have to do the match m first.
Consider loop, a term with no normal form, and n, some closed normal form:
by beta, appl (lam n n) loop reduces to n, which is normal,
but with your algorithm multi_reduce will keep working on loop and never finish.

view this post on Zulip Gaëtan Gilbert (Jan 15 2023 at 13:28):

Paolo Giarrusso said:

I mean, reduce shouldn't be recursive, it should do a single small step

it's recursive because it does subterms

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:30):

okay, I agree with Gaëtan

view this post on Zulip Gaëtan Gilbert (Jan 15 2023 at 13:31):

basically you implemented some kind of "call by value with eager reduction under binders" but you need call by name
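
A hedged sketch of what that reordering could look like for the appl case of reduce (illustrative only, not walker's final code): fire beta as soon as the head is a lambda, and only otherwise keep reducing the function and then the argument, so a beta step can discard a diverging argument before the algorithm commits to reducing it.

| appl m n => match m with
              | lam A e => Some (e.[n/])
              | _ => match reduce m with
                     | Some m' => Some (appl m' n)
                     | None => match reduce n with
                               | Some n' => Some (appl m n')
                               | None => None
                               end
                     end
              end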

view this post on Zulip walker (Jan 15 2023 at 13:32):

Gaëtan Gilbert said:

I think that means you have to do the match m first.
Consider loop, a term with no normal form, and n, some closed normal form:
by beta, appl (lam n n) loop reduces to n, which is normal,
but with your algorithm multi_reduce will keep working on loop and never finish.

I will try it, since this is the only option left, but I feel it will not work when doing the general case of appl m n. My intuition is based on the fact that this reduction system is confluent, so the order of operations should not matter.

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:33):

no, confluence doesn't help here

view this post on Zulip walker (Jan 15 2023 at 13:33):

Gaëtan Gilbert said:

basically you implemented some kind of "call by value with eager reduction under binders" but you need call by name

This is one of the things I never understood. It is on my to-learn-about list; if there is a good go-to survey, I would be happy if you let me know about it.

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:34):

untyped lambda calculus is confluent, yet we know call-by-name finds strictly more normal forms

view this post on Zulip walker (Jan 15 2023 at 13:34):

as for the order of operations, I will be super happy to learn that my intuition was wrong. I will try it and let you know the end result.

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:35):

you should work through Gaëtan's example before doing that

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:36):

the proof might still run into technical challenges, but I think he simply gave a counterexample

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:37):

you can also tell from the statement of confluence: it doesn't guarantee that every order will find the normal form, only that whatever sequences of steps you take, there exists another sequence that will get to a common reduct

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:39):

and that's why strong normalization and weak normalization are different properties, and WN does not imply SN

view this post on Zulip walker (Jan 15 2023 at 13:39):

I have one last question: what if what I want is call by value?

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:40):

then it's false that your algorithm will always find a normal form that exists

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:41):

you could make that true by weakening the overall reduction, but it'd be very strange in a dependently-typed language?

view this post on Zulip walker (Jan 15 2023 at 13:44):

I am worried that call by name might not be what I really want, because my goal at some point was to add some sort of side effects and be able to generate assembly instructions, and what worries me is that every language with the same goal (but not necessarily dependent types) is call by value.

view this post on Zulip walker (Jan 15 2023 at 13:45):

anyway, I will try to get it to work in call by name; dealing with side effects/assembly will be a problem for future me.

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:47):

as a side note — this helper lemma might have a common problem that you might want to learn about:

forall f n t fuel,
multi_reduce fuel (f.[n/]) = Ok t ->
exists fueln n', multi_reduce fueln n = Ok (n').

if f does not mention var 0, then f.[n/] does not contain n, so you can't deduce much about n?

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:51):

more generally, there exists research on dependent types + effects that you might need to look into

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:52):

I've found https://ieeexplore.ieee.org/abstract/document/8005113 https://dl.acm.org/doi/abs/10.1145/3341712 https://arxiv.org/abs/1512.08009, but I'm not a specialist so I'm not sure what's the best reference.

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:54):

But one point I can recognize: two of those papers mention call-by-push-value, which is a technique to get both full reduction and sequencing of effects as in call-by-value.

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 13:58):

ah, https://www.reddit.com/r/dependent_types/comments/dq1zlt/side_effects_and_dependent_types/ recommends instead https://www.xn--pdrot-bsa.fr/articles/dcbpv.pdf — I can't find later papers by @Pierre-Marie Pédrot that seem relevant

view this post on Zulip walker (Jan 15 2023 at 14:02):

thanks! I will read through them

view this post on Zulip walker (Jan 15 2023 at 14:15):

To the best of my knowledge, the algebraic-effects approach is not the best when it comes to proving things about effects (which I think is the whole point of dependent types), so when I was saying side effects, I had monads in mind. But I will still read the papers; it is a good idea to familiarize myself with call-by-push-value.

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 14:51):

I think CBPV predates algebraic effects, but even monads are fine with call-by-name

view this post on Zulip Paolo Giarrusso (Jan 15 2023 at 14:55):

(but I'm not implying monads would mix well with dependent types, so it still seems like good reading)

view this post on Zulip walker (Jan 15 2023 at 17:21):

I am reading, and I see a "stack"; by any chance, is that the stack used in MetaCoq's type checker?
I know that MetaCoq's type checker uses a stack for something but never knew why!

view this post on Zulip Pierre-Marie Pédrot (Jan 15 2023 at 21:32):

FWIW, dependent types are heavily biased towards call-by-name.

view this post on Zulip Pierre-Marie Pédrot (Jan 15 2023 at 21:33):

If you go through the cbv-into-cbpv encoding with dependent CBPV, you'll get weird stuff that probably nobody wants to consider.

view this post on Zulip Pierre-Marie Pédrot (Jan 15 2023 at 21:34):

If your plan is to get a programming language, you'd probably be better off following the PML path.

view this post on Zulip walker (Jan 15 2023 at 21:42):

I literally read the first line, and it says "The language is Curry-style and call-by-value." Now I am confused as to which one is the go-to for dependent types.

view this post on Zulip Pierre-Marie Pédrot (Jan 15 2023 at 21:55):

it's technically not based on dependent types, although the surface language looks a lot like it

view this post on Zulip Pierre-Marie Pédrot (Jan 15 2023 at 21:56):

it's closer to systems from the PRL family

view this post on Zulip Pierre-Marie Pédrot (Jan 15 2023 at 21:57):

(you can argue that PRL is a model of dependent types, but they're not, strictly speaking, your off-the-shelf implementation of MLTT)

