The MetaCoq website says
Note that because of Gödel’s incompleteness theorem, there is no hope to prove completely the correctness of the specification of Coq inside Coq (in particular strong normalisation or canonicity), but it is possible to prove the correctness of the implementation assuming the correctness of the specification, thus moving from a trusted code base (TCB) to a trusted theory base (TTB) paradigm
I think it's actually possible to do much, much, much better than this.
As I understand it, the current typechecker is written in Gallina, relies on a proof of strong normalisation, and there's a similar proof, also relying on strong normalisation, that there are no proofs of False in the empty context without axioms. (Is there also a proof that strong normalisation implies no proof of False even if you assume funext and/or proof irrelevance, which I think some bits of this proof might?)
I think it should be possible to write a Gallina program that, given a program-to-be-typechecked `p` in the environment `Σ` (or perhaps just given a description of the global universes and the constraints between them), produces a new well-formed global environment `Σ'` and a term `q`, together with a proof that strong normalisation in `Σ'` implies `Σ' ;;; [] |- q : <% NormalisationIn Σ %>`. Moreover, the form of `Σ'` should be particularly simple: it should have a fixed set of inductives and constants, and a set of global universes and constraints which depends on the universes and constraints in `Σ`.
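Roughly, the statement I have in mind would look like this (everything here is a hypothetical sketch; none of these names exist in MetaCoq as written):

```coq
(* Hypothetical sketch: abstract stand-ins for MetaCoq's actual types. *)
Parameter env term : Type.
Parameter typing : env -> term -> term -> Prop. (* Σ ;;; [] |- t : T *)
Parameter SN : env -> Prop.                     (* strong normalisation in Σ *)
Parameter quote_SN : env -> term.               (* <% NormalisationIn Σ %> *)

(* Given Σ and a program p, craft Σ' and q such that SN of Σ'
   yields a proof term q of (the quotation of) SN of Σ. *)
Conjecture craft_SN_proof :
  forall (Σ : env) (p : term),
    { Σ' : env & { q : term & SN Σ' -> typing Σ' q (quote_SN Σ) } }.
```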
This would be a formalization of the folklore(?) that you can prove SN of a theory with n universes in a theory with n+1 universes.
Jason Gross said:
This would be a formalization of the folklore(?) that you can prove SN of a theory with n universes in a theory with n+1 universes
This fact was proven in MetaCoq/TemplateCoq by Simon Boulier I think. But isn't it much much weaker than what we do?
Where's the proof? (cc @Simon Boulier )
It's much much weaker than what's done in MetaCoq currently, but I think it covers a bit of what MetaCoq doesn't currently do neatly: MetaCoq assumes SN of the theory, proves the theory correct relative to the SN assumption, and proves the implementation correct relative to the theory. Showing that a particular restricted subset of Coq with n+1 universes is enough to prove SN of all of Coq with n universes seems like it completes the picture.
Like, we could currently (almost) write a TemplateMonad program that emits an axiom-free proof of well-typedness, just by trying the typechecker with more and more fuel until it succeeds.
But this seems a bit non-constructive, it's hard to tell how much fuel you'll need, and the alternative of axiomatizing SN moves you to a trusted theory base, where you still have to assume an axiom
I'm suggesting that it should be possible to craft an axiom-free proof of SN specific to the program being typechecked.
Furthermore this crafting can be static, so there's no question of how long it'll take or how complicated it'll be.
And finally, we can statically prove that this crafting itself will never fail (i.e., will always result in a well-typed proof of SN), assuming we're willing to assume SN of a stronger theory than the one we're trying to craft a proof for
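The fuel-search idea above could be sketched like this (`fueled_check` is an invented stand-in for a typechecker that succeeds only when given enough fuel; the point is exactly that the number of attempts needed is hard to predict):

```coq
(* Sketch only: fueled_check is hypothetical, not an existing MetaCoq function. *)
Parameter term : Type.
Parameter fueled_check : nat -> term -> option term. (* fuel -> term -> proof *)

(* Double the fuel on each failure, up to a bounded number of attempts.
   Picking the bound in advance is the non-constructive part. *)
Fixpoint try_fuels (attempts fuel : nat) (t : term) : option term :=
  match attempts with
  | O => None
  | S a =>
      match fueled_check fuel t with
      | Some proof => Some proof
      | None => try_fuels a (2 * fuel) t
      end
  end.
```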
Does this seem right? Reasonable? Useful? Interesting?
Interesting yes, but I don't think this fact is folklore, and I'm unsure it's true
@Loïc P can probably say something here
(in particular, it doesn't seem clear to me that +1 suffices)
(Having +n for any fixed n, or even any easily computable n, would also be fine.)
That would be more folklore - but in the end I don't think we know how to set up such a proof in the presence of Prop currently
Also @Pierre-Marie Pédrot
"Uniform Logical Relations" by Edwin Westbrook introduces uniform logical relations as a way to prove SN of all of CIC, and it looks like it describes how to meld Prop with a hierarchy of Type
And a quick glance suggests it might be possible to make the proof modular over the handling of sorts?
The only things I know about these proofs are what Loïc, PMP and @nicolas tabareau have told me over coffee :)
(And we can use universe polymorphism to assume a sort that is bigger than any of the sorts we're interested in, without compromising our ability to apply the proof to itself.)
Edwin's proofs seem to be using ZFC, so I'm unsure how well they transport
I think they use ZFC to get access to recursion strong enough to handle inductive types + a way of getting a cumulative hierarchy of universes?
But why ZFC and not ZF? Or CZF?
/ IZF
The new POPL paper by Nicolas and Loïc is for sure a good reference (it's on the Gallinette team website already)
Ah, and @Meven Lennon-Bertrand is currently porting the framework from Agda to Coq
On ZF vs ZFC, the paper says:
ZFC is known to have the same proof-theoretic strength as ZF.
and
It has been shown by Aczel [1], however, that constructive ZF (CZF), without the axiom of the excluded middle, can be encoded in CiC, and so the proof-theoretic strength of CZF is no more than that of CiC. Also, if we add excluded middle for each sort s (EM_s) to CiC, then the work of Werner [19] shows that ZF can be encoded, where excluded middle for sort s has the following type [...] In fact, Werner shows that ZF with i inaccessible cardinals (ZF_i) can be encoded in CiC with EM_{Type_{i+2}}.
They might have the same strength, but inlining this fact into uses of classical logic in the SN proof is not going to be nice (if it's even possible, maybe the translation is too global to make that possible)
Hi! As Yannick said, I have been thinking quite a bit about the ways to prove normalization of type theory in type theory recently.
In the case of predicative type theories, I think that it sounds very reasonable. A small modification of Abel et al.'s proof is enough to show head normalization for a predicative theory with eta, natural numbers and n universes in Agda with n+4 universes (A formal proof can be found here https://github.com/loic-p/logrel-mltt/ ).
There are two small caveats:
Note that the n+4 is probably not optimal, I was not very careful. There is ongoing work with Meven Bertrand and Arthur Adjedj to have a cleaner version in Coq.
The case of theories with impredicative sorts is less nice, as far as I know.
I did not know about the existence of the paper by Edwin Westbrook that Jason Gross mentioned, though! After skimming it, it seems that this paper does not handle η conversion, and I am a bit puzzled by the claim that it validates proof irrelevance. Also, as far as I know this kind of reducibility model is quite hard to reproduce in type theory, if you do not assume supplementary axioms such as UIP.
the proof uses explicit manipulation of universe levels, which is not legal in Coq.
What manipulation is hard to do? Coq supports some basic manipulation of any finite fixed number of universe levels (e.g., I can write `Definition foo : Type@{Set+1} := Set.`), and if the universe-level manipulation can be factored out of the proof, then it should be possible to emulate much of Agda's universe-level manipulation by quoting the proof (that is, instead of proving `∀ (ℓ : Level), P ℓ`, write `qP : nat -> Ast.term` such that `qP ℓ` is the quotation of `P ℓ`, and prove `∀ (ℓ : nat), { p & Σ ;;; Γ ⊢ p : qP ℓ }`).
What manipulation is hard to do?
There are several definitions whose universe level is computed from an inductive parameter, e.g. something with type `(b : Bool) → Set (if b then ℓ0 else ℓ1)`. But in the case of that specific proof, I have absolutely no doubt that it can be factored out, with possibly a bit of code duplication.
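Concretely, the duplication workaround might look like this in Coq (illustrative sketch; the names `P_true`/`P_false` are invented):

```coq
(* The Agda type  (b : Bool) → Set (if b then ℓ0 else ℓ1)  has no direct
   Coq counterpart, since a universe level cannot depend on a term.
   The usual workaround duplicates the definition, one copy per level: *)
Universe l0.
Universe l1.

Definition P_true  : Type@{l0+1} := Type@{l0}.
Definition P_false : Type@{l1+1} := Type@{l1}.

(* A caller then dispatches on the boolean at the meta level,
   choosing P_true or P_false instead of computing the level. *)
```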
(Or some clever tactics/meta-programming!)
Side remark: While your work proves SN for a type theory with n universes in Agda with n+4 universes, it would be extremely interesting (but hard) to turn this into a proof of SN for a type theory with n universes in _the same type theory_ with n+4 universes
I.e., push the SN proof to the object level. This would in particular allow treating arbitrary inductives - but as I said, hard and technical
Agreed. I kind of want to experiment with this in the Coq version!
I seem to recall that Edwin's proof was not entirely complete (I don't think it was ever published).
Yannick Forster said:
I.e., push the SN proof to the object level. This would in particular allow treating arbitrary inductives - but as I said, hard and technical
For MetaCoq, this would require at least the following two theorems:
```coq
From MetaCoq.Template Require Import config utils All.

Theorem reduction_quotation `{checker_flags} (Σ : global_env_ext) t t' :
  red Σ [] t t' -> { p & Σ ;;; nil |- p : <% red Σ [] t t' %> }.
Admitted.

Theorem typing_quotation `{checker_flags} Σ t A :
  Σ ;;; nil |- t : A -> { p & Σ ;;; nil |- p : <% Σ ;;; nil |- t : A %> }.
Admitted.
```
@Jason Gross do you see a path to get them based on what you have done the last weeks?
@Yannick Forster the first projection of `typing_quotation` is done here. It relies on one admit (still waiting to get help on how to decide `consistent_extension_on` / how to modify things to bring the existing semidecider in line with the spec, see this code and this thread). This is all in Template, but it should be relatively easy to port to PCUIC once #790, #789, and eventually #776 are merged.
The second projection is going to be trickier. There are a few obstacles:
- Sigma will need to have enough stuff in it for quotation. Most of the stuff is unobjectionable, but I'm pretty concerned about the axioms that are used in MetaCoq in conjunction with all the assumptions about axiom-freeness. For sure we can't be assuming SN of the complete theory at both the object level and the meta level. This is why I want to turn SN into a typeclass. Furthermore, my next planned change (after the SN typeclass and the branch currently in the works that allows permuting Sigma) is to allow fueling the safe checker in a way that allows bypassing the SN assumption on concrete terms without fully normalizing the typing proof, so that I can just safecheck the quotation of various ground subterms. I'd also like to see proof irrelevance/funext disappear from the various equality deciders in MetaCoq. Finally (wrt axioms), any theorems we need that assume axiom-freeness will need to be extended to handle the axioms used in retroknowledge / the implementation of primitive integers, arrays, and floats, since all the theorems seem to depend on those axioms.
- I plan to look into finding a way to unsquash typing judgments without needing to fully normalize them.
- Proving typing for quoting recursive types (starting at `nat` and `list`) is going to be a pain. I don't yet have a plan for doing this systematically in a way that doesn't crawl to a halt. Still turning this over in the back of my head. Certainly doable in principle, though. Maybe I can implement a TemplateMonad bidi typechecker that uses the safechecker to fill in the proofs on ground subterms?
Btw, is there a translation that inlines a specified subset of constants in the global environment? (I'd like to be able to use helper definitions without needing to demand that their quotation be present in Sigma)
I've been thinking about this stuff too, so here is my current take on it.
1) As others said, we are currently porting (Loïc's version of) the proof of normalization of type theory in type theory by Abel in Agda. This is still ongoing, and there are subtleties in removing all the fancy things that Agda lets you do, but I have little doubt that we can have a first version out soon, for the simplest fragment you can think of (λΠ, one universe).
2) Once this is done, we can try to bridge the gap with MetaCoq, and prove the normalization axiom for a small fragment of the theory, getting a convincing argument that the axiom is at least "reasonable". This is non-trivial though, typically because currently MetaCoq relies fundamentally on its conversion being untyped, while the proof of normalization is on a system with typed conversion. While these look quite close, showing the equivalence of both is a tricky question (in the case of PTS already, there was a whole string of papers on this in the early 2000s by Adams, Silès and Herbelin). I think the best solution here would be to try and move MetaCoq's specification to typed conversion, which would incidentally make it much easier to treat η-laws. But there is interesting type-theoretic research on how to do typed parallel reduction at the scale of a system like PCUIC to be done first.
3) If we are able to do 2), the question is of course how we go further, and scale the proof of normalization from a toy system to PCUIC. I don't think that trying to scale the proof to the whole of PCUIC is a good idea, because PCUIC is so complicated. Rather, we should be trying to compile PCUIC to a simpler fragment, typically removing fix+match in favour of eliminators, encoding away complicated inductive types into a finite set of simpler ones, maybe simplifying the universes… So that at some point we have reduced down to a theory simple enough that we can manage a proof of normalization there. The nice part about this is that it is largely independent of a normalization proof. This means we can start working on it even if 2) is still WIP, and this can help tackle interesting problems, such as giving a specification to the guard condition or positivity criterion for inductive types. Basically, we can independently reduce the trusted axiom of normalization, and attempt more and more ambitious normalization proofs, and hopefully at some point they can meet in the middle…
If MetaCoq moves to typed conversion, but Coq sticks to untyped, how would you bridge the difference?
Nothing says the specification has to be untyped. It probably should be typed actually. The algorithm should remain untyped however.
Yes exactly: the algorithm as is is perfectly fine, and should stay the same. Only the specification could/should change
The algorithm must be incomplete then? (At least on eta for nullary records.) And the soundness proof needs to bridge untyped vs typed?
The specification can just forbid eta on nullary records, right? Just because we change the specification to use types doesn't mean that we're forced into supporting everything in the spec that a typed conversion algorithm could support
Yeah I would say the hope is to remain complete but to delimit what we achieve in a clearer / more principled way.
Last updated: Dec 07 2023 at 09:01 UTC